idx | question | answer |
|---|---|---|
30,201 | How to interpret ARIMA(0,1,0)? | ARIMA(0,1,0) is a random walk.
It is a cumulative sum of an i.i.d. process, which itself is known as ARIMA(0,0,0). |
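(Editor's note: the "cumulative sum of an i.i.d. process" description above can be checked with a short simulation. Python/NumPy is used here purely for illustration; the seed and sample size are arbitrary choices.)

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.normal(size=1000)   # i.i.d. noise: an ARIMA(0,0,0) process
x = np.cumsum(eps)            # its cumulative sum: a random walk, ARIMA(0,1,0)

# Differencing the walk once recovers the original i.i.d. noise.
assert np.allclose(np.diff(x), eps[1:])
```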
30,202 | How to interpret ARIMA(0,1,0)? | An ARIMA(0, 1, 0) series, when differenced once, becomes an ARMA(0, 0), which is random, uncorrelated noise.
If $X_1, X_2, X_3, \ldots$ are the random variables in the series, this means that
$$X_{i+1} - X_{i} = \epsilon_{i + 1}$$
where $\epsilon_1, \epsilon_2, \ldots$ is a sequence of centered, uncorrelated random variables.
Rearranging,
$$ X_{i+1} = X_i + \epsilon_{i+1} $$
reveals that we have a random walk. |
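(Editor's note: the claim that the once-differenced series is uncorrelated noise can be checked empirically. Python/NumPy is used here for illustration only; the ARIMA(0,1,0) series is simulated directly as a cumulative sum, and the tolerance is a loose statistical bound.)

```python
import numpy as np

rng = np.random.default_rng(42)
eps = rng.normal(size=10_000)
x = np.cumsum(eps)                    # an ARIMA(0,1,0) realisation

d = np.diff(x)                        # differencing once gives back eps[1:]
lag1 = np.corrcoef(d[:-1], d[1:])[0, 1]
assert abs(lag1) < 0.05               # lag-1 autocorrelation ~ 0: uncorrelated noise
```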
30,203 | How to interpret ARIMA(0,1,0)? | You can simulate it in R quite easily:
plot(arima.sim(model = list(order = c(0, 1, 0)), n = 1000))
The resulting plot is reminiscent of a random walk. |
30,204 | What's the difference between "classifier" and "model" in classification? | I'm definitely no expert in the domain, so take my answer with a grain of salt, but from what I have understood you have:
Classifier: the algorithm, the core of your machine learning process. It can be an SVM, Naive Bayes, or even a neural network classifier. Basically, it's a big "set of rules" on how you want to classify your input.
Model: what you get once you have finished training your classifier; it's the resulting object of the training phase. You can see it as an "intelligent" black box to which you feed an input sample and it gives you a label as output.
Hope my answer is clear enough, but yes, the difference is rather subtle between the two terms. |
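(Editor's note: the classifier-vs-model distinction can be made concrete with a toy sketch. The `NearestCentroid` class below is invented for illustration and is not from the original answer.)

```python
# A tiny nearest-centroid "classifier": the class encodes the algorithm
# (the "set of rules"); a fitted instance is the "model".
class NearestCentroid:
    def fit(self, X, y):
        # Training produces the model's state: one centroid per class.
        self.centroids_ = {
            label: sum(x for x, lab in zip(X, y) if lab == label)
                   / sum(1 for lab in y if lab == label)
            for label in set(y)
        }
        return self

    def predict(self, X):
        return [min(self.centroids_, key=lambda c: abs(x - self.centroids_[c]))
                for x in X]

clf = NearestCentroid()                               # the classifier (untrained algorithm)
model = clf.fit([0.0, 1.0, 2.0, 3.0], [0, 0, 1, 1])   # the model (trained object)
print(model.predict([0.2, 2.8]))                      # -> [0, 1]
```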
30,205 | What's the difference between "classifier" and "model" in classification? | A classifier is a specific type of model, the output variable of which is discrete, often nominal. As pointed out by others, the terminology is loose. |
30,206 | What's the difference between "classifier" and "model" in classification? | I don't think there's a unified terminology here, but usually "classifier" refers to the algorithm that learns the classification rules, while the rules themselves are what people often call a model. Otherwise, people call the rules a classifier too, and the algorithms are also referred to as models. You can also refer to your modelling framework as a model in itself. |
30,207 | If $X$ and $Y$ are normally distributed random variables, what kind of distribution does their sum follow? | Regardless of whether $X$ and $Y$ are normal or not, it is true
(whenever the various expectations exist) that
\begin{align}
\mu_{X+Y} &= \mu_X + \mu_Y\\
\sigma_{X+Y}^2 &= \sigma_{X}^2 + \sigma_{Y}^2 + 2\operatorname{cov}(X,Y)
\end{align}
where $\operatorname{cov}(X,Y)=0$ whenever $X$ and $Y$ are independent or
uncorrelated. The only issue is whether $X+Y$ is normal or not,
and the answer is that $X+Y$ is normal when $X$ and $Y$
are jointly normal (including, as a special case, when $X$ and $Y$
are independent random variables). To forestall the inevitable
follow-up question:
No, $X$ and $Y$ being
uncorrelated normal random variables does not
suffice to assert normality of $X+Y$. If $X$ and $Y$ are
jointly normal, then they are also marginally normal.
If they are jointly normal as well as uncorrelated, then they are
marginally normal (as stated in the previous sentence) and they are independent as well. But, regardless of whether they are independent
or dependent, correlated or uncorrelated, the sum of
jointly normal random variables has a normal distribution with
mean and variance as given above.
In a comment following this answer, ssdecontrol raised another question:
is joint normality just a sufficient condition to assert normality of
$X+Y$, or is it necessary as well?
Is it possible to find marginally normal $X$ and $Y$ that are not jointly normal
but whose sum $X+Y$ is normal? This question was asked
in the comments below by Moderator Glen_b. It is indeed possible, and I have
given an example in an answer to this question.
Is it possible to find $X$ and $Y$ such that they are not
jointly normal but their sum $X+Y$ is normal? Here, we do not
insist on $X$ and $Y$ being marginally normal. The answer is yes,
and an example is given by kjetil b halvorsen. Another, perhaps
simpler, answer is as follows. Let $U$ and $V$ be independent
standard normal random variables and $W$ a discrete random variable
taking on each of the values $+1$ and $-1$ with probability $\frac 12$. Then
$X = U+W$ and $Y=V-W$ are not marginally normal (they have identical
Gaussian mixture density $\frac{\phi(t+1)+\phi(t-1)}{2}$), and so
are not jointly normal either. But their sum $X+Y = U+V$ is
a $N(0,2)$ random variable. |
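(Editor's note: the final counterexample — $X = U+W$, $Y = V-W$ with their sum $U+V \sim N(0,2)$ — can be verified by simulation. This Python/NumPy sketch is illustrative; the tolerances are loose statistical bounds, not exact values.)

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
u = rng.normal(size=n)
v = rng.normal(size=n)
w = rng.choice([-1.0, 1.0], size=n)   # W = +/-1, each with probability 1/2

x, y = u + w, v - w                   # marginals are two-component Gaussian mixtures
s = x + y                             # equals u + v, hence N(0, 2)

# The sum behaves like N(0, 2): mean ~ 0, variance ~ 2.
assert abs(s.mean()) < 0.02 and abs(s.var() - 2.0) < 0.05
# Each marginal has variance 2 but negative excess kurtosis
# (theoretical value -0.5), betraying the non-normal mixture.
kurt = ((x - x.mean()) ** 4).mean() / x.var() ** 2 - 3.0
assert kurt < -0.3
```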
30,208 | If $X$ and $Y$ are normally distributed random variables, what kind of distribution does their sum follow? | Let $Z = X + Y$. When $X$ and $Y$ are independent, you can prove the result using the moment-generating function:
$$\begin{aligned}
M_Z(t) &= E(\exp\{t(X+Y)\}) \\
&= E(\exp\{tX\}\exp\{tY\}) \\
&= M_X(t)\,M_Y(t) \\
&= \exp\Big\{ (\mu_X + \mu_Y)t + \frac{\sigma^2_X + \sigma^2_Y}{2}t^2 \Big\}
\end{aligned}$$
Then $Z \sim N( \mu_X + \mu_Y,\ \sigma^2_X + \sigma^2_Y )$. |
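(Editor's note: a quick Monte Carlo check of the conclusion $Z \sim N(\mu_X+\mu_Y,\ \sigma^2_X+\sigma^2_Y)$ for independent normals; the parameter values below are arbitrary and the tolerances are loose statistical bounds.)

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
x = rng.normal(loc=1.0, scale=2.0, size=n)    # mu_X = 1,  sigma_X^2 = 4
y = rng.normal(loc=-3.0, scale=1.0, size=n)   # mu_Y = -3, sigma_Y^2 = 1
z = x + y

assert abs(z.mean() - (-2.0)) < 0.02          # mu_X + mu_Y = -2
assert abs(z.var() - 5.0) < 0.05              # sigma_X^2 + sigma_Y^2 = 5
```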
30,209 | Do I need to remove duplicate objects for cluster analysis of objects? | It changes the results. With k-means this should be straightforward to see: the mean of 0, 0 and 1 is different from the mean of 0 and 1. Usually this will also be the case for hierarchical clustering, but it depends on the linkage criterion. For example, complete linkage shouldn't be affected.
Speaking generally, I would argue for leaving the duplicates in. Having duplicates indicates that those are particularly likely combinations of variable values, which should get a higher weight because of that. This means observations with the same values do not become redundant.
Do you really have performance problems with these two algorithms? |
30,210 | Do I need to remove duplicate objects for cluster analysis of objects? | If you remove duplicates, you need to add weights to your data instead, otherwise the result may change (except for single-linkage clustering, I guess).
If your data set has few duplicates, merging them will likely cost you some runtime.
If your data set has lots of duplicates, it can accelerate the processing a lot to merge them and use weights instead. If you have on average 10 duplicates of each object, and an algorithm with quadratic runtime, the speedup can be 100-fold. That is substantial, and well worth the effort to merge duplicates. |
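(Editor's note: the effect described in both answers — duplicates move a centroid, and weights reproduce the same centroid after deduplication — fits in a few lines of plain Python, written here for illustration.)

```python
# Duplicates shift a k-means centroid; weights reproduce the same effect.
def centroid(points, weights=None):
    weights = weights or [1] * len(points)
    return sum(w * p for w, p in zip(weights, points)) / sum(weights)

with_dups = centroid([0.0, 0.0, 1.0])               # 1/3
deduped   = centroid([0.0, 1.0])                    # 1/2: the result changed
weighted  = centroid([0.0, 1.0], weights=[2, 1])    # 1/3: weights restore it

assert with_dups != deduped
assert abs(with_dups - weighted) < 1e-12
```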
30,211 | Height of a Normal distribution curve | The height of the mode in a normal density is $\frac{1}{\sqrt{2\pi}\sigma}\approx \frac{.3989}{\sigma}$ (or roughly $0.4/\sigma$). You can see this by substituting the mode (which is also the mean, $\mu$) for $x$ in the formula for a normal density.
So there's no single "ideal height" -- it depends on the standard deviation.
Indeed, the same thing can be seen from the Wikipedia diagram you linked to -- it shows four different normal densities, and only one of them has a height near 0.4.
A normal distribution with mean 0 and standard deviation 1 is called a 'standard normal distribution'. |
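(Editor's note: a small Python check of the $\frac{1}{\sqrt{2\pi}\sigma}$ formula, evaluating the normal density at its mode; written for illustration.)

```python
import math

def normal_pdf(x, mu=0.0, sigma=1.0):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (math.sqrt(2 * math.pi) * sigma)

# Height at the mode (x = mu) is 1 / (sqrt(2*pi) * sigma) ~ 0.3989 / sigma.
assert abs(normal_pdf(0.0) - 0.3989422804014327) < 1e-12
# Doubling sigma halves the peak height, whatever the mean.
assert abs(normal_pdf(5.0, mu=5.0, sigma=2.0) - 0.3989422804014327 / 2) < 1e-12
```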
30,212 | Is Maximum Likelihood Estimation (MLE) a parametric approach? | Usually, maximum likelihood is used in a parametric context. But the same principle can be used nonparametrically. For example, if you have data consisting of observations from a continuous random variable $X$, say observations $x_1, x_2, \dots, x_n$, and the model is unrestricted, that is, just saying the data come from a distribution with cumulative distribution function $F$, then the empirical distribution function
$$
\hat{F}_n(x) = \frac{\text{number of observations $x_i$ with $x_i \le x$}}{n}
$$
is the non-parametric maximum likelihood estimator.
This is related to bootstrapping. In bootstrapping, we repeatedly sample with replacement from the original sample $X_1,X_2, \dots, X_n$. That is exactly the same as taking an iid sample from $\hat{F}_n$ defined above. In that way, bootstrapping can be seen as nonparametric maximum likelihood.
EDIT (answer to question in comments by @Martijn Weterings)
If the model is $X_1, X_2, \dotsc, X_n$ IID from some distribution with cdf $F$, without any restrictions on $F$, then one can show that $\hat{F}_n(x)$ is the mle (maximum likelihood estimator) of $F(x)$. That is done in "What inferential method produces the empirical CDF?" so I will not repeat it here. Now, if $\theta$ is a real parameter describing some aspect of $F$, it can be written as a function $\theta(F)$. This is called a functional parameter. Some examples are
$$ \DeclareMathOperator{\E}{\mathbb{E}}
\E_F X=\int x \; dF(x)\quad (\text{The Stieltjes Integral}) \\
\text{median}_F X = F^{-1}(0.5)
$$ and many others. The parameter space is
$$\Theta =\left\{ F \colon \text{$F$ is a distribution function on the real line } \right\}$$
By the invariance property (see "Invariance property of maximum likelihood estimator?") we then find mle's by
$$
\widehat{\E_F X} = \int x \; d\hat{F}_n(x) \\
\widehat{\text{median}_F X}= \hat{F}_n^{-1}(0.5).
$$
It should be clearer now. We don't (as you ask about) use the empirical distribution function to define the likelihood; the likelihood function is completely nonparametric, and $\hat{F}_n$ is the mle. The bootstrap is then used to describe the variability/uncertainty in mle's of the $\theta(F)$'s of interest by resampling (which is simple random sampling from $\hat{F}_n$).
EDIT In the comment thread many seem to disbelieve this result (which really is a standard one!). So let me try to make it clearer. The likelihood function is nonparametric; the parameter is $F$, the unknown cumulative distribution function. For a given cutoff point $x$ in $\mathbb{R}$, a function of the parameter is $\DeclareMathOperator{\P}{\mathbb{P}} x(F)=F(x)=\P(X \le x)$. A corresponding transformation of the random variable $X$ is $I_x=\mathbb{I}(X\le x)$, which is a Bernoulli random variable with parameter $x(F)$. The maximum likelihood estimate of $x(F)$ based on the sample $I_x(X_1), \dotsc, I_x(X_n)$ is the usual fraction of $X_i$'s that are less than or equal to $x$, and the empirical cumulative distribution function expresses this simultaneously for all $x$. Hope this is clearer now! |
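(Editor's note: a plain-Python sketch of the empirical CDF $\hat{F}_n$ and of bootstrapping as i.i.d. sampling from it; the data values are made up for illustration.)

```python
import random

def ecdf(sample):
    """Empirical CDF F_n: the nonparametric MLE of F."""
    s = sorted(sample)
    n = len(s)
    return lambda x: sum(v <= x for v in s) / n

data = [2.1, 0.4, 3.3, 1.7, 0.9]
F_n = ecdf(data)
assert F_n(1.7) == 3 / 5 and F_n(-1.0) == 0.0 and F_n(10.0) == 1.0

# Bootstrapping = i.i.d. sampling from F_n, i.e. resampling the data
# with replacement.
random.seed(0)
boot = [random.choice(data) for _ in range(len(data))]
assert all(b in data for b in boot)
```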
30,213 | Is Maximum Likelihood Estimation (MLE) a parametric approach? | It is applied to both parametric and nonparametric models.
Parametric example. Let $x_1,\dots,x_n$ be an independent sample from an $Exp(\lambda)$ distribution. We can find the MLE of the parameter $\lambda$ by maximising the corresponding likelihood function.
Nonparametric example. Maximum likelihood density estimation. In this recent paper you can find an example of a maximum likelihood estimator of a multivariate density. This can be considered a nonparametric problem, which incidentally represents an interesting alternative to the KDE mentioned in your question. |
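(Editor's note: for the parametric example, the $Exp(\lambda)$ MLE has the closed form $\hat\lambda = 1/\bar{x}$; the simulation below checks it. Python/NumPy is used for illustration, the true rate is arbitrary, and note that NumPy's `exponential` is parameterised by the scale $1/\lambda$.)

```python
import numpy as np

rng = np.random.default_rng(3)
lam = 2.5                                      # true rate
x = rng.exponential(scale=1 / lam, size=100_000)

lam_hat = 1 / x.mean()                         # closed-form MLE for the rate
assert abs(lam_hat - lam) < 0.05               # loose statistical tolerance
```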
30,214 | Is Maximum Likelihood Estimation (MLE) a parametric approach? | Nonparametric maximum likelihood estimates exist only if you impose special constraints on the class of allowed densities. Suppose that you have a random sample $x_1,\dots,x_n$ from some density $f$ with respect to Lebesgue measure. In the nonparametric setting, the likelihood is a functional which for each density $f$ outputs a real number
$$
L_x[f] = \prod_{i=1}^n f(x_i) \, .
$$
If you are allowed to choose any density $f$, then for $\epsilon>0$ you can pick
$$
f_\epsilon(t) = \frac{1}{n}\sum_{i=1}^n \frac{e^{-(t-x_i)^2/2\epsilon^2}}{\sqrt{2\pi}\epsilon} \,.
$$
But then, because
$$
L_x[f_\epsilon] \geq \frac{1}{\left(n\sqrt{2\pi}\epsilon\right)^n} \, ,
$$
by making $\epsilon$ small you can make $L_x[f_\epsilon]$ grow unboundedly. Hence, there is no density $f$ which is the maximum likelihood estimate. Grenander proposed the method of sieves, in which we make the class of allowed densities grow with the sample size, as a remedy to this aspect of nonparametric maximum likelihood. Exaggerating a little bit, we may say that this property of nonparametric maximum likelihood is "the mother of all overfitting" in machine learning, but I digress. |
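(Editor's note: the unboundedness argument can be seen numerically. The function below evaluates the log-likelihood of the data under $f_\epsilon$, the equal-weight Gaussian mixture centred at the data points; shrinking $\epsilon$ makes it blow up. The data values are made up for illustration.)

```python
import math

def log_lik_kde(data, eps):
    """Log-likelihood of the data under the equal-weight Gaussian mixture
    f_eps centred at the data points themselves (bandwidth eps)."""
    n = len(data)
    total = 0.0
    for t in data:
        dens = sum(math.exp(-(t - xi) ** 2 / (2 * eps ** 2))
                   / (math.sqrt(2 * math.pi) * eps) for xi in data) / n
        total += math.log(dens)
    return total

data = [0.3, 1.1, 2.7, 4.0]
# Shrinking eps makes the (log-)likelihood grow without bound:
# no nonparametric MLE exists over the class of all densities.
assert log_lik_kde(data, 0.01) > log_lik_kde(data, 0.1) > log_lik_kde(data, 1.0)
```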
30,215 | Is Maximum Likelihood Estimation (MLE) a parametric approach? | Not necessarily. You can use maximum likelihood to fit nonparametric models such as infinite mixture model. (Definition of "nonparametric model" is not always clear-cut though.) |
Why do we use the determinant of the covariance matrix when using the multivariate normal?
Instead of jumping to the multivariate case in matrix form, look at the bivariate case first:
Can you recognize the portion of the denominator that is the determinant of the variance-covariance matrix below?
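For reference, the standard bivariate normal density being pointed at here is
$$
f(x,y)=\frac{1}{2\pi\sigma_x\sigma_y\sqrt{1-\rho^2}}\exp\!\left(-\frac{1}{2(1-\rho^2)}\left[\frac{(x-\mu_x)^2}{\sigma_x^2}-\frac{2\rho(x-\mu_x)(y-\mu_y)}{\sigma_x\sigma_y}+\frac{(y-\mu_y)^2}{\sigma_y^2}\right]\right).
$$
Since the covariance matrix is $\Sigma=\begin{pmatrix}\sigma_x^2 & \rho\sigma_x\sigma_y\\ \rho\sigma_x\sigma_y & \sigma_y^2\end{pmatrix}$, the factor $\sigma_x^2\sigma_y^2(1-\rho^2)$ hiding under the square root in the normalizing constant is exactly $\det(\Sigma)$, so the constant is $1/(2\pi\sqrt{\det\Sigma})$.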
In the univariate case you don't have a determinant because $\Sigma$ consists of just one term. You don't have another variable, so you don't need to take into account any interaction between them.
Why do we use the determinant of the covariance matrix when using the multivariate normal?
Here is another practical way to feel more confident about the vector and matrix notation of the multivariate normal. How did the transformation to the multivariate case work, and how does it generate $\Sigma_\mathcal{E}$ and $\mathbf{(y-\mu)}'(\Sigma_\mathcal{E}^{-1})(\mathbf{y-\mu})$ in the new joint density? We will show this briefly.
Let's say we have a random vector $\mathbf{y}$ with three variables, as in
$$\mathbf{y}=\begin{pmatrix} y_1 \\ y_2 \\ y_3\end{pmatrix}, \mathbb{E}[\mathbf{y}]=\begin{pmatrix} \mu_1 \\ \mu_2 \\ \mu_3\end{pmatrix}, \Sigma_\mathcal{E}= \begin{pmatrix}
\sigma_1^2 & 0 & 0 \\
0 & \sigma_2^2 & 0 \\
0 & 0 & \sigma_3^2
\end{pmatrix}$$
When these random values appear we observe them simultaneously, and therefore the joint probability of observing particular values of this random vector is equal to
$$P(y_1)\cdot P(y_2)\cdot P(y_3)$$
where $P(\cdot)$ represents the (marginal) probability distribution of each random variable. Since we reason from mutually independent normal random variables, we implicitly assume independence between the variables. This is crucial, because only then can we simply multiply the marginal probabilities to arrive at joint probabilities. The logic is much the same as throwing two sixes with two dice at once: under independence the probability is $(1/6)(1/6)\approx 2.8\%$. For our values of $y_1,y_2$ and $y_3$ we can apply the univariate normal PDF, that is
$$P(y_1)=\dfrac{1}{\sqrt{2\pi}\sigma_1}\exp\left(-\dfrac{(y_1-\mu_1)^2}{2\sigma_1^2}\right) $$
$$P(y_2)=\dfrac{1}{\sqrt{2\pi}\sigma_2}\exp\left(-\dfrac{(y_2-\mu_2)^2}{2\sigma_2^2}\right) $$
$$P(y_3)=\dfrac{1}{\sqrt{2\pi}\sigma_3}\exp\left(-\dfrac{(y_3-\mu_3)^2}{2\sigma_3^2}\right) $$
And hence if we multiply we get
$$P(y_1)\cdot P(y_2)\cdot P(y_3)=\left(\dfrac{1}{\sqrt{2\pi}\sigma_1}\right)\left(\dfrac{1}{\sqrt{2\pi}\sigma_2}\right)\left(\dfrac{1}{\sqrt{2\pi}\sigma_3}\right)\exp\left(-\dfrac{(y_1-\mu_1)^2}{2\sigma_1^2}-\dfrac{(y_2-\mu_2)^2}{2\sigma_2^2}-\dfrac{(y_3-\mu_3)^2}{2\sigma_3^2}\right)$$
$$=\left(\dfrac{1}{\sqrt{2\pi}^3\sigma_1\sigma_2 \sigma_3}\right)\exp\left(-\dfrac{1}{2}\left(\dfrac{(y_1-\mu_1)^2}{\sigma_1^2}+\dfrac{(y_2-\mu_2)^2}{\sigma_2^2}+\dfrac{(y_3-\mu_3)^2}{\sigma_3^2}\right)\right)$$
Now notice that
$$\det(\Sigma_\mathcal{E})=\sigma_1^2\left| {\begin{array}{cc}
\sigma_2^2 & 0 \\
0 & \sigma_3^2
\end{array} } \right|=\sigma_1^2 \sigma_2^2 \sigma_3^2 $$
and notice that
$$\mathbf{(y-\mu)}'(\Sigma_\mathcal{E}^{-1})(\mathbf{y-\mu}) = \\= \begin{pmatrix}
y_1-\mu_1 & y_2-\mu_2 & y_3-\mu_3 \\\end{pmatrix} \begin{pmatrix}
\sigma_1^2 & 0 & 0 \\
0 & \sigma_2^2 & 0 \\
0 & 0 & \sigma_3^2
\end{pmatrix}^{-1}\begin{pmatrix}
y_1-\mu_1 \\
y_2-\mu_2 \\
y_3-\mu_3 \\\end{pmatrix} $$
$$=\begin{pmatrix}
y_1-\mu_1 & y_2-\mu_2 & y_3-\mu_3 \\\end{pmatrix} \begin{pmatrix}
1/\sigma_1^2 & 0 & 0 \\
0 & 1/\sigma_2^2 & 0 \\
0 & 0 & 1/\sigma_3^2
\end{pmatrix}\begin{pmatrix}
y_1-\mu_1 \\
y_2-\mu_2 \\
y_3-\mu_3 \\\end{pmatrix} $$
$$ = \begin{pmatrix}
(y_1-\mu_1)/\sigma_1^2 & (y_2-\mu_2)/\sigma_2^2 & (y_3-\mu_3)/\sigma_3^2 \\\end{pmatrix} \begin{pmatrix}
y_1-\mu_1 \\
y_2-\mu_2 \\
y_3-\mu_3 \\\end{pmatrix}$$
$$= \left(\dfrac{(y_1-\mu_1)^2}{\sigma_1^2}+\dfrac{(y_2-\mu_2)^2}{\sigma_2^2}+\dfrac{(y_3-\mu_3)^2}{\sigma_3^2}\right)$$
Therefore we can write the multivariate normal (joint distribution) as
$$\dfrac{1}{\sqrt{(2\pi)^k\det(\Sigma_\mathcal{E})}}\exp\left(-\dfrac{\mathbf{(y-\mu)}'(\Sigma_\mathcal{E}^{-1})(\mathbf{y-\mu})}{2}\right)$$
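This factorization is easy to verify numerically. Below is a minimal sketch in Python/NumPy (rather than the R used elsewhere in this document); the means, standard deviations and evaluation point are arbitrary:

```python
import numpy as np

mu    = np.array([1.0, -2.0, 0.5])
sigma = np.array([0.5, 1.5, 2.0])           # standard deviations sigma_1..sigma_3
Sigma = np.diag(sigma**2)                   # the diagonal covariance matrix
y     = np.array([1.3, -1.1, 2.0])          # an arbitrary evaluation point

# product of the three univariate normal densities P(y_1) P(y_2) P(y_3)
marginals = np.exp(-0.5 * ((y - mu) / sigma)**2) / (np.sqrt(2.0 * np.pi) * sigma)
product = marginals.prod()

# the multivariate density written with det(Sigma) and the quadratic form
k = len(y)
quad = (y - mu) @ np.linalg.inv(Sigma) @ (y - mu)
joint = np.exp(-0.5 * quad) / np.sqrt((2.0 * np.pi)**k * np.linalg.det(Sigma))
```

The same check also confirms $\det(\Sigma_\mathcal{E})=\sigma_1^2\sigma_2^2\sigma_3^2$ in the diagonal case.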
Why do we use the determinant of the covariance matrix when using the multivariate normal?
Simply put, the determinant really is a Jacobian determinant from a transformation.
See, $\sigma$ is outside the exponential because it is a location scale family. If $y\sim N(\mu,\sigma)$ and $x\sim N(0,1)$, then
$$
f_y(y) = \dfrac{1}{\sigma}f_x(\dfrac{y-\mu}{\sigma}).
$$
This is because the Jacobian of the transformation $x = \dfrac{y-\mu}{\sigma}$ is $\dfrac{1}{\sigma}$.
One way to generalize "location scale" to a multivariate context is through elliptically contoured distributions. If $y\sim N_m(\mu,\Sigma)$ and $x\sim N_m(0,I)$, then
$$
f_y(y) = |\Sigma|^{-1/2}\,f_x\!\left(\Sigma^{-1/2}(y-\mu)\right).
$$
In the latter case, the Jacobian of the transformation $x = \Sigma^{-1/2}(y-\mu)$ is the matrix $\Sigma^{-1/2}$, and to finish the transformation we need its determinant, $|\Sigma|^{-1/2}$.
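In one dimension the role of the Jacobian factor can be sanity-checked numerically: without the $1/\sigma$, the transformed curve no longer integrates to $1$. A minimal sketch in Python/NumPy (rather than R; the $\mu$, $\sigma$ and grid are arbitrary):

```python
import numpy as np

def std_normal_pdf(t):
    """Density of x ~ N(0, 1)."""
    return np.exp(-0.5 * t**2) / np.sqrt(2.0 * np.pi)

mu, sigma = 2.0, 3.0
y = np.linspace(mu - 5.0 * sigma, mu + 5.0 * sigma, 3001)
dy = y[1] - y[0]

# location-scale identity: f_y(y) = (1/sigma) * f_x((y - mu) / sigma)
f_y = std_normal_pdf((y - mu) / sigma) / sigma

area_with_jacobian = np.sum(f_y) * dy                         # ~ 1
area_without = np.sum(std_normal_pdf((y - mu) / sigma)) * dy  # ~ sigma, not 1
```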
How to fit data that looks like a gaussian? [duplicate]
There's a difference between fitting a gaussian distribution and fitting a gaussian density curve. What normalmixEM is doing is the former. What you want is (I guess) the latter.
Fitting a distribution is, roughly speaking, what you'd do if you made a histogram of your data, and tried to see what sort of shape it had. What you're doing, instead, is simply plotting a curve. That curve happens to have a hump in the middle, like what you get by plotting a gaussian density function.
To get what you want, you can use something like optim to fit the curve to your data. The following code will use nonlinear least-squares to find the three parameters giving the best-fitting gaussian curve: m is the gaussian mean, sd is the standard deviation, and k is an arbitrary scaling parameter (since the gaussian density is constrained to integrate to 1, whereas your data isn't).
x <- seq_along(r)
f <- function(par)
{
m <- par[1]
sd <- par[2]
k <- par[3]
rhat <- k * exp(-0.5 * ((x - m)/sd)^2)
sum((r - rhat)^2)
}
optim(c(15, 2, 1), f, method="BFGS", control=list(reltol=1e-9))
How to fit data that looks like a gaussian? [duplicate]
I propose to use non-linear least squares for this analysis.
# First present the data in a data-frame
tab <- data.frame(x=seq_along(r), r=r)
#Apply function nls
(res <- nls( r ~ k*exp(-1/2*(x-mu)^2/sigma^2), start=c(mu=15,sigma=5,k=1) , data = tab))
And from the output, I was able to obtain the following fitted "Gaussian curve":
v <- summary(res)$parameters[,"Estimate"]
plot(r~x, data=tab)
plot(function(x) v[3]*exp(-1/2*(x-v[1])^2/v[2]^2),col=2,add=T,xlim=range(tab$x) )
The fit is not amazing... Wouldn't a $x \mapsto \sin(x) / x$ function be a better model?
How to do regression with known correlations among the errors?
In general, the Gauss-Markov theorem gives that generalized least squares:
$$\hat{\beta}_{GLS} = \left( X^T D^{-1} X\right)^{-1} \left( X^T D^{-1} Y\right)$$
is the BLUE (best linear unbiased estimator) of $\beta$. See Seber and Lee (1973).
Note that even when $D$ is diagonal, it may not be proportional to $I$ by a factor of $\sigma^2$ (heteroscedastic errors), so the resulting estimator gives inverse-variance-weighted least squares for more efficient inference and estimation.
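A minimal numeric sketch of the GLS formula in Python/NumPy (rather than R; the simulated design, coefficients and error variances are arbitrary), also checking that GLS collapses to OLS when $D \propto I$:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])
beta = np.array([1.0, 2.0, -0.5])

d = rng.uniform(0.5, 4.0, size=n)       # known error variances (diagonal of D)
y = X @ beta + rng.normal(size=n) * np.sqrt(d)

# beta_GLS = (X' D^{-1} X)^{-1} (X' D^{-1} y)
D_inv = np.diag(1.0 / d)
beta_gls = np.linalg.solve(X.T @ D_inv @ X, X.T @ D_inv @ y)

# with D proportional to I the same formula is just OLS
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)
beta_lstsq = np.linalg.lstsq(X, y, rcond=None)[0]
```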
How to do regression with known correlations among the errors?
I had to write a proof that $\hat{\beta}_{GLS}$ is BLUE and the Seber and Lee reference recommended by @AdamO helped me a lot.
However, I did not find an openly available version of the proof online. So I figured that, as I had already done the work of typing it in LaTeX, I might as well copy it here for those who have a hard time accessing Seber and Lee. The proof follows Seber and Lee closely. Please feel free to mention any typos or flaws in the argument.
Proof
We would like to find a transform of the model that would lead us back to the homoskedastic case where we know that the simple version of Gauss-Markov applies (i.e. the Gauss-Markov theorem under homoskedasticity). Notice that because ${\rm Var}(e |X) = E(ee'|X) = D$, a diagonal matrix with positive diagonal entries, one can write $D = D^{1/2}D^{1/2}$, where $D^{1/2}$ is a diagonal matrix the diagonal elements of which are the square roots of the diagonal elements of $D$.
Suppose $D$ is invertible, that is $\sigma_i >0$ for all $i = 1, \dots, n$. Then $D^{1/2}$ also has an inverse that we denote $D^{-1/2} $. So assume we have a model with errors $\gamma = D^{-1/2} e$ for some linear transformations of the $X$ and $y$ observations, say $Z$ and $w$. We would then have
\begin{align*}
Var(\gamma |Z) & = E( \gamma \gamma'|Z) \\
&= E( D^{-1/2} e e' D^{-1/2} |Z) \\
&= D^{-1/2} E( e e' |Z) D^{-1/2} \\
&= D^{-1/2} D^{1/2}D^{1/2}D^{-1/2} \qquad ,\text{ as $Z$ is a linear transformation of $X$}\\
&= I
\end{align*}
and this model would be homoskedastic.
Notice that such configuration of the errors is provided by the model
\begin{align*}
\underbrace{D^{-1/2} y}_{:=w} = \underbrace{D^{-1/2}X}_{:= Z} \beta + \underbrace{D^{-1/2}e}_{:=\gamma}
\end{align*}
Also, applying the usual result for least-square estimation, we have that the value of $\beta$ which minimizes the squared errors in the transformed model is
\begin{align*}
\beta^* & = (Z'Z)^{-1} Z'w\\
&= ( X' D^{-1/2} D^{-1/2}X)^{-1} X' D^{-1/2} D^{-1/2} y \\
& = \underbrace{(X' D^{-1}X)^{-1} X' D^{-1}}_{:= A_0'} y \\
\end{align*}
the generalized least square estimator.
Because the transformed model is homoskedastic, $\beta^*$ is BLUE in the transformed model, by the simple Gauss-Markov theorem. So ${\rm Var}(\beta'|Z) - {\rm Var}(\beta^*|Z)$ is positive semi-definite for every $\beta'$ that is a linear unbiased estimator of the true parameter.
Now notice that
\begin{align*}
& {\rm Var}(\beta^*|Z) = {\rm Var}(A_0' y|Z) = A_0' {\rm Var}(e|Z) A_0\quad\quad = A_0' D A_0\\
& {\rm Var}( \beta' | Z) = {\rm Var}(A' w|Z)\ = A' {\rm Var}(D^{-1/2}e|Z) A = A' D^{-1/2} D D^{-1/2} A = A'A
\end{align*}
and by the simple Gauss-Markov theorem we have $\tilde{A}'\tilde{A} - A_0' D A_0$ is positive semi-definite for any $\tilde{A}$ such that $\tilde{A}'Z = I$.
Clearly, $A_0$ also yields an unbiased estimator in the original model, as $A_0' X = I$ (See Hansen, 4.6, freely available online).
So to finally get back to the original heteroskedastic model, take any $A$ such that $A'X = I$ and let $\tilde{A}' = A'D^{1/2}$.
Notice that $\tilde{A}' Z = A'D^{1/2} D^{-1/2} X = A'X = I$. So by Gauss-Markov-first-part again, $\tilde{A}'\tilde{A} - A_0' D A_0$ is positive semi-definite.
But $\tilde{A}'\tilde{A} = A'D^{1/2}D^{1/2} A = A'DA$.
So, wrapping up, we have $A_0' X = I$ and, for any $A$ such that $A'X = I$, that $A'DA - A_0' D A_0$ is positive semi-definite, which is the desired result.
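The conclusion, that the variance $A'DA$ of any linear unbiased estimator dominates the GLS variance, can be spot-checked numerically. A minimal sketch in Python/NumPy (rather than R; the simulated $X$ and diagonal $D$ are arbitrary), using the OLS choice $A' = (X'X)^{-1}X'$:

```python
import numpy as np

rng = np.random.default_rng(2)
n, p = 50, 3
X = rng.normal(size=(n, p))
d = rng.uniform(0.2, 5.0, size=n)   # diagonal of D, the error variances
D = np.diag(d)
D_inv = np.diag(1.0 / d)

var_gls = np.linalg.inv(X.T @ D_inv @ X)   # variance of the GLS estimator

A = X @ np.linalg.inv(X.T @ X)             # A' = (X'X)^{-1} X', so A'X = I
var_lin = A.T @ D @ A                      # variance of the unbiased estimator A'y

# the gap should be positive semi-definite (all eigenvalues >= 0)
gap_eigs = np.linalg.eigvalsh(var_lin - var_gls)
```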
How to do regression with known correlations among the errors?
As a practical extension of @AdamO's response: note that there are ways to find a "square root" of a matrix (the Cholesky decomposition is one example) such that $A'A=B$. If you find such a root matrix of the $D^{-1}$ matrix mentioned above and multiply it by your $X$ matrix and $Y$ vector to get a new matrix and vector, $X^* = AX$ and $Y^*=AY$, then plugging $X^*$ and $Y^*$ into the regular OLS regression equation makes it easy to derive the GLS formula given above. This means that if you have computer software that computes OLS regression, but not GLS regression, you can just find a root matrix, multiply it against your data, and feed the transformed variables to the OLS routine; it will give you the estimates for the GLS fit.
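A minimal numeric sketch of this whitening trick in Python/NumPy (rather than R; the simulated data and the way $D$ is built are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 100, 3
X = rng.normal(size=(n, p))
y = rng.normal(size=n)

# an arbitrary positive definite (and correlated) error covariance D
M = rng.normal(size=(n, n))
D = M @ M.T + n * np.eye(n)

# if D = L L' (Cholesky), then A = L^{-1} satisfies A'A = D^{-1}
L = np.linalg.cholesky(D)
A = np.linalg.inv(L)
X_star, y_star = A @ X, A @ y

beta_whitened = np.linalg.lstsq(X_star, y_star, rcond=None)[0]  # OLS on transformed data

D_inv = np.linalg.inv(D)
beta_gls = np.linalg.solve(X.T @ D_inv @ X, X.T @ D_inv @ y)    # direct GLS
```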
Regression through the origin
The Ordinary Least Squares estimate of the slope when the intercept is suppressed is:
$$
\hat{\beta}=\frac{\sum_{i=1}^N x_iy_i}{\sum_{i=1}^N x_i^2}
$$
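A quick numeric check of this closed form in Python/NumPy (rather than R; the simulated slope and noise are arbitrary), against a generic one-column least-squares solve:

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.uniform(1.0, 10.0, size=50)
y = 2.5 * x + rng.normal(scale=0.5, size=50)    # true slope 2.5, no intercept

beta_hat = np.sum(x * y) / np.sum(x * x)                       # closed form
beta_lstsq = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]  # generic solver
```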
Regression through the origin
@gung has given the OLS estimate. That's what you were seeking.
However, when dealing with physical quantities where the line must go through the origin, it's common for the scale of the error to vary with the x-values (to have, roughly, constant relative error). In that situation, ordinary unweighted least squares would be inappropriate.
In that situation, one approach (of several possibilities) would be to take logs, subtract the x's from the y's and estimate the log-slope (of the original variables) by the mean of the differences.
Alternatively, weighted least squares could be used. In the case of constant relative error, it would reduce to using the estimator $\hat{\beta}=\frac{1}{N}\sum_{i=1}^N \frac{y_i}{x_i}$ (the average of all the slopes through the origin).
There are other approaches (GLMs for example), but if you're doing it on a calculator, I'd lean toward my first suggestion.
You should also consider the appropriateness of any assumptions you make.
I thought it might be instructive to add the derivation of the WLS line through the origin; my "average of slopes" solution and gung's OLS solution are then special cases:
The model is $y_i=\beta x_i+\varepsilon_i\,,$ where $\text{Var}(\varepsilon_i)=\sigma^2/w_i$
We want to minimize $S = \sum_i w_i(y_i-\beta x_i)^2$
$\frac{\partial S}{\partial \beta} = -\sum_i 2 w_i x_i (y_i-\beta x_i)$
Setting equal to zero to obtain the LS solution $\hat{\beta}$ we obtain $\sum w_ix_iy_i = \hat{\beta} \sum w_ix_i^2$, or $\hat{\beta}=\frac{\sum w_ix_iy_i}{\sum w_ix_i^2}$.
When $w_i\propto 1$ for all $i$, this yields gung's OLS solution.
When $w_i \propto 1/x_i^2$ (which is optimum for the case where spread increases with mean), this yields the above "average of slopes" solution. | Regression through the origin | @gung has given the OLS estimate. That's what you were seeking.
However, when dealing with physical quantities where the line must go through the origin, it's common for the scale of the error to vary | Regression through the origin
@gung has given the OLS estimate. That's what you were seeking.
However, when dealing with physical quantities where the line must go through the origin, it's common for the scale of the error to vary with the x-values (to have, roughly, constant relative error). In that situation, ordinary unweighted least squares would be inappropriate.
In that situation, one approach (of several possibilities) would be to take logs, subtract the x's from the y's and estimate the log-slope (of the original variables) by the mean of the differences.
Alternatively, weighted least squares could be used. In the case of constant relative error, it would reduce to using the estimator $\hat{\beta}=\frac{1}{N}\sum_{i=1}^N \frac{y_i}{x_i}$ (the average of all the slopes through the origin).
There are other approaches (GLMs for example), but if you're doing it on a calculator, I'd lean toward my first suggestion.
You should also consider the appropriateness of any assumptions you make.
I thought it might be instructive to add the derivation of the WLS line through the origin and then my "average of slopes" and gungs OLS are special cases:
The model is $y_i=\beta x_i+\varepsilon_i\,,$ where $\text{Var}(\varepsilon_i)=w_i\sigma^2$
We want to minimize $S = \sum_i w_i(y_i-\beta x_i)^2$
$\frac{\partial S}{\partial \beta} = -\sum_i 2x_i.w_i(y_i-\beta x_i)$
Setting equal to zero to obtain the LS solution $\hat{\beta}$ we obtain $\sum w_ix_iy_i = \hat{\beta} \sum w_ix_i^2$, or $\hat{\beta}=\frac{\sum w_ix_iy_i}{\sum w_ix_i^2}$.
When $w_i\propto 1$ for all $i$, this yields gung's OLS solution.
When $w_i \propto 1/x_i^2$ (which is optimum for the case where spread increases with mean), this yields the above "average of slopes" solution. | Regression through the origin
@gung has given the OLS estimate. That's what you were seeking.
However, when dealing with physical quantities where the line must go through the origin, it's common for the scale of the error to vary |
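The three estimators derived above (OLS through the origin, the "average of slopes", and general WLS) can be compared numerically. A quick Python sketch, with all numbers illustrative, on simulated data whose error spread grows in proportion to x (roughly constant relative error):

```python
import random

# Simulated line through the origin with spread proportional to x.
random.seed(42)
beta_true = 2.0
n = 5000
x = [random.uniform(1.0, 10.0) for _ in range(n)]
y = [beta_true * xi + random.gauss(0.0, 0.3 * xi) for xi in x]

# OLS through the origin (w_i constant): sum(x*y) / sum(x^2)
beta_ols = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# "Average of slopes" (w_i proportional to 1/x_i^2): mean of y_i / x_i
beta_avg = sum(yi / xi for xi, yi in zip(x, y)) / n

# General WLS through the origin: sum(w*x*y) / sum(w*x^2)
def wls_origin(xs, ys, ws):
    num = sum(w * a * b for w, a, b in zip(ws, xs, ys))
    den = sum(w * a * a for w, a in zip(ws, xs))
    return num / den

beta_wls = wls_origin(x, y, [1.0 / xi ** 2 for xi in x])
```

With weights $w_i \propto 1/x_i^2$ the general WLS formula reduces, as shown in the derivation, to the plain mean of the slopes $y_i/x_i$, so `beta_wls` and `beta_avg` agree to rounding error.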
30,226 | Difference among bias, systematic bias, and systematic error? | The term "bias" appears in two ways in the fundamental literature on statistics:
"...the bias $\mathbb{E}_\theta[\delta(X)] - g(\theta)$, sometimes called the systematic error, ..." [E. L. Lehmann, Theory of Point Estimation, 1983. This is a classic text.] In Lehmann's notation, which is standard, $\mathbb{E}_\theta$ is the expectation when the distribution is given by the parameter $\theta$, $\delta$ is an estimator, $X$ is an observation, and $g(\theta)$ is a property of the distribution to be estimated (the estimand). In other words, the observation (or sequence thereof) is a random variable, which makes the estimate random, and the bias is the expected deviation between the estimate and the estimand. It depends on the (unknown but true) distribution $\theta$, making it a function of the true distribution. Lehmann devotes an entire chapter to unbiased estimators: those with zero bias regardless of the value of $\theta$.
In measurement theory, "bias" (or "systematic error") is a difference between the expectation of a measurement and the true underlying value. Bias can result from calibration errors or instrumental drift, for example. Contrast this usage with the previous: here, a bias is a property of a measurement, which is a physical process, whereas before it was a property of a statistical estimator (which is a mathematically defined procedure to make guesses from data).
"Systematic bias" appears to be used only when distinguishing bias from random "error": the term "error" tends to be used primarily for random terms with zero expectation.
In many cases, bias in the first sense decreases as the amount of data increases: many biased estimators in practice become less and less biased with more data (although this is not theoretically guaranteed, because the concept of bias is so broad). A good example is the maximum likelihood estimator of the variance of a distribution when $n$ independent draws $x_i$ from that distribution are available. The ML estimator is
$$\hat{v} = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2,$$
for $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$. It is well known that this is biased; the estimator $\frac{n}{n-1}\hat{v}$ is unbiased. Whence, as $n\to\infty$, $\frac{n}{n-1}\to 1$ and $\hat{v}$ becomes asymptotically unbiased.
Bias in the measurement context (the second sense), however, is usually not reducible by taking more measurements: the bias is inherent in the measurement procedure itself. One has to estimate and reduce the bias by calibrating the measurement procedure or comparing it to other procedures known to have no (or less) bias, estimating the bias, and compensating for that.
This brief description of the terminology as it is used for statistical inference does not supplant the extended and more specialized replies already posted. Instead, it is intended to serve as an introduction to them and as a mild warning to be wary of universal generalizations made in limited contexts, such as "all three [terms] are equivalent to 'systematic error'," which clearly can be correct only in a narrow sense, because the two definitions I have quoted are not equivalent. Reading the other replies has alerted me to the possibility that the literature in specialized fields like epidemiology may be using familiar, standard statistical terms like "bias" in unexpected ways, some of which may actually contradict statistical definitions. In the end, in any particular situation we need to look for a clear definition that is appropriate for the context. | Difference among bias, systematic bias, and systematic error? | The term "bias" appears in two ways in the fundamental literature on statistics:
"...the bias $\mathbb{E}_\theta[\delta(X)] - g(\theta)$, sometimes called the systematic error, ..." [E. L. Lehmann, T | Difference among bias, systematic bias, and systematic error?
The term "bias" appears in two ways in the fundamental literature on statistics:
"...the bias $\mathbb{E}_\theta[\delta(X)] - g(\theta)$, sometimes called the systematic error, ..." [E. L. Lehmann, Theory of Point Estimation, 1983. This is a classic text.] In Lehmann's notation, which is standard, $\mathbb{E}_\theta$ is the expectation when the distribution is given by the parameter $\theta$, $\delta$ is an estimator, $X$ is an observation, and $g(\theta)$ is a property of the distribution to be estimated (the estimand). In other words, the observation (or sequence thereof) is a random variable, which makes the estimate random, and the bias is the expected deviation between the estimate and the estimand. It depends on the (unknown but true) distribution $\theta$, making it a function of the true distribution. Lehmann devotes an entire chapter to unbiased estimators: those with zero bias regardless of the value of $\theta$.
In measurement theory, "bias" (or "systematic error") is a difference between the expectation of a measurement and the true underlying value. Bias can result from calibration errors or instrumental drift, for example. Contrast this usage with the previous: here, a bias is a property of a measurement, which is a physical process, whereas before it was a property of a statistical estimator (which is a mathematically defined procedure to make guesses from data).
"Systematic bias" appears to be used only when distinguishing bias from random "error": the term "error" tends to be used primarily for random terms with zero expectation.
In many cases, bias in the first sense decreases as the amount of data increases: many biased estimators in practice become less and less biased with more data (although this is not theoretically guaranteed, because the concept of bias is so broad). A good example is the maximum likelihood estimator of the variance of a distribution when $n$ independent draws $x_i$ from that distribution are available. The ML estimator is
$$\hat{v} = \frac{1}{n}\sum_{i=1}^n (x_i - \bar{x})^2,$$
for $\bar{x} = \frac{1}{n}\sum_{i=1}^n x_i$. It is well known that this is biased; the estimator $\frac{n}{n-1}\hat{v}$ is unbiased. Whence, as $n\to\infty$, $\frac{n}{n-1}\to 1$ and $\hat{v}$ becomes asymptotically unbiased.
Bias in the measurement context (the second sense), however, is usually not reducible by taking more measurements: the bias is inherent in the measurement procedure itself. One has to estimate and reduce the bias by calibrating the measurement procedure or comparing it to other procedures known to have no (or less) bias, estimating the bias, and compensating for that.
This brief description of the terminology as it is used for statistical inference does not supplant the extended and more specialized replies already posted. Instead, it is intended to serve as an introduction to them and as a mild warning to be wary of universal generalizations made in limited contexts, such as "all three [terms] are equivalent to 'systematic error'," which clearly can be correct only in a narrow sense, because the two definitions I have quoted are not equivalent. Reading the other replies has alerted me to the possibility that the literature in specialized fields like epidemiology may be using familiar, standard statistical terms like "bias" in unexpected ways, some of which may actually contradict statistical definitions. In the end, in any particular situation we need to look for a clear definition that is appropriate for the context. | Difference among bias, systematic bias, and systematic error?
The term "bias" appears in two ways in the fundamental literature on statistics:
"...the bias $\mathbb{E}_\theta[\delta(X)] - g(\theta)$, sometimes called the systematic error, ..." [E. L. Lehmann, T |
30,227 | Difference among bias, systematic bias, and systematic error? | If I’ve learnt anything through my epidemiology studies, it is that this is a minefield where there is no true right or wrong. I like statistics because it at least has a foundation in math, while epidemiology is more opinion. That said, I’ll try to answer your question.
From M. Porta’s A Dictionary of Epidemiology, 5th ed., there is no mention of systematic bias, and the entry for systematic error says “See BIAS”.
This leaves bias that is described as: “Systematic deviation of results or inferences from truth. …leading to results or conclusions that are systematically (opposed to randomly) different from the truth.” I would say that there is no unsystematic bias since they all deviate your results away from the true risk estimate. The most important thing about bias is that you can’t reduce it by increasing sample size.
There are many types of bias, I’ve heard that one of the original articles on bias contained over 300 different types. The important thing is to identify them before you start your study and then try to set up your study/experiment to avoid bias. In epidemiological studies it is very useful to separate bias into three categories:
Selection bias
Information bias
Confounding
Selection bias is when you select the wrong type of individuals for your study. Let’s say you’re interested in seeing if working in a coal mine is a risk – if you look for your study individuals at the coal mine you might find that they’re healthier than the general population, simply because the ones that are sick are no longer working at the coal mine, i.e. you select the healthiest individuals and you’re no longer studying the source population but a subsample. Selection bias is usually the most malignant type of bias because it’s so hard to identify.
Information bias is when your data collection concerning outcome or exposure is faulty. A common error is the surgeon that asks his patient if he’s better after the surgery. Here the patient might not want to disappoint the surgeon and so reports a better outcome than he/she otherwise would, and the surgeon might not want to admit that the surgery was a failure (reporting and interviewer bias).
Information bias is also known as observational bias. When it is an error in a continuous variable it’s a measurement error, while in the setting of classification you have misclassification bias. Misclassification means that a study individual can end up in the wrong category; a smoker can be misclassified as a non-smoker either by chance or by reporting bias. Even if the misclassification is by chance (non-differential misclassification) it will still tend to underestimate the risk in a systematic way, especially when you have few categories, although an excellent study by Jurek et al. 2005 showed that you should be careful making this assumption based on a single study. In regard to your question, I imagine that this is the "non-systematic bias" to which the systematic bias relates.
Confounders are factors that are associated with both the exposure and the outcome and relate more closely to the study individual. For instance Lambe et al. 2006 showed that smoking during pregnancy increases the risk for low school performance, but when looking at siblings in a subpopulation where the mother had stopped smoking during her second pregnancy, their school performance was just as bad. This suggests that smoking is not the cause of bad school performance but perhaps a confounder for other social factors.
This article by Sica et al. 2006 goes into more detail. What you have to be prepared for is that there really is a lack of consensus in the field for the terminology. My dream is that WHO one day produces a list of definitions that is easy to understand, makes intuitive sense and where the debate finally may end. | Difference among bias, systematic bias, and systematic error? | If I’ve learnt anything through my epidemiology studies, it is that this is a minefield where there is no true right or wrong. I like statistics because it at least has a foundation in math, while epidemio | Difference among bias, systematic bias, and systematic error?
If I’ve learnt anything through my epidemiology studies, it is that this is a minefield where there is no true right or wrong. I like statistics because it at least has a foundation in math, while epidemiology is more opinion. That said, I’ll try to answer your question.
From M. Porta’s A Dictionary of Epidemiology, 5th ed., there is no mention of systematic bias, and the entry for systematic error says “See BIAS”.
This leaves bias that is described as: “Systematic deviation of results or inferences from truth. …leading to results or conclusions that are systematically (opposed to randomly) different from the truth.” I would say that there is no unsystematic bias since they all deviate your results away from the true risk estimate. The most important thing about bias is that you can’t reduce it by increasing sample size.
There are many types of bias, I’ve heard that one of the original articles on bias contained over 300 different types. The important thing is to identify them before you start your study and then try to set up your study/experiment to avoid bias. In epidemiological studies it is very useful to separate bias into three categories:
Selection bias
Information bias
Confounding
Selection bias is when you select the wrong type of individuals for your study. Let’s say you’re interested in seeing if working in a coal mine is a risk – if you look for your study individuals at the coal mine you might find that they’re healthier than the general population, simply because the ones that are sick are no longer working at the coal mine, i.e. you select the healthiest individuals and you’re no longer studying the source population but a subsample. Selection bias is usually the most malignant type of bias because it’s so hard to identify.
Information bias is when your data collection concerning outcome or exposure is faulty. A common error is the surgeon that asks his patient if he’s better after the surgery. Here the patient might not want to disappoint the surgeon and so reports a better outcome than he/she otherwise would, and the surgeon might not want to admit that the surgery was a failure (reporting and interviewer bias).
Information bias is also known as observational bias. When it is an error in a continuous variable it’s a measurement error, while in the setting of classification you have misclassification bias. Misclassification means that a study individual can end up in the wrong category; a smoker can be misclassified as a non-smoker either by chance or by reporting bias. Even if the misclassification is by chance (non-differential misclassification) it will still tend to underestimate the risk in a systematic way, especially when you have few categories, although an excellent study by Jurek et al. 2005 showed that you should be careful making this assumption based on a single study. In regard to your question, I imagine that this is the "non-systematic bias" to which the systematic bias relates.
Confounders are factors that are associated with both the exposure and the outcome and relate more closely to the study individual. For instance Lambe et al. 2006 showed that smoking during pregnancy increases the risk for low school performance, but when looking at siblings in a subpopulation where the mother had stopped smoking during her second pregnancy, their school performance was just as bad. This suggests that smoking is not the cause of bad school performance but perhaps a confounder for other social factors.
This article by Sica et al. 2006 goes into more detail. What you have to be prepared for is that there really is a lack of consensus in the field for the terminology. My dream is that WHO one day produces a list of definitions that is easy to understand, makes intuitive sense and where the debate finally may end. | Difference among bias, systematic bias, and systematic error?
If I’ve learnt anything through my epidemiology studies, it is that this is a minefield where there is no true right or wrong. I like statistics because it at least has a foundation in math, while epidemio |
30,228 | Difference among bias, systematic bias, and systematic error? | Terminologies may vary from field to field. However, using terms defined in the comments below:
Is there any difference among the following terms, or are they the same?
No, all three are equivalent to 'systematic error'.
Can these errors be reduced when one increases the sample size?
No, increasing sample size reduces random error, not systematic error.
Comment
These terms are taken from the field of epidemiology, specifically from Rothman and colleagues' discussion of error in chapters 9 and 10 of Modern Epidemiology.
To summarize:
The goal of an investigator is to provide an accurate estimate of some measure (e.g. mean, relative risk, hazard ratio, et cetera) within a population. An accurate estimate is one that is both valid and precise. A valid estimate will have a point estimate (e.g. mean, relative risk, hazard ratio, et cetera) that is close to the true value in the population. A precise estimate will have narrow confidence intervals around the point estimate. In addition, an estimate can be internally valid, relative to the study population, and externally valid, relative to a generalized population.
Departures from accuracy are caused by error. There are two main types of error: systematic error and random error.
Systematic error, often referred to as bias, results in estimates that are not valid. Systematic error includes error due to confounding, selection bias, and information bias. Confounding can generally be corrected for with techniques such as stratification or regression. Selection and information bias have traditionally been either ignored or only qualitatively assessed in analyses, probably due to unfamiliarity with appropriate bias analyses. However, methodologies for quantitative bias analysis do exist (e.g. Lash TL and AK Fink (2003)).
Random error results in estimates that are not precise. Random error includes sampling error and random measurement error, among others. Methods to increase precision include increasing study size, increasing study efficiency, and precision-optimizing statistical analyses such as pooling and regression.
Update
To illustrate why increasing sample size does not decrease systematic error with the dartboard analogy (copied from this CV post):
No matter how many darts are thrown at the board, the point estimate is not going to shift towards the true bulls-eye when there is 'high bias'. Here 'bias' is equivalent to 'systematic error', and 'variance' is equivalent to 'random error'. | Difference among bias, systematic bias, and systematic error? | Terminologies may vary from field to field. However, using terms defined in the comments below:
Is there any difference among the following terms, or are they the same?
No, all three are equivalent | Difference among bias, systematic bias, and systematic error?
Terminologies may vary from field to field. However, using terms defined in the comments below:
Is there any difference among the following terms, or are they the same?
No, all three are equivalent to 'systematic error'.
Can these errors be reduced when one increases the sample size?
No, increasing sample size reduces random error, not systematic error.
Comment
These terms are taken from the field of epidemiology, specifically from Rothman and colleagues' discussion of error in chapters 9 and 10 of Modern Epidemiology.
To summarize:
The goal of an investigator is to provide an accurate estimate of some measure (e.g. mean, relative risk, hazard ratio, et cetera) within a population. An accurate estimate is one that is both valid and precise. A valid estimate will have a point estimate (e.g. mean, relative risk, hazard ratio, et cetera) that is close to the true value in the population. A precise estimate will have narrow confidence intervals around the point estimate. In addition, an estimate can be internally valid, relative to the study population, and externally valid, relative to a generalized population.
Departures from accuracy are caused by error. There are two main types of error: systematic error and random error.
Systematic error, often referred to as bias, results in estimates that are not valid. Systematic error includes error due to confounding, selection bias, and information bias. Confounding can generally be corrected for with techniques such as stratification or regression. Selection and information bias have traditionally been either ignored or only qualitatively assessed in analyses, probably due to unfamiliarity with appropriate bias analyses. However, methodologies for quantitative bias analysis do exist (e.g. Lash TL and AK Fink (2003)).
Random error results in estimates that are not precise. Random error includes sampling error and random measurement error, among others. Methods to increase precision include increasing study size, increasing study efficiency, and precision-optimizing statistical analyses such as pooling and regression.
Update
To illustrate why increasing sample size does not decrease systematic error with the dartboard analogy (copied from this CV post):
No matter how many darts are thrown at the board, the point estimate is not going to shift towards the true bulls-eye when there is 'high bias'. Here 'bias' is equivalent to 'systematic error', and 'variance' is equivalent to 'random error'. | Difference among bias, systematic bias, and systematic error?
Terminologies may vary from field to field. However, using terms defined in the comments below:
Is there any difference among the following terms, or are they the same?
No, all three are equivalent |
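The dartboard point above, that more data shrinks random error but leaves systematic error untouched, can be illustrated with a small simulation. The instrument bias and noise level below are hypothetical:

```python
import random
import statistics

# A measurement procedure with a fixed systematic error ('bias') and a
# random error per measurement; the estimate is the mean of n measurements.
random.seed(1)
true_value = 10.0
bias = 0.7
noise_sd = 2.0

def estimate(n):
    return statistics.fmean(random.gauss(true_value + bias, noise_sd) for _ in range(n))

small = [estimate(10) for _ in range(1000)]
large = [estimate(1000) for _ in range(1000)]

spread_small = statistics.stdev(small)   # random error: shrinks with n...
spread_large = statistics.stdev(large)
centre_small = statistics.fmean(small)   # ...but both centre near 10.7,
centre_large = statistics.fmean(large)   # not near the true value 10.0
```

Increasing n by a factor of 100 cuts the spread of the estimate by about 10, while the systematic offset of 0.7 is unchanged.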
30,229 | Difference among bias, systematic bias, and systematic error? | These power point excerpts have some info to supplement what jthetzel and Max Gordon have given. They're oriented toward survey data, and they're not rigorous or formal, but then if you wanted that type of answer you'd probably be looking in textbooks on measurement theory or survey methods. | Difference among bias, systematic bias, and systematic error? | These power point excerpts have some info to supplement what jthetzel and Max Gordon have given. They're oriented toward survey data, and they're not rigorous or formal, but then if you wanted that t | Difference among bias, systematic bias, and systematic error?
These power point excerpts have some info to supplement what jthetzel and Max Gordon have given. They're oriented toward survey data, and they're not rigorous or formal, but then if you wanted that type of answer you'd probably be looking in textbooks on measurement theory or survey methods. | Difference among bias, systematic bias, and systematic error?
These power point excerpts have some info to supplement what jthetzel and Max Gordon have given. They're oriented toward survey data, and they're not rigorous or formal, but then if you wanted that t |
30,230 | Does this distribution have a name? Or what is a stochastic process that could generate it? | It's a discrete power law.
(This is a description--whose meaning will be made precise below--rather than a technical term. The phrase "discrete power law" has a slightly different technical meaning, as indicated by @Cardinal in comments to this answer.)
To see this, observe that the partial fraction decomposition can be written
$$p(x;k) = \frac{k}{(x+k)(x+k-1)} = \frac{1}{1 + (x-1)/k} - \frac{1}{1 + x/k}.$$
The CDF telescopes into a closed form:
$$\eqalign{
&\text{CDF}(i) = \sum_{x=1}^i p(x;k) \\
= &[\frac{1}{1 + 0/k} - \frac{1}{1 + 1/k}] + [\frac{1}{1 + 1/k} - \frac{1}{1 + 2/k}] + \cdots + [\frac{1}{1 + (i-1)/k} - \frac{1}{1 + i/k}] \\
= &\frac{1}{1 + 0/k} + [- \frac{1}{1 + 1/k} + \frac{1}{1 + 1/k}] + [ - \frac{1}{1 + 2/k} + \cdots + \frac{1}{1 + (i-1)/k}] - \frac{1}{1 + i/k} \\
= &1 + 0 + \cdots + 0 - \frac{1}{1 + i/k} \\
= &\frac{i}{i+k}.
}$$
(Incidentally, because this is easily inverted, it immediately provides an efficient way to generate random variables from this distribution: simply compute $\lceil \frac{k u}{1 - u} \rceil$ where $u$ is uniformly distributed on $(0,1)$.)
Differentiating this expression with respect to $i$ shows how the CDF can be written as an integral,
$$\text{CDF}(i) = \frac{i}{i+k} = \int_0^i \frac{dt/k}{(1 + t/k)^2} = \sum_{x=1}^i \int_{x-1}^x \frac{dt/k}{(1 + t/k)^2},$$
whence
$$p(x;k) = \int_{x-1}^x \frac{dt/k}{(1 + t/k)^2}.$$
This form of writing it exhibits $k$ as a scale parameter for the family of (continuous) distributions determined by the density
$$f(\xi)d\xi = (1 + \xi)^{-2}\, d\xi$$
and shows how $p(x;k)$ is the discretized version of $f$ (scaled by $k$) obtained by integrating the continuous probability over the interval from $x-1$ to $x$. That's obviously a power law with exponent $-2$. This observation gives you an entrance into extensive literature on power laws and how they arise in science, engineering, and statistics, which may suggest many answers to your last two questions. | Does this distribution have a name? Or what is a stochastic process that could generate it? | It's a discrete power law.
(This is a description--whose meaning will be made precise below--rather than a technical term. The phrase "discrete power law" has a slightly different technical meaning, | Does this distribution have a name? Or what is a stochastic process that could generate it?
It's a discrete power law.
(This is a description--whose meaning will be made precise below--rather than a technical term. The phrase "discrete power law" has a slightly different technical meaning, as indicated by @Cardinal in comments to this answer.)
To see this, observe that the partial fraction decomposition can be written
$$p(x;k) = \frac{k}{(x+k)(x+k-1)} = \frac{1}{1 + (x-1)/k} - \frac{1}{1 + x/k}.$$
The CDF telescopes into a closed form:
$$\eqalign{
&\text{CDF}(i) = \sum_{x=1}^i p(x;k) \\
= &[\frac{1}{1 + 0/k} - \frac{1}{1 + 1/k}] + [\frac{1}{1 + 1/k} - \frac{1}{1 + 2/k}] + \cdots + [\frac{1}{1 + (i-1)/k} - \frac{1}{1 + i/k}] \\
= &\frac{1}{1 + 0/k} + [- \frac{1}{1 + 1/k} + \frac{1}{1 + 1/k}] + [ - \frac{1}{1 + 2/k} + \cdots + \frac{1}{1 + (i-1)/k}] - \frac{1}{1 + i/k} \\
= &1 + 0 + \cdots + 0 - \frac{1}{1 + i/k} \\
= &\frac{i}{i+k}.
}$$
(Incidentally, because this is easily inverted, it immediately provides an efficient way to generate random variables from this distribution: simply compute $\lceil \frac{k u}{1 - u} \rceil$ where $u$ is uniformly distributed on $(0,1)$.)
Differentiating this expression with respect to $i$ shows how the CDF can be written as an integral,
$$\text{CDF}(i) = \frac{i}{i+k} = \int_0^i \frac{dt/k}{(1 + t/k)^2} = \sum_{x=1}^i \int_{x-1}^x \frac{dt/k}{(1 + t/k)^2},$$
whence
$$p(x;k) = \int_{x-1}^x \frac{dt/k}{(1 + t/k)^2}.$$
This form of writing it exhibits $k$ as a scale parameter for the family of (continuous) distributions determined by the density
$$f(\xi)d\xi = (1 + \xi)^{-2}\, d\xi$$
and shows how $p(x;k)$ is the discretized version of $f$ (scaled by $k$) obtained by integrating the continuous probability over the interval from $x-1$ to $x$. That's obviously a power law with exponent $-2$. This observation gives you an entrance into extensive literature on power laws and how they arise in science, engineering, and statistics, which may suggest many answers to your last two questions. | Does this distribution have a name? Or what is a stochastic process that could generate it?
It's a discrete power law.
(This is a description--whose meaning will be made precise below--rather than a technical term. The phrase "discrete power law" has a slightly different technical meaning, |
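The inverse-CDF sampler mentioned in the answer above can be sketched in a few lines of Python; since $\mathrm{CDF}(i) = i/(i+k)$, the empirical CDF of the draws should match $i/(i+k)$:

```python
import math
import random

# X = ceil(k*u/(1-u)) with u ~ Uniform(0,1) has CDF i/(i+k):
# X <= i  iff  k*u/(1-u) <= i  iff  u <= i/(i+k).
random.seed(7)
k = 3
N = 200_000
samples = [math.ceil(k * u / (1 - u)) for u in (random.random() for _ in range(N))]

def cdf(i):
    return i / (i + k)

def empirical_cdf(i):
    return sum(1 for s in samples if s <= i) / N
```

(Because the distribution has infinite mean, the sample mean of such draws never settles down, but the empirical CDF converges as usual.)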
30,231 | Does this distribution have a name? Or what is a stochastic process that could generate it? | Okay, after a bit more investigation, I found some more details.
It's a special case of a continuous mixture of a geometric distribution with a Beta, so could be called a Beta-geometric distribution. Specifically, if:
$$P \sim \mathrm{Beta}(1,k) $$
and:
$$X|P \sim \mathrm{Geometric}(P)$$
then the marginal distribution of $Y = X+1$ has this distribution. As such, it's a special case of a Beta-Negative binomial distribution.
It has a couple of other interesting properties:
It has an infinite mean
It describes its own tail distribution: if $X$ has this distribution with parameter $k$, then $X-t | X>t$ has parameter $t+k$. | Does this distribution have a name? Or what is a stochastic process that could generate it? | Okay, after a bit more investigation, I found some more details.
It's a special case of a continuous mixture of a geometric distribution with a Beta, so could be called a Beta-geometric distribution. | Does this distribution have a name? Or what is a stochastic process that could generate it?
Okay, after a bit more investigation, I found some more details.
It's a special case of a continuous mixture of a geometric distribution with a Beta, so could be called a Beta-geometric distribution. Specifically, if:
$$P \sim \mathrm{Beta}(1,k) $$
and:
$$X|P \sim \mathrm{Geometric}(P)$$
then the marginal distribution of $Y = X+1$ has this distribution. As such, it's a special case of a Beta-Negative binomial distribution.
It has a couple of other interesting properties:
It has an infinite mean
It describes its own tail distribution: if $X$ has this distribution with parameter $k$, then $X-t | X>t$ has parameter $t+k$. | Does this distribution have a name? Or what is a stochastic process that could generate it?
Okay, after a bit more investigation, I found some more details.
It's a special case of a continuous mixture of a geometric distribution with a Beta, so could be called a Beta-geometric distribution. |
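The mixture representation above suggests a direct simulation check: draw $P$ from a Beta$(1,k)$, then $X$ geometric given $P$ (counting failures before the first success), and compare the empirical distribution of $Y = X+1$ with $p(y;k) = k/((y+k)(y+k-1))$. A Python sketch:

```python
import math
import random

random.seed(3)
k = 2.0
N = 100_000

def draw_y():
    p = random.betavariate(1.0, k)             # P ~ Beta(1, k)
    p = min(max(p, 1e-12), 1.0 - 1e-12)        # keep log() well defined
    u = 1.0 - random.random()                  # u in (0, 1]
    x = math.floor(math.log(u) / math.log(1.0 - p))  # geometric via inversion
    return x + 1                               # Y = X + 1

samples = [draw_y() for _ in range(N)]

def pmf(y):
    return k / ((y + k) * (y + k - 1))

def empirical_pmf(y):
    return sum(1 for s in samples if s == y) / N
```

For k = 2 the target probabilities are p(1) = 1/3, p(2) = 1/6, p(3) = 1/10, and the empirical frequencies should land close to those values.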
30,232 | How to use variables derived from factor analysis as predictors in logistic regression? | If I understand you correctly, you are using FA to extract two subscales from your 11-item questionnaire. They are supposed to reflect some specific dimensions of self-efficacy (for example, self-regulatory vs. self-assertive efficacy).
Then, you are free to use individual mean (or sum) scores computed on the two subscales as predictors in a regression model. In other words, instead of considering 11 item scores, you are now working with 2 subscores, computed as described above for each individual. The only assumption that is made is that those scores reflect one's location on an "hypothetical construct" or latent variable, defined as a continuous scale.
As @JMS said, there are other issues that you might further clarify, especially which kind of FA was done. A subtle issue is that measurement error will not be accounted for by a standard regression approach. An alternative is to use Structural Equation Models or any latent variables model (e.g. those coming from the IRT literature), but here the regression approach should provide a good approximation. The analysis of ordinal variables (Likert-type item) has been discussed elsewhere on this site.
However, in current practice, your approach is what is commonly found when validating a questionnaire or constructing scoring rules: We use weighted or unweighted combination of item scores (hence, they are treated as numeric variables) to report individual location on the latent trait(s) under consideration. | How to use variables derived from factor analysis as predictors in logistic regression? | If I understand you correctly, you are using FA to extract two subscales from your 11-item questionnaire. They are supposed to reflect some specific dimensions of self-efficacy (for example, self-regu | How to use variables derived from factor analysis as predictors in logistic regression?
If I understand you correctly, you are using FA to extract two subscales from your 11-item questionnaire. They are supposed to reflect some specific dimensions of self-efficacy (for example, self-regulatory vs. self-assertive efficacy).
Then, you are free to use individual mean (or sum) scores computed on the two subscales as predictors in a regression model. In other words, instead of considering 11 item scores, you are now working with 2 subscores, computed as described above for each individual. The only assumption that is made is that those scores reflect one's location on an "hypothetical construct" or latent variable, defined as a continuous scale.
As @JMS said, there are other issues that you might further clarify, especially which kind of FA was done. A subtle issue is that measurement error will not be accounted for by a standard regression approach. An alternative is to use Structural Equation Models or any latent variables model (e.g. those coming from the IRT literature), but here the regression approach should provide a good approximation. The analysis of ordinal variables (Likert-type item) has been discussed elsewhere on this site.
However, in current practice, your approach is what is commonly found when validating a questionnaire or constructing scoring rules: We use weighted or unweighted combination of item scores (hence, they are treated as numeric variables) to report individual location on the latent trait(s) under consideration. | How to use variables derived from factor analysis as predictors in logistic regression?
If I understand you correctly, you are using FA to extract two subscales from your 11-item questionnaire. They are supposed to reflect some specific dimensions of self-efficacy (for example, self-regu |
30,233 | How to use variables derived from factor analysis as predictors in logistic regression? | Using factor scores as predictors
Yes, you can use variables derived from a factor analysis as predictors in subsequent analyses.
Other options include running some form of structural equation model where you posit a latent variable with the items or bundles of items as observed variables.
Mean as scale score
Yes, in your case, the mean would be a typical option for computing a scale score.
If you have any reversed items, you have to deal with this.
You could also use factor saved scores instead of taking the mean, although when all items load reasonably well on each factor, all items are on the same scale, and all items are positively worded, there is rarely much difference between the mean and factor saved scores.
You could also look at methods that acknowledge the ordinal nature of the scale and therefore do not treat the scale options as equally distant.
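To make the factor-scores route concrete, here is a rough sketch in Python with scikit-learn. Everything in it — the simulated data, the loadings, the variable names — is invented for illustration; the answer itself does not prescribe any particular software, and scikit-learn offers varimax rather than promax rotation.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Hypothetical data: two latent traits driving 11 item responses
latent = rng.normal(size=(n, 2))
loadings = rng.uniform(0.5, 1.0, size=(2, 11))
items = latent @ loadings + rng.normal(scale=0.5, size=(n, 11))

# Extract two factors and save the factor scores for each respondent
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0)
scores = fa.fit_transform(items)

# Use the two saved scores (not the 11 raw items) as predictors
y = (latent[:, 0] + rng.normal(scale=0.5, size=n) > 0).astype(int)
clf = LogisticRegression().fit(scores, y)
accuracy = clf.score(scores, y)
```

Swapping `scores` for row means over each item subset would give the mean-score variant discussed above; with clean, positively worded items the two usually behave very similarly.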
30,234 | How to use variables derived from factor analysis as predictors in logistic regression? | Everything has been said by chl and Jeromy for the theoretical part... If you don't want to use the sum/mean of the variables you identified with FA, you can use the FA scores.
Regarding the syntax you use, you're probably using SAS. To make correct use of the factor analysis, you must use the observations' factor scores and not the mean of the variables.
You will find below the code to obtain scores for 2 factors with an FA. The scores you'll have to use will be called Factor1, Factor2, ... by SAS.
This is done in 2 steps: 1) first the FA, then 2) a call to PROC SCORE to compute the scores.
proc factor
data = Data
method = ml
rotate = promax
outstat = FAstats
n=2
heywood residuals msa score
;
var x:;
run;
proc score data=Data score=FAstats out=MyScores;
var x:;
run;
The variables to use are Factor1, Factor2, ... in the MyScores dataset.
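For readers without SAS, the same two-step fit-then-score idea can be sketched with scikit-learn's `FactorAnalysis` (the data and names below are invented; the fitted model object plays the role of the `outstat=` statistics that PROC SCORE consumes):

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(1)
train = rng.normal(size=(200, 8))   # stand-in for the Data table
new = rng.normal(size=(50, 8))      # observations to be scored later

# Step 1: fit the factor model (the role of PROC FACTOR with outstat=)
fa = FactorAnalysis(n_components=2).fit(train)

# Step 2: score observations from the stored model (the role of PROC SCORE)
train_scores = fa.transform(train)  # analogue of the Factor1, Factor2 columns
new_scores = fa.transform(new)
```

The key point carries over: the scores are computed per observation from the fitted model, not by averaging the raw variables.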
30,235 | How to use variables derived from factor analysis as predictors in logistic regression? | Modelling continuous latent variables with discrete (polytomous, in your case) manifest variables is part of item response analysis. The package 'ltm' in R covers a variety of such models. I refer you to this paper, which deals with exactly the same problem.
30,236 | Explain data visualization | When I teach very basic statistics to Secondary School Students, I talk about evolution: we have evolved to spot patterns in pictures rather than in lists of numbers, and data visualisation is one of the techniques we use to take advantage of this fact.
Plus I try to talk about recent news stories where statistical insight contradicts what the press is implying, making use of sites like Gapminder to find the representation before choosing the story.
30,237 | Explain data visualization | I would explain it to a layman as:
Data visualization is taking data and making a picture out of it. This allows you to see and understand relationships within the data much more easily than by just looking at the numbers.
30,238 | Explain data visualization | I would show them the raw data of Anscombe's Quartet (JSTOR link to the paper) in a big table, alongside another table showing the Mean & Variance of x and y, the correlation coefficient, and the equation of the linear regression line. Ask them to explain the differences between each of the 4 datasets. They will be confused.
Then show them 4 graphs. They will be enlightened.
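The claim is easy to verify numerically. A small sketch (mine, using NumPy, which the answer does not mention) computes the shared summary statistics for the first two of Anscombe's four datasets:

```python
import numpy as np

# Anscombe's datasets I and II share the same x values
x  = np.array([10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5], dtype=float)
y1 = np.array([8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68])
y2 = np.array([9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74])

stats = []
for y in (y1, y2):
    slope, intercept = np.polyfit(x, y, 1)  # least-squares line
    stats.append((y.mean(), np.corrcoef(x, y)[0, 1], slope, intercept))

# Both rows agree (mean y ~ 7.50, r ~ 0.82, fitted line y ~ 3.0 + 0.5x),
# yet a scatterplot shows one linear cloud and one perfect curve.
```

Only the pictures reveal the difference — which is the whole point of the demonstration.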
30,239 | Explain data visualization | From Wikipedia: Data visualization is the study of the visual representation of data, meaning "information which has been abstracted in some schematic form, including attributes or variables for the units of information"
Data viz is important for visualizing trends in data, telling a story - See Minard's map of Napoleon's march - possibly one of the best data graphics ever printed.
Also see any of Edward Tufte's books - especially Visual Display of Quantitative Information.
30,240 | Explain data visualization | For me, the Illuminating the Path report has always been a good point of reference.
For a more recent overview, you can also have a look at a good article by Heer and colleagues.
But what would explain it better than a visualization itself?
(Source)
30,241 | How would a bayesian estimate a mean from a large sample? | With a Bayesian method we could also consider $\bar{X} = \frac{1}{n} \sum_{k=1}^n X_k$ as the observed statistic; it has approximately a normal distribution if we assume that the values have finite variance, so that the central limit theorem applies and convergence is quick.
So we could use the likelihood function $\mathcal{L}(\mu \vert \bar{X}) \approx \frac{1}{\sqrt{2 \pi \sigma^2/n}} \exp \left(-\frac{(\bar X - \mu)^2}{2 \sigma^2/n} \right)$
Then we still need priors for $\sigma$ and $\mu$ but that is like any other Bayesian problem. The issue with the likelihood has been solved by assuming a normal distribution just like with the frequentist method.
Related question: Would you say this is a trade off between frequentist and Bayesian stats?
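A minimal numerical sketch of this update (my own illustration, not from the answer): it plugs the sample standard deviation in for $\sigma$ instead of giving it a prior, and puts a flat prior on $\mu$ over a grid.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.exponential(scale=2.0, size=10_000)   # decidedly non-normal data, true mean 2
xbar, s, n = x.mean(), x.std(ddof=1), len(x)

# CLT-based normal likelihood of the sample mean, on a grid of mu values
mu = np.linspace(xbar - 5 * s / np.sqrt(n), xbar + 5 * s / np.sqrt(n), 2001)
loglik = -0.5 * (xbar - mu) ** 2 / (s ** 2 / n)
post = np.exp(loglik - loglik.max())          # flat prior: posterior ∝ likelihood
post /= post.sum()

post_mean = (mu * post).sum()                 # lands on x-bar, as expected
```

With a flat prior the posterior mean coincides with $\bar X$; an informative prior on $\mu$ (or a prior on $\sigma$) would simply enter the product before normalizing.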
30,242 | How would a bayesian estimate a mean from a large sample? | There are various flavours of Bayesian statistics. One of them is subjectivist (e.g., according to de Finetti). Subjectivist Bayesians hold that probability applies to an individual's state of belief and information but not to underlying data generating processes, which can never be infinitely repeated, which would be necessary to define a true frequentist probability. For this reason (and potentially some others that are harder to discuss), according to a subjectivist, there is no such thing as a true underlying distribution. So the job of the subjective Bayesian in this problem is not to guess the underlying distribution, but rather to specify a distribution that summarises her belief and knowledge about the expected distribution of the data given $\mu$. Not only $p(\mu)$ is a prior choice, also what you call $f_{x|\mu}$!
In fact, this is even the case in what many call "objectivist Bayes", as long as the probabilities are epistemic, i.e., do refer to a state of knowledge rather than really existing underlying data generating processes. The objectivist also will have to choose an $f_{x|\mu}$ that expresses all existing information about the expected distribution of the data given $\mu$ (except that subjective belief is not supposed to play a role here; although in reality it is often hard to bring existing information into a suitable formal form without any subjective choices).
These are the major streams of traditional Bayesian philosophy. In the present, much of Bayesian data analysis is based on an implicit assumption that there is a true underlying distribution, which we have called "falsificationist Bayes" here:
https://rss.onlinelibrary.wiley.com/doi/10.1111/rssa.12276
Even here (as in frequentism), the task would be to specify a model that makes sense from a subject matter perspective, and that can then be checked, for example by comparing data generated from it with your actual data, as in so-called posterior predictive checks (hence "falsificationist").
There is also the field of Bayesian nonparametrics, which is about very large models with potentially infinite-dimensional parameters covering large sets of the model space in case you don't want to commit to a specific simple one. This may be relevant regardless of whether your probability model is interpreted in an epistemic or frequentist (underlying data generating process) sense.
30,243 | How would a bayesian estimate a mean from a large sample? | You're essentially asking if you can do Bayesian statistics without a likelihood function. The answer is no. The likelihood function is an essential ingredient in Bayesian statistics. Without a likelihood, you have no way to update your prior.
If you can't specify a likelihood, or are unable to evaluate it, you can use approximate Bayesian computation to sample from the posterior. This still requires specifying a likelihood, but it's a working likelihood that you know isn't correct.
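A toy rejection-ABC sketch (mine, with made-up numbers: a uniform prior and a deliberately simple normal "working" simulator), keeping only the prior draws whose simulated sample mean lands near the observed one:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.normal(5.0, 1.0, size=200)
obs_mean = observed.mean()

accepted = []
for _ in range(20_000):
    mu = rng.uniform(0, 10)                # draw mu from the prior
    sim = rng.normal(mu, 1.0, size=200)    # simulate under the working model
    if abs(sim.mean() - obs_mean) < 0.05:  # keep mu if the summaries are close
        accepted.append(mu)

posterior = np.array(accepted)             # approximate posterior sample for mu
```

No likelihood is ever evaluated — only simulated from — which is exactly the trade the answer describes: the simulator stands in for a likelihood you cannot write down.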
30,244 | How would a bayesian estimate a mean from a large sample? | You seem to be asking about a nonparametric estimator for the mean.
First, let's make it clear: for Bayesian statistics, you always need to make distributional assumptions. You can proceed as suggested by Sextus Empiricus (+1), but this does assume a Gaussian distribution. If you really didn't want to make any assumptions, in practice you would probably just estimate the arithmetic mean.
But let's try coming up with a nonparametric solution. One thing that comes to my mind is the Bayesian bootstrap, also described by Rasmus Bååth, who provided a code example. With the Bayesian bootstrap, you would assume the Dirichlet-uniform distribution for the probabilities, resample the datapoints with replacement, and evaluate the statistic, so the arithmetic mean, on those samples. But in such a case, to approximate the frequentist estimator, you would use the same estimator on the samples to find their distribution. Not very helpful, is it?
Let's start again. The definition of expected value is
$$
E[X] = \int x \, f(x)\, dx
$$
The problem is that you don't know the probability densities $f(x)$. With the parametric model, you would solve it by finding a parametric distribution for $f$. In the frequentist setting, we would estimate the expected value using the arithmetic average with $\phi_i$ weights equal to the empirical probabilities
$$
\widehat{E[X]} = \sum_{i=1}^N x_i \, \phi_i
$$
Same as with the Bayesian bootstrap, you could assume a Dirichlet-uniform prior for the $\phi_i$ weights, sample the weights, calculate the weighted average, and repeat this many times to find the distribution of the estimates. If you think about it, it's the most trivial case of the Bayesian bootstrap for a weighted statistic. Yes, this makes a number of unreasonable assumptions, like approximating a continuous distribution with a discrete one. It's also not necessarily very useful, but if you insist on a Bayesian nonparametric estimator, that's a possibility. As you can see from the example below, it gives the same results as the frequentist estimator and the standard bootstrap.
set.seed(42)
N <- 500
X <- rnorm(N, 53, 37)
mean(X)
## [1] 51.88829
R <- 5000
mean.boot <- replicate(R, mean(sample(X, replace=TRUE)))
summary(mean.boot)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 46.41 50.81 51.90 51.91 53.03 57.88
# weighted.mean itself normalizes the weights
wmean.boot <- replicate(R, weighted.mean(X, w=rexp(N, 1)))
summary(wmean.boot)
## Min. 1st Qu. Median Mean 3rd Qu. Max.
## 47.09 50.82 51.93 51.91 52.99 57.65
par(mfrow=c(1, 2))
hist(mean.boot, 100, freq=FALSE, main="Standard bootstrap")
curve(dnorm(x, mean(X), sd(X)/sqrt(N)), 45, 60, add=TRUE, col="red", lwd=2)
hist(wmean.boot, 100, freq=FALSE, main="Sampled weights")
curve(dnorm(x, mean(X), sd(X)/sqrt(N)), 45, 60, add=TRUE, col="red", lwd=2)
In the end, it's the same as the frequentist estimator, so you could just use the frequentist estimator.
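For what it's worth, the same sampled-weights scheme can be written with Dirichlet draws directly — normalized i.i.d. Exp(1) weights, as in the `rexp` line above, are exactly Dirichlet(1, ..., 1). A Python sketch with the same invented numbers:

```python
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(53, 37, size=500)

# Bayesian bootstrap of the mean: fresh Dirichlet(1, ..., 1) weights each draw
draws = np.array([rng.dirichlet(np.ones(X.size)) @ X for _ in range(5000)])

# draws centers on the sample mean, with spread close to sd(X)/sqrt(N)
```

The posterior draws reproduce the frequentist sampling distribution of the mean, which is the point being made.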
30,245 | Can I delete missing data? | If you decide to "delete" the missing data prior to analysis, that is called a "complete-case analysis" (i.e., you are only using data points that have complete information). That is quite a simple and common method of analysis, but it has some risks. In particular, if the variables under analysis are statistically related to the "missingness" then ignoring the missing data will induce bias in your inferences.
Imputation methods are created in order to try to approximately model statistical dependence between missing values and the "missingness" in the data. In cases where entire classes of data points are missing, it may be the case that there is no information available to support imputation, in which case you may have to fall back on complete-case analysis, with appropriate caveats and caution in your conclusions. In any case, missing data methods require quite a bit of learning to implement correctly, but imputation methods can perform better than complete-case analysis in a wide variety of problems where there is sufficient information to estimate relationships between missing data values and the "missingness" indicators.
If you would like to learn more about missing data methods, you can find a simple educational introduction in Pigott (2001) and a more detailed exposition in Little and Rubin (2002).
30,246 | Can I delete missing data? | It is important to think about the mechanism leading to missing data. There are three kinds of missing data that can happen:
Missing completely at random (MCAR). It means that the probability that an entry is missing is fixed: independent of its (unobserved) value and independent of other variables. In that case, deleting incomplete data is OK and will not bias your result. However, doing multiple imputation may be more efficient, since you don't need to delete any valuable data. It will also depend on how much of your dataset is missing (maybe you will lose too much data doing complete case analysis, or maybe there are so few missing data that it's not worth the effort of imputation).
Missing at random (MAR). It means that the probability of an entry being missing depends on the other variables, but not on the unobserved value. In this case, ignoring missing data may bias your results, and multiple imputation is recommended.
Missing not at random (MNAR). In this case, the probability of missingness does depend on the unobserved value. An extreme example of this would be censoring. In this situation, neither imputation nor complete case analysis will remove bias, and there is no general solution here.
If you are sure that you are in a MCAR scenario (unlikely) or that the fraction of missing data is tiny, you can do complete case analysis. Otherwise, you should try imputation. If you are in a MNAR situation, you may have to rethink if your dataset can answer the questions you ask in an unbiased way.
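To make the MCAR/MAR distinction concrete, here is a small simulation (not from the original answer; the data-generating process and missingness probabilities are invented for illustration). Under MCAR, the complete-case mean of $y$ stays unbiased; under MAR, where missingness depends on the observed $x$, the complete cases over-represent part of the distribution and the mean is biased:

```python
import numpy as np

# Toy illustration: complete-case means under MCAR vs. MAR missingness.
# The data-generating process below is invented for this example.
rng = np.random.default_rng(42)
n = 200_000
x = rng.normal(size=n)
y = x + rng.normal(size=n)                 # true E[y] = 0

# MCAR: each y is missing with a fixed probability (here 0.5).
mcar_observed = rng.uniform(size=n) > 0.5
mean_mcar = y[mcar_observed].mean()        # stays close to 0

# MAR: y is missing more often when the fully observed x is small,
# so the complete cases over-represent large x (and hence large y).
mar_observed = rng.uniform(size=n) < 1.0 / (1.0 + np.exp(-2.0 * x))
mean_mar = y[mar_observed].mean()          # biased upward

print(mean_mcar, mean_mar)
```

The point of the sketch is only the contrast: deleting the incomplete rows is harmless in the first scenario but systematically misleading in the second.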
I think multiple imputation may still work for completely missing rows (at least Bayesian model-based imputation would work, I am not sure about other methods), but these rows are not informative at all, so I think it's safe to delete those anyway.
30,247 | Can I delete missing data? | The structure of your dataset may lend itself to making this more difficult, if I read your question right. Staying out of the higher statistics (covered in the other answers) and just in the basic survey/study design realm, you might have three kinds of respondents:
Non Responders - people who did not respond at any point in the survey
Partial responders - people who responded to some of the years, but not all
Completes - people who responded to each year in the survey
Each group has some data, hopefully, available about them, even the non responders. Sometimes, in surveys, you have data from the sample frame, data used to draw the sample, which often includes some demographic information such that you can draw a balanced sample. That information is available whether they respond or not. This is only true if the sample was drawn from a known population - not if it was drawn by, say, random digit dialing. In that case, you may have no information about non responders, but you also would probably not have them in the data file.
The partial responders, people who responded to the initial round of the survey but then later on left (or missed a year but came back later), will have much more information available of course.
Either way, you need a dataset that is respondent level that has all of the starting demographic data you have about these respondents, whether they are nonresponders, partial responders, or completes. It sounds like your data is not organized this way - so, reorganize it! This doesn't have to be attached to the rows of data from each year - it can be a separate dataset.
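As a sketch of that reorganisation (using pandas; the column names and values here are made up), pivoting a long "one row per respondent-year" file into a respondent-level table makes the partial and complete responders immediately visible:

```python
import pandas as pd

# Hypothetical long-format survey file: one row per respondent-year.
long = pd.DataFrame({
    "respondent": [1, 1, 2, 3, 3, 3],
    "year":       [2000, 2002, 2000, 2000, 2002, 2004],
    "score":      [3.1, 3.4, 2.7, 4.0, 4.2, 4.1],
})

# One row per respondent, one column per year; a missing year shows up
# as NaN, so partial responders are easy to spot (and the table can be
# merged with a separate demographic dataset keyed on "respondent").
wide = long.pivot(index="respondent", columns="year", values="score")
print(wide)
```

In this toy table, respondent 3 is a complete responder, while respondents 1 and 2 are partial responders with NaN in the years they missed.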
Then, you use this baseline demographic information for whatever imputation you're doing, or for weighting. The responses to the demographic questions from the first year would be used to impute the later years' variables, including their demographic variables. You also could design a more complicated model that rolled from year to year - birth predicts 1, 1 predicts 3, 3 predicts 5, and so on. That would likely be better, but I don't know your data nor your skill level with designing models like this; I'd often err on the side of simpler, as it's more likely I get it right!
I'm not an expert in imputation, so I won't speak to the specific choices - but hopefully this gets you an idea of where to start, and then you can use one of these other great answers to solve your imputation/deleting/etc. problem.
30,248 | Can I delete missing data? | It's not a good idea at all to delete missing data when using tree-based models (i.e., deleting every row that has at least one missing value). It is better to just ignore the missing data when splitting a node of a decision tree, because the model will simply ignore the missing values rather than discarding the whole instance (which may contain many non-missing values, since you have so many columns).
30,249 | In Machine learning, how does normalization help in convergence of gradient descent? | Rescaling is preconditioning
Steepest descent can take steps that oscillate wildly away from the optimum, even if the function is strongly convex or even quadratic.
Consider $f(x)=x_1^2 + 25x_2^2$. This is convex because it is a quadratic with positive coefficients. By inspection, we can see that it has a global minimum at $x=[0,0]^\top$. It has gradient
$$
\nabla f(x)=
\begin{bmatrix}
2x_1 \\
50x_2
\end{bmatrix}
$$
With a learning rate of $\alpha=0.035$, and initial guess $x^{(0)}=[0.5, 0.5]^\top,$ we have the gradient update
$$
x^{(1)} =x^{(0)}-\alpha \nabla f\left(x^{(0)}\right)
$$
which exhibits this wildly oscillating progress towards the minimum.
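With the numbers above ($\alpha=0.035$, $x^{(0)}=[0.5, 0.5]^\top$) the oscillation can be checked directly: each step multiplies $x_1$ by $1-2\alpha=0.93$ and $x_2$ by $1-50\alpha=-0.75$, so $x_2$ flips sign on every iteration while $x_1$ decays smoothly. A minimal NumPy sketch (the step count is arbitrary):

```python
import numpy as np

# Gradient descent on f(x) = x1^2 + 25*x2^2 with the learning rate and
# starting point used in the text.
def grad(x):
    return np.array([2.0 * x[0], 50.0 * x[1]])

alpha = 0.035
x = np.array([0.5, 0.5])
for step in range(6):
    x = x - alpha * grad(x)
    print(step + 1, x)   # the x2 coordinate changes sign every step
```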
Each step is wildly oscillating because the function is much steeper in the $x_2$ direction than the $x_1$ direction. Because of this fact, we can infer that the gradient is not always, or even usually, pointing toward the minimum. This is a general property of gradient descent when the eigenvalues of the Hessian $\nabla^2 f(x)$ are on dissimilar scales. Progress is slow in directions corresponding to the eigenvectors with the smallest corresponding eigenvalues, and fastest in the directions with the largest eigenvalues. It is this property, in combination with the choice of learning rate, that determines how quickly gradient descent progresses.
The direct path to the minimum would be to move "diagonally" instead of in this fashion which is strongly dominated by vertical oscillations. However, gradient descent only has information about local steepness, so it "doesn't know" that strategy would be more efficient, and it is subject to the vagaries of the Hessian having eigenvalues on different scales.
Rescaling the input data changes the Hessian matrix to be spherical. In turn, this means that steepest descent can move more directly towards the minimum instead of sharply oscillating.
Rescaling prevents early saturation
If you're using sigmoidal (logistic, tanh, softmax, etc.) activations, then these have flat gradients for inputs above a certain size. This implies that if the product of the network inputs and the initial weights is too large, the units will immediately be saturated and the gradients will be tiny. Scaling inputs to reasonable ranges and using small values for initial weights can ameliorate this and allow learning to proceed more quickly.
Effect of rescaling of inputs on loss for a simple neural network
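The saturation effect is easy to verify numerically: the derivative of the logistic sigmoid, $\sigma'(z)=\sigma(z)(1-\sigma(z))$, is largest at $z=0$ and essentially vanishes for large pre-activations. A short sketch, independent of any particular network:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_grad(z):
    s = sigmoid(z)
    return s * (1.0 - s)

# The gradient at 0 is 0.25; by |z| = 10 it is on the order of 1e-5,
# so a unit whose pre-activations start out that large barely learns.
for z in [0.0, 2.0, 10.0]:
    print(z, sigmoid_grad(z))
```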
A common method is to scale the data to have 0 mean and unit variance. But there are other methods, such as min-max scaling (very common for tasks like MNIST), or computing Winsorized means and standard deviations (which might be better if your data contains very large outliers). The particular choice of a scaling method is usually unimportant as long as it provides preconditioning and prevents early saturation of units.
Neural Networks input data normalization and centering
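For concreteness, the two most common of these rescalings look like this in NumPy (the data here is synthetic; a Winsorized variant would replace the mean and standard deviation with their robust counterparts):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(loc=50.0, scale=5.0, size=(100, 3))   # raw features

# Zero mean, unit variance per column (standardization).
X_std = (X - X.mean(axis=0)) / X.std(axis=0)

# Min-max scaling per column to the range [0, 1].
X_minmax = (X - X.min(axis=0)) / (X.max(axis=0) - X.min(axis=0))
```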
More Reading
In "Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift", Sergey Ioffe and Christian Szegedy write
It has been long known (LeCun et al., 1998b; Wiesler & Ney, 2011) that the network training converges faster if its inputs are whitened – i.e., linearly transformed to have zero means and unit variances, and decorrelated.
So you might also find that the network gets better results if you decorrelate the inputs in addition to applying zero mean and unit variances.
Following the citations provides more description and context.
LeCun, Y., Bottou, L., Orr, G., and Muller, K. "Efficient
backprop." In Orr, G. and K., Muller (eds.), Neural Networks: Tricks of the trade. Springer, 1998b.
Wiesler, Simon and Ney, Hermann. "A convergence analysis of log-linear training." In Shawe-Taylor, J., Zemel,
R.S., Bartlett, P., Pereira, F.C.N., and Weinberger, K.Q.
(eds.), Advances in Neural Information Processing Systems 24, pp. 657–665, Granada, Spain, December 2011
This answer borrows this example and figure from Neural Networks Design (2nd Ed.) Chapter 9 by Martin T. Hagan, Howard B. Demuth, Mark Hudson Beale, Orlando De Jesús.
30,250 | In Machine learning, how does normalization help in convergence of gradient descent? | Gradient descent pushes you towards the steepest direction. If there is a scale difference between dimensions, your level curves will typically look like ellipses. If they were circular around the local optimum, the gradient would point towards the center, which is the local optimum; however, since they are elliptical, the gradient points towards the steepest direction, which might be very far off if you consider points around the corner of a very long ellipse. To see the steepest directions, just draw an ellipse, pick some points on the boundary, and draw lines perpendicular to the boundary. You'll see that these directions can be unrelated to the vector pointing towards the center.
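To quantify how far off the gradient can be, take a stretched quadratic such as $f(x)=x_1^2+25x_2^2$ (a choice made here purely for illustration) and compare the steepest-descent direction at a boundary point with the direction pointing straight at the minimum:

```python
import numpy as np

# Steepest-descent direction vs. direction to the minimum (the origin)
# for the elongated quadratic f(x) = x1^2 + 25*x2^2.
x = np.array([1.0, 0.2])                       # a point on a long, thin ellipse
descent = -np.array([2.0 * x[0], 50.0 * x[1]]) # -gradient at x
to_center = -x

cos_angle = descent @ to_center / (np.linalg.norm(descent) * np.linalg.norm(to_center))
angle_deg = float(np.degrees(np.arccos(cos_angle)))
print(angle_deg)   # roughly 67 degrees: far from pointing at the minimum
```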
30,251 | How can t-SNE or UMAP embed new (test) data, given that they are nonparametric? | Great question. I will answer it using t-SNE because I assume it is familiar to more people. I think UMAP is very promising and is a great contribution but to be honest I am getting a little bit annoyed with all the marketing and the hype that surrounds it. People think that t-SNE cannot embed new points but UMAP miraculously can. In reality, t-SNE can do it just as well as UMAP can; it is just a matter of convenient implementation.
A figure to attract attention:
they are non-parametric, i.e. there is no easy straightforward way to embed new data
This is not quite correct. It is true that t-SNE is non-parametric. What this actually means is that t-SNE does not construct a function $f(x):\mathbb R^p\to \mathbb R^2$ that would map high-dimensional points $x$ down to 2D. Instead it positions all the points on a plane and lets them "interact": similar points attract each other and dissimilar points repel each other, and after a while similar points gather together in clusters. In practical implementations, each point only feels attraction from its nearest $k$ neighbours for some small value of $k$.
Now imagine you get a new point $x_\mathrm{test}$. There is no function $f()$ that would give you its 2D position as $f(x_\mathrm{test})$. However, you can put it somewhere in the existing t-SNE embedding and let it "interact" with all existing points: it will be attracted to the points most similar to it (its nearest neighbours) and repelled from all other points. Only this point is allowed to move, while all existing points remain in place. If everything works well, $x_\mathrm{test}$ will arrive at its place somewhere close to its nearest neighbours.
When actually doing it, it is very helpful to position it initially somewhere close to its nearest neighbours (e.g. mean location of its $k$ nearest neighbours), because this will make the convergence much faster and much more reliable. In fact, simply positioning it at the mean location of its $k$ nearest neighbours can already work so well that no further optimisation would be needed at all.
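That last shortcut is simple enough to sketch directly (pure NumPy; `X_train`/`Y_train` stand for the original high-dimensional data and its existing 2D embedding, and the brute-force neighbour search is just for illustration):

```python
import numpy as np

def embed_new_point(X_train, Y_train, x_new, k=10):
    """Place a new point at the mean embedding location of its k
    nearest neighbours in the original high-dimensional space."""
    dists = np.linalg.norm(X_train - x_new, axis=1)
    nearest = np.argsort(dists)[:k]
    return Y_train[nearest].mean(axis=0)
```

From here one could, as described above, keep optimising this single point's position against the fixed embedding; often the initial placement is already good enough.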
[As an aside: if one has a whole bunch of test points, then one can deal with them independently one by one, or try to embed them all together and let them interact between each other as well. This can have very different outcomes if all test points are similar to each other but dissimilar to the original points. In the former case the test points will be "forced" into the existing embedding. In the latter case they will gather together as a separate cluster.]
I know several biology papers that used some variation of this method. Berman 2014 and Macosko 2015 are two such examples. Here is a very nice and very fast recent Python implementation of t-SNE https://github.com/pavlin-policar/openTSNE that allows embedding of new points out of the box. To quote the documentation https://opentsne.readthedocs.io/en/latest/,
[t-SNE has had several criticisms over the years, one of which is that] t-SNE is nonparametric therefore it is impossible to add new samples to an existing embedding. This argument is often repeated and likely comes from the fact that most software packages simply did not take the time to implement this functionality. t-SNE is nonparametric meaning that it does not learn a function $f$ that projects samples from the ambient space into the embedding space. However, the objective function of t-SNE is well defined and new samples can easily be added into an existing embedding by taking a data point and optimizing its position with respect to the existing embedding. This is the only available implementation we know of that allows adding new points to an existing embedding.
The figure above is from https://github.com/berenslab/rna-seq-tsne/ which is a companion repository to this paper: https://www.nature.com/articles/s41467-019-13056-x.
Regarding UMAP, as you say, the math behind the test set embeddings is not explicitly described anywhere, but I am quite sure that this is what it does. Briefly looking at the source code seems to confirm it.
30,252 | How can t-SNE or UMAP embed new (test) data, given that they are nonparametric? | In addition to @amoeba's answer, here is what Laurens van der Maaten, the author of t-SNE (https://lvdmaaten.github.io/tsne/) suggests:
t-SNE learns a non-parametric mapping, which means that it does not learn an explicit function that maps data from the input space to the map. Therefore, it is not possible to embed test points in an existing map (although you could re-run t-SNE on the full dataset). A potential approach to deal with this would be to train a multivariate regressor to predict the map location from the input data. Alternatively, you could also make such a regressor minimize the t-SNE loss directly, which is what I did in this paper.
30,253 | Sampling from Skew Normal Distribution | The most direct way of simulating a random variable from a distribution with cdf $F$ is to first simulate a Uniform variate $U\sim\mathcal{U}(0,1)$ and second return the inverse cdf transform $F^{-1}(U)$. When the inverse $F^{-1}$ is not available in closed form, a numerical inversion can be used. Numerical inversion may however be costly, especially in the tails.
One can also use accept-reject algorithms when the density $f$ is available and dominated by another density $g$, i.e., when there exists a constant $M$ such that $$f(x)<M g(x).$$ In the case of the skew-Normal distribution, the density is $$f(x)=2\varphi(x)\Phi(\alpha x),$$ where $\varphi$ and $\Phi$ are the pdf and cdf of the standard Normal distribution, respectively. (Adding a location and a scale parameter does not modify the algorithm, since the outcome simply needs to be rescaled and translated.)
This density seems ideally suited for accept-reject since$$2\varphi(x)\Phi(\alpha x)< 2\varphi(x)$$as $\Phi$ is a cdf. This inequality implies that a first option to run accept-reject is to pick the Normal pdf for $g$ and $M=2$. This works out, as shown by the following picture when $\alpha=-3$:
and leads to an algorithm of the kind
alpha=-3 #skewness parameter, as in the picture above
T=1e3    #number of simulations
x=NULL
while (length(x)<T){ #accept-reject with a N(0,1) proposal and M=2
  y=rnorm(2*T)
  x=c(x,y[runif(2*T)<pnorm(alpha*y)])} #accept y with probability pnorm(alpha*y)
x=x[1:T]
which returns a reasonable fit of the pdf by the histogram:
There are however transforms of standard distributions that result in skew Normal variates: If $X_1,X_2$ are iid $\mathcal{N}(0,1)$, then
$$\dfrac{\alpha|X_1|+X_2}{\sqrt{1+\alpha^2}}$$
$$\dfrac{1+\alpha}{\sqrt{2(1+\alpha^2)}}\max\{X_1,X_2\}+\dfrac{1-\alpha}{\sqrt{2(1+\alpha^2)}}\min\{X_1,X_2\}$$
are skew Normal variates with parameter $\alpha$. (Both representations are identical when considering that $(X_1+X_2,X_1-X_2)/\sqrt{2}$ is an iid $\mathcal{N}(0,1)$ pair.)
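The first transform can be checked numerically. A Python sketch (the closed-form mean $\sqrt{2/\pi}\,\alpha/\sqrt{1+\alpha^2}$ is a standard property of the skew Normal, used here only as a sanity check):

```python
import math
import random

random.seed(0)

def rskewnorm(alpha):
    # (alpha*|X1| + X2)/sqrt(1 + alpha^2) with X1, X2 iid standard Normal
    x1, x2 = random.gauss(0, 1), random.gauss(0, 1)
    return (alpha * abs(x1) + x2) / math.sqrt(1 + alpha**2)

alpha = -3
draws = [rskewnorm(alpha) for _ in range(100_000)]
mean = sum(draws) / len(draws)
# Known mean of the skew Normal with parameter alpha, for validation only:
expected = math.sqrt(2 / math.pi) * alpha / math.sqrt(1 + alpha**2)
```

Unlike accept-reject, this uses exactly two Normal draws per skew-Normal variate, with no rejections.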
30,254 | Sampling from Skew Normal Distribution | I stumbled over this via googling. Rejection sampling is not needed. Instead, it is sufficient to flip the sign if the sample would be rejected!
This is because we can use that $\Phi(-ax)+\Phi(ax)=1$ and thus
$(f(x)+f(-x))/2= \phi(x) \Phi(ax)+ \phi(-x) \Phi(-ax)= \phi(x)$
Therefore, we can sample a skew-normal random variable by first sampling a standard normal variable $x$ and then flipping its sign with probability $1-\Phi(ax)=\Phi(-ax)$. Beautiful!
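In Python the sign-flip sampler can be sketched as follows ($\Phi$ is computed from math.erf; the comparison with the known skew-Normal mean is only a sanity check):

```python
import math
import random

random.seed(1)

def Phi(x):
    # standard Normal cdf via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def rskewnorm_flip(a):
    x = random.gauss(0, 1)
    # Flip the sign with probability Phi(-a*x); the output then has
    # density 2*phi(x)*Phi(a*x), the skew-Normal density.
    if random.random() < Phi(-a * x):
        x = -x
    return x

a = 3
draws = [rskewnorm_flip(a) for _ in range(100_000)]
mean = sum(draws) / len(draws)
# Known mean of the skew Normal with parameter a, for validation only:
expected = math.sqrt(2 / math.pi) * a / math.sqrt(1 + a**2)
```

Every Normal draw is used (flipped or not), so nothing is wasted.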
30,255 | Sampling from Skew Normal Distribution | So the accepted answer uses rejection sampling and is very slow. The second answer sounds like a nice idea, but I don't follow the logic, and when I implemented it, the histograms did not match the ground truth PDFs.
However, SciPy has implemented this for the univariate case, and their code references the relevant paper to extend this to multivariate settings: Statistical applications of the multivariate skew-normal distribution.
Here is some code in Python:
import numpy as np
from scipy.stats import multivariate_normal as mvn

def sample(shape, cov, size=1):
    # Draw `size` samples from a multivariate skew normal with shape
    # (skewness) vector `shape` and covariance matrix `cov`.
    dim = len(shape)
    assert cov.shape == (dim, dim)
    aCa = shape @ cov @ shape
    delta = (1 / np.sqrt(1 + aCa)) * cov @ shape
    # Augment the covariance with one extra coordinate that drives the skew.
    cov_star = np.block([[np.ones(1), delta], [delta[:, None], cov]])
    x = mvn(np.zeros(dim + 1), cov_star).rvs(size).reshape(size, dim + 1)
    x0, x1 = x[:, 0], x[:, 1:]
    # Reflect the samples whose auxiliary coordinate came out negative.
    inds = x0 <= 0
    x1[inds] = -1 * x1[inds]
    return x1
which I used to generate these histograms with ground-truth PDFs in white. I have more details on my blog.
30,256 | How is the augmented Dickey–Fuller test (ADF) table of critical values calculated? | I am not sure an easy answer is possible here.
As can be found in many textbooks, the limiting null distribution of the "Dickey-Fuller t-statistic" is that of a nonstandard random variable which may be expressed through a functional of a Brownian motion $W$.
Denote by $\hat{\rho}_T$ the OLS estimate of a regression of $y_t$ on $y_{t-1}$, and by $t_T$ the standard t-ratio for the null that $\rho=1$.
In the simplest case without constant or trend in the test regression, we have
\begin{eqnarray*}
t_T&=&\frac{\hat{\rho}_T-1}{s.e.(\hat{\rho}_T)}\\&=&\frac{T(\hat{\rho}_T-1)}{\{s^2_T\}^{1/2}}\left\{T^{-2}\sum_{t=1}^Ty_{t-1}^2\right\}^{1/2}\\
&\Rightarrow&\frac{1/2\{W(1)^2-1\}}{\int_0^1W(r)^2dr}\frac{1}{\sigma}\left\{\sigma^2\int_0^1W(r)^2dr\right\}^{1/2}\\
&=&\frac{W(1)^2-1}{2 \left\{\int_0^1W(r)^2dr\right\}^{1/2}}
\end{eqnarray*}
This random variable has no easy expression for its density or cdf, but can be simulated, noting that, for a suitable distribution of $u$ like the standard normal, $1/\sqrt{T}\sum_{t=1}^{[sT]}u_t$ will behave like $W(s)$ for $T$ "large", where $[sT]$ denotes the integer part of $sT$.
Hence, the DF distribution can be simulated as follows:
T <- 5000
reps <- 50000
DFstats <- rep(NA,reps)
for (i in 1:reps){
u <- rnorm(T)
W <- 1/sqrt(T)*cumsum(u)
DFstats[i] <- (W[T]^2-1)/(2*sqrt(mean(W^2)))
}
The resulting (magenta) simulated (kernel-density estimated) distribution is given here, with the green standard normal for comparison:
DFdensity <- density(DFstats)   #kernel density estimate, reused for the shading below
plot(DFdensity,lwd=2,col=c("deeppink2"))
xax <- seq(-4,4,by=.1)
lines(xax,dnorm(xax),lwd=2,col=c("chartreuse4"))
We see that the DF distribution is shifted to the left relative to the standard normal, and skewed.
The critical values are then, as usual, the quantiles of the null distribution of the test statistic, and can be retrieved via (note that we - typically - perform the DF test against the left-tailed alternative of stationarity that $|\rho|<1$)
CriticalValues <- sort(DFstats)[c(0.01,0.05,0.1)*reps]
> CriticalValues
[1] -2.571179 -1.943025 -1.611253
That is, these are the values such that 1%, 5% or 10% of the probability mass is to the left (=the rejection region) of them, producing the desired rejection rates.
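For readers without R, the same simulation can be sketched in pure Python (smaller $T$ and fewer replications than the R run above, so the simulated quantiles are rougher):

```python
import math
import random

random.seed(42)
T, reps = 400, 4000  # smaller than the R run above, to keep this quick
stats = []
for _ in range(reps):
    w = 0.0    # running value of the scaled partial sum (approximates W)
    ssq = 0.0  # running sum of W^2 over the grid, for mean(W^2)
    for _ in range(T):
        w += random.gauss(0, 1) / math.sqrt(T)
        ssq += w * w
    stats.append((w * w - 1.0) / (2.0 * math.sqrt(ssq / T)))
stats.sort()
crit = [stats[int(q * reps)] for q in (0.01, 0.05, 0.10)]
# crit should come out close to the tabulated -2.57, -1.94, -1.61
```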
Graphically (zooming in on the more relevant left part of the distribution):
plot(DFdensity,lwd=2,col=c("deeppink2"), xlim=c(-4,0))
xshade1 <- DFdensity$x[DFdensity$x <= CriticalValues[1]]
yshade1 <- DFdensity$y[DFdensity$x <= CriticalValues[1]]
polygon(c(xshade1[1],xshade1,CriticalValues[1]),c(0,yshade1,0),col="deeppink2", border = "deeppink2", lwd=2)
xshade2 <- DFdensity$x[DFdensity$x > CriticalValues[1] & DFdensity$x <= CriticalValues[2]]
yshade2 <- DFdensity$y[DFdensity$x > CriticalValues[1] & DFdensity$x <= CriticalValues[2]]
polygon(c(xshade2[1],xshade2,CriticalValues[2]),c(0,yshade2,0),col="deeppink4", border = "deeppink4", lwd=2)
xshade3 <- DFdensity$x[DFdensity$x > CriticalValues[2] & DFdensity$x <= CriticalValues[3]]
yshade3 <- DFdensity$y[DFdensity$x > CriticalValues[2] & DFdensity$x <= CriticalValues[3]]
polygon(c(xshade3[1],xshade3,CriticalValues[3]),c(0,yshade3,0),col="darkorchid4", border = "darkorchid4", lwd=2)
Given that these are simulated critical values, these may thus differ slightly from those reported in published tables.
EDIT: In response to the comment below, for the case with constant we would modify the code as follows, following Dickey-Fuller unit root test with no trend and supressed constant in Stata
for (i in 1:reps){
u <- rnorm(T)
W <- 1/sqrt(T)*cumsum(u)
W_mu <- W - mean(W)
DFstats[i] <- (W_mu[T]^2-W_mu[1]^2-1)/(2*sqrt(mean(W_mu^2)))
}
In particular, the line W_mu <- W - mean(W) creates the demeaned Brownian motion $W^\mu(r)=W(r)-\int W(s)ds$. Its first element, W_mu[1]^2, corresponds to the initial entry of that demeaned Brownian motion, $W^\mu(0)^2$, from the link.
The link also deals with the expression for the trend case, which may therefore be simulated via
s <- seq(0,1,length.out = T)
for (i in 1:reps){
u <- rnorm(T)
W <- 1/sqrt(T)*cumsum(u)
W_tau <- W - (4-6*s)*mean(W) - (12*s-6)*mean(s*W)
DFstats[i] <- (W_tau[T]^2-W_tau[1]^2-1)/(2*sqrt(mean(W_tau^2)))
}
(CriticalValues <- sort(DFstats)[c(0.01,0.05,0.1)*reps])
30,257 | What are the error distribution and link functions of a model family in R? | You don't specify the "error" distribution, you specify the conditional distribution of the response.
When you type the name of the family (such as binomial), that specifies the conditional distribution to be binomial, and it implies the variance function (e.g. in the case of the binomial it is $\mu(1-\mu)$). If you choose a different family you get a different variance function (for Poisson it's $\mu$, for Gamma it's $\mu^2$, for Gaussian it's constant, for inverse Gaussian it's $\mu^3$, and so on).
[For some cases (e.g. logistic regression) you can take a latent-variable approach to the GLM - and in that case, you might possibly regard the distribution of the latent variable as a form of "error distribution".]
The link function determines how the mean ($\mu$) and the linear predictor ($\eta=X\beta$) are related. Specifically, if $\eta=g(\mu)$ then $g$ is called the link function.
You can find tables of the variance functions and the canonical link functions (which have some convenient properties) for commonly-used members of the exponential class in many standard books as well as all over the place on the internet. Here's a small one:
\begin{array}{lcll}
\textit{Family} & \textit{ Variance fn } & \textit{Canonical link function } & \textit{Other common links } \\
\hline
\text{Gaussian} & \text{constant} &\:\:\:\: \mu\qquad\qquad \text{(identity)} & \\
\text{Binomial} &\: \mu(1-\mu) & \log(\frac{\mu}{1-\mu})\;\qquad \:\:\:\,\text{(logit)} & \text{probit, cloglog} \\
\text{Poisson} &\: \mu &\: \log(\mu)\qquad\qquad\:\:\, \text{(log)} & \text{identity} \\
\text{Gamma} &\: \mu^2 &\:\: 1/\mu\quad\:\:\:\qquad \text{(inverse)} & \log \\
\text{Inverse Gaussian} &\: \mu^3 &\:\: 1/\mu^2 & \log
\end{array}
(R implements these in fairly typical fashion, and in the cases mentioned above will use the canonical link if you don't specify one)
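The table can also be transcribed into code for quick lookup; here is a Python sketch (these callables are purely illustrative, not any library's API):

```python
import math

# Variance functions V(mu) and canonical links g(mu) from the table above.
variance = {
    "gaussian":         lambda mu: 1.0,          # constant
    "binomial":         lambda mu: mu * (1 - mu),
    "poisson":          lambda mu: mu,
    "gamma":            lambda mu: mu**2,
    "inverse_gaussian": lambda mu: mu**3,
}
canonical_link = {
    "gaussian":         lambda mu: mu,                       # identity
    "binomial":         lambda mu: math.log(mu / (1 - mu)),  # logit
    "poisson":          lambda mu: math.log(mu),             # log
    "gamma":            lambda mu: 1 / mu,                   # inverse
    "inverse_gaussian": lambda mu: 1 / mu**2,
}
```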
30,258 | What are the error distribution and link functions of a model family in R? | In R, if you read the documentation for the function ?family, you will see the default links in a list at the top:
Usage
family(object, ...)
binomial(link = "logit")
gaussian(link = "identity")
Gamma(link = "inverse")
inverse.gaussian(link = "1/mu^2")
poisson(link = "log")
quasi(link = "identity", variance = "constant")
quasibinomial(link = "logit")
quasipoisson(link = "log")
You might notice that the default links tend to be the canonical links for the various distributions. However, you can specify alternative links (e.g., family=binomial(link="probit")), if you prefer. Any function that maps the range of the parameter being fitted (e.g., for logistic regression $\pi_i \in (0, 1)$) to the possible range of the model's right hand side (always $(-\infty, \infty)$) can be acceptable. In fact, you can use a function that doesn't meet this standard so long as the data in your sample don't cause the fitted parameter to go outside of the acceptable range. (For instance, people sometimes use the identity function as their link with count data, or with proportions—e.g., polling results—when the fitted values aren't near the bounds.)
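For instance, both the logit and the probit map $(0, 1)$ onto $(-\infty, \infty)$, which is what makes them admissible links for a probability. A quick Python check (using the standard library's NormalDist for the probit):

```python
import math
from statistics import NormalDist

nd = NormalDist()

def logit(p):
    # maps a probability in (0, 1) onto the whole real line
    return math.log(p / (1 - p))

def probit(p):
    # inverse standard-normal cdf; also maps (0, 1) onto the real line
    return nd.inv_cdf(p)

z_logit = logit(0.999)    # probabilities near 1 map to large positive values
z_probit = probit(0.001)  # probabilities near 0 map to large negative values
p_back = 1 / (1 + math.exp(-logit(0.3)))  # the inverse logit recovers 0.3
```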
I suspect you would benefit from an overview of the generalized linear model and link functions. It may help you to read my answer here: Difference between logit and probit models, which ends up doing some of that even though it was written in a different context. You can also peruse some of the threads categorized under the link-function tag.
30,259 | What are some reasons iteratively reweighted least squares would not converge when used for logistic regression? | In case the two classes are separable, iteratively reweighted least squares (IRLS) would break. In such a scenario, any hyperplane that separates the two classes is a solution and there are infinitely many of them. IRLS is meant to find a maximum likelihood solution. Maximum likelihood does not have a mechanism to favor any of these solutions over the others (e.g. no concept of maximum margin). Depending on the initialization, IRLS should go toward one of these solutions and would break due to numerical problems (don't know the details of IRLS; an educated guess).
Another problem arises in the case of linear separability of the training data: any of the hyperplane solutions corresponds to a Heaviside function, so all the fitted probabilities are either 0 or 1. The logistic regression solution would then be a hard classifier rather than a probabilistic classifier.
To clarify using mathematical notation, the Heaviside function is $\lim_{|\mathbf{w}| \rightarrow \infty}\sigma(\mathbf{w}^T x + b)$, the limit of the sigmoid function, where $\sigma$ is the sigmoid function and $(\mathbf{w}, b)$ determines the hyperplane solution. So IRLS theoretically does not stop and goes toward a $\mathbf{w}$ with increasing magnitude, but would break in practice due to numerical problems.
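The divergence under separability is easy to reproduce. The following Python sketch uses plain gradient ascent on the log-likelihood rather than IRLS, and a made-up toy data set, but the mechanism is the same: the likelihood keeps improving as $|\mathbf{w}|$ grows, so there is no finite maximizer.

```python
import math

# Perfectly separable 1-D data: all x < 0 are class 0, all x > 0 are class 1.
xs = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
ys = [0, 0, 0, 1, 1, 1]

def fit_logistic(steps, lr=0.5):
    w = 0.0  # single slope; no intercept needed for this symmetric toy set
    for _ in range(steps):
        # Gradient of the log-likelihood: sum of (y - sigmoid(w*x)) * x.
        grad = sum((y - 1.0 / (1.0 + math.exp(-w * x))) * x
                   for x, y in zip(xs, ys))
        w += lr * grad
    return w

# The slope keeps growing with more iterations: no finite maximizer exists.
w_short, w_long = fit_logistic(200), fit_logistic(2000)
```

In finite precision the fitted probabilities saturate at 0 and 1, which is where IRLS runs into numerical trouble.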
30,260 | What are some reasons iteratively reweighted least squares would not converge when used for logistic regression? | On top of linear separation (in which the MLE is at the boundary of the parameter space), the Fisher Scoring procedure in R is not completely numerically stable. It takes steps of fixed size, which in certain pathological cases can lead to non-convergence (when the true MLE is indeed an interior point).
For example,
y <- c(1,1,1,0)
x <- rep(1,4)   #intercept-only design, so the MLE is logit(3/4)
fit1 <- glm.fit(x,y, family=binomial(link="logit"),start=-1.81)
yields a coefficient of $2 \times 10^{15}$ rather than the expected logit$(3/4) \approx 1.0986$.
The CRAN package glm2 provides a drop-in replacement for glm.fit that adjusts step size to ensure monotone convergence.
30,261 | Proof / derivation of skewness and kurtosis formulas | You should not expect proof, since skewness and kurtosis are somewhat vague notions.
Symmetry is mathematically precise, but skewness by contrast is surprisingly slippery. Kurtosis is perhaps even more so.
There have been a fairly large number of attempts to give measures corresponding to these notions, but these often-useful measures can be surprisingly counter-intuitive at times. For example, the moment-based skewness can be zero when the distribution is asymmetric (contradicting an assertion one can surprisingly often find when reading elementary texts which discuss skewness).
Can anyone explain to me where the formula of skewness or kurtosis comes from?
Both skewness and kurtosis are somewhat vague terms with several different measures.
These days people mostly mean the moment-based measures, based on standardized third and fourth moments respectively.
Some history
The term "skewness" as applied to a probability distribution seems from an initial look to originate with Karl Pearson, 1895$^{\text{[1]}}$. He begins by talking about asymmetry.
The term "kurtosis" as applied to a probability distribution seems to also originate with Karl Pearson, 1905$^{\text{[2]}}$.
Pearson has formulas for the moment-kurtosis and the square of the moment skewness ($\beta_2$ and $\beta_1$) in his 1895 paper, and they're being used in some sense to help describe shape, even though the notion of kurtosis is not particularly developed there.
However, the idea that higher (standardized) moments than the second can be thought of as some measure of shape or at least of deviation from normality appears to be older than this.
Note, in particular, the historical information contained in Nick Cox's article$^{\text{[3]}}$, here, which makes clear we should give much of the priority to Thiele (1889)$^{\text{[4]}}$.
However, there are other measures than the moment-based quantities. For example, in the case of skewness, there's Pearson's first and second skewness coefficients, which are based on the simple notion of scaling the difference between the mean-and-mode and the mean-and-median respectively. (I think these also date to the 1895 paper but I haven't checked this.)
These different measures relating to the same underlying notion can suggest quite different things (they can even be opposite in sign); e.g. see here. A reason to be cautious about over-interpreting the moment-kurtosis can be seen here.
Edit: Additional information --
It seems that the entries on skewness and kurtosis in Earliest Known Uses of Some of the Words of Mathematics, by Miller et al.$^{[5]}$ agree with me that they originate with Karl Pearson in 1895 and 1905 respectively.
Nice to have some degree of confirmation.
What I mean to say is: why is skewness defined as the third central moment, and not the fifth or any other number? What's the logic behind it?
Okay, so we're specifically dealing with moment skewness.
First, why the third moment makes some sense. Let's begin by thinking of skewness in a somewhat intuitive way rather than rely on a formal definition and see what it might imply.
Recall that $\sigma$ represents a kind of "typical" distance of observations from the mean, and consider what happens when we take a symmetric-looking distribution and make it more right-skew (while trying to keep the area, $\mu$ and $\sigma$ constant):
Here we divide up the axis into four regions - roughly placed at more than one standard deviation below the mean, less than a standard deviation below the mean, less than a standard deviation above the mean and more than a standard deviation above the mean - sections $A$ to $D$ respectively.
For a slight increase in skewness, we tend to see relatively more of the probability immediately to the left of the mean (region $B$) and far above the mean (region $D$), while seeing less immediately above the mean (region $C$) and far below (region $A$).
Indeed, by trying to keep the area constant, if we lift the far tail (the right end of $D$), we're forced to have less probability elsewhere. But if we want to keep the mean constant we must have more probability somewhere lower down. If we put the compensating probability into "$A$" (lowering $B$ and $C$) we could keep the area and mean constant, but we'd end up with a symmetric change (effectively, we would be increasing the variance, not making it more skew).
We can't lift $C$ because lifting both $C$ and $D$ would shift the mean up.
So if we lift the right tail ($D$), while holding the area, mean and standard deviation constant, we can lift the probability in $B$ as well, and reduce the other two -- if we get the relative amounts and positions right within those regions. Keeping $\sigma$ nearly constant constrains what we do more than I have really described (the diagram is slightly inaccurate and as drawn suggests an increase in $\sigma$).
So to make it look a little more right skew, we would tend to shift as described in roughly those areas.
But what does that imply if we try to construct a simple moment-based measure? Note that with third central moments, more area in $A$ or $B$ will tend to reduce it while more area in $C$ or $D$ would tend to increase it (other things being equal), but as we saw, we can't add area without taking it away somewhere else. Cubing things above 1 pulls them out more than cubing things below 1 can "pull them in", so if we add to $D$ while taking away from $C$, the third moment will still increase. Similarly if we add to $B$ while taking away from $A$, the third moment will again tend to increase. That is, the rough-sense "increase in skewness" we just arrived at seems to correspond quite nicely to the third moment.
Now this discussion doesn't rule out fifth and higher moments at all - indeed an increase in third moment will also tend to increase the fifth (unless you do it very carefully indeed), but the (standardized) third moment will be about the simplest way of capturing the notion of skewness using a moment-based measure; while fifth moments are more complex and can move in ways that don't capture our sense of skewness as well as the third moment does.
The third standardized moment doesn't correspond perfectly to our sense of skewness, but it's a pretty good, simple measure that does mostly correspond to it.
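As a concrete illustration (a Python sketch with made-up data; the helper name moment_skewness is mine), the sample version of the standardized third moment, $g_1 = m_3/m_2^{3/2}$, is zero for a symmetric sample and turns positive as soon as a long right tail is introduced:

```python
def moment_skewness(data):
    """Sample standardized third moment g1 = m3 / m2**1.5."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n  # second central moment
    m3 = sum((x - mean) ** 3 for x in data) / n  # third central moment
    return m3 / m2 ** 1.5

print(moment_skewness([1, 2, 3, 4, 5]))    # 0.0 (symmetric sample)
print(moment_skewness([1, 2, 3, 4, 100]))  # ~1.5 (long right tail pulls g1 positive)
```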
The only explanation I found was that since the mean is the center of the data, raising the deviations to an odd power would cancel the terms, and the final answer will be zero if the data distribution is symmetric and non-zero if it is not symmetric. Is this true?
1) In spite of such comments being very easy to find in elementary treatments**, strictly speaking, in respect of probability distributions, neither part is actually true.
a) It's possible for a distribution to be symmetric but not have zero third moment. A simple counterexample is any t-distribution with 3 or fewer degrees of freedom, whose third moment is undefined rather than zero (the defining integral does not converge absolutely). In samples, however, symmetry implies zero third moment - but samples are almost never perfectly symmetric, so it's not much use there either.
b) It's possible for an asymmetric distribution to have zero third moment.
So symmetry doesn't necessarily imply zero third moment and zero third moment doesn't necessarily imply symmetry.
2) In any case, that wouldn't explain "why third moment rather than fifth", since the fifth power would be just as odd as the third.
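Point (b) above is easy to verify numerically. The three-point distribution below (support points and weights chosen purely for this illustration) is plainly asymmetric, yet its mean and third central moment are both zero, so its moment skewness is zero:

```python
# P(X = -2) = 0.4, P(X = 1) = 0.5, P(X = 3) = 0.1: clearly asymmetric.
xs = [-2.0, 1.0, 3.0]
ps = [0.4, 0.5, 0.1]

mean = sum(p * x for p, x in zip(ps, xs))              # 0
var = sum(p * (x - mean) ** 2 for p, x in zip(ps, xs)) # 3
m3 = sum(p * (x - mean) ** 3 for p, x in zip(ps, xs))  # 0
skew = m3 / var ** 1.5

print(mean, m3, skew)  # all numerically zero despite the asymmetry
```

(One can check by hand: $-8(0.4) + 1(0.5) + 27(0.1) = 0$, so the third central moment vanishes exactly.)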
** (indeed since I am often asked whether I'd recommend a particular book, the second part (i.e. a claim that $\gamma_1=0$ implies symmetry) is one of several 'quick tests' I use when evaluating whether an elementary book is worth examining more closely -- if a text stumbles over two or more of the common basic errors I tend to see, I don't bother wasting further time looking.)
References
[1]: Pearson, K. (1895),
"Contributions to the Mathematical Theory of Evolution, II: Skew Variation in Homogeneous Material,"
Philosophical Transactions of the Royal Society, Series A, 186, 343-414
[Out of copyright. Freely available here]
[2]: Pearson, K. (1905),
"Das Fehlergesetz und Seine Verallgemeinerungen Durch Fechner und Pearson.", a rejoinder (Skew variation, a rejoinder),
Biometrika, 4 (1905), pp. 169–212.
[While this is also well out of copyright (I can find copies of a Biometrika from 3 years later, for example), I can't locate a copy of this I can link you to. Oxford Journals wants to charge $38 for one day of access to something that is long out of copyright. If you don't have institutional access to one of the places that supply access to it you may be out of luck on this one.]
[3]: Cox, N. J. (2010),
"Speaking Stata: The limits of sample skewness and kurtosis",
The Stata Journal, 10, Number 3, pp. 482–495
(available online here)
[4]: Thiele, T. N. (1889),
Forlæsinger over Almindelig Iagttagelseslære: Sandsynlighedsregning og Mindste Kvadraters Methode,
Copenhagen: C. A. Reitzel.
[Out of copyright. There's an English translation - see the Thiele reference in [3].]
[5]: Miller, Jeff; et al., (Accessed 16 August 2014),
Earliest Known Uses of Some of the Words of Mathematics,
See here
30,262 | Complete machine learning library for Java/Scala [closed]
You may find helpful this extensive curated list of ML libraries, frameworks and software tools. In particular, it contains the resources you're looking for - ML lists for Java and for Scala.
30,263 | Complete machine learning library for Java/Scala [closed]
Apache Spark, and specifically its component MLlib, looks like exactly what you are looking for. MLlib contains implementations for classification, regression, dimensionality reduction, etc. You can program in Scala, Java, and Python.
It's basically a very fast distributed computing framework that can run on a Hadoop cluster. For development purposes, you can easily run it in standalone mode (without Hadoop) on your local machine too.
Check out the MLlib guide here: https://spark.apache.org/docs/latest/mllib-guide.html
30,264 | Complete machine learning library for Java/Scala [closed]
Have a look at JavaML (http://java-ml.sourceforge.net/) and Encog (http://www.heatonresearch.com/encog). The latter focuses on neural networks rather than offering many algorithms.
Also, Weka might not have a very friendly Java API (because, first of all, it's a GUI application, not a library), but once you get used to it, you start appreciating how many things are implemented there.
I have used all of them successfully.
30,265 | Power of the t-test under unequal sample sizes
(Note, by $n$, I usually mean the total sample size, so I interpret your last sentence to be 'where $0.5n$ equals the size of the smaller sample'.)
No, not quite. Consider this simulation (conducted with R):
set.seed(9)
power1010 = vector(length=10000)
power9010 = vector(length=10000)
for(i in 1:10000){
  n1a = rnorm(10, mean=0, sd=1)
  n2a = rnorm(10, mean=.5, sd=1)
  n1c = rnorm(90, mean=0, sd=1)
  n2c = rnorm(10, mean=.5, sd=1)
  power1010[i] = t.test(n1a, n2a, var.equal=T)$p.value
  power9010[i] = t.test(n1c, n2c, var.equal=T)$p.value
}
mean(power1010<.05)
[1] 0.184
mean(power9010<.05)
[1] 0.323
What we see here is that when the total sample size is $20$, with equal group sizes, $n_1=n_2=10$, power is $18\%$; but when the total sample size is $100$, but the smaller group has $n_2=10$, power is $32\%$. Thus power can increase when the size of the larger group goes up even though the smaller sample size stays the same.
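Those simulated numbers can be cross-checked analytically. The sketch below (in Python, using the normal approximation to the t-test, which slightly overstates power at these small sample sizes; the helper names are mine) evaluates power $\approx \Phi\!\left(\delta/\sqrt{1/n_1+1/n_2} - z_{0.975}\right)$ with $\delta = 0.5$ and $\sigma = 1$:

```python
import math

def norm_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def approx_power(n1, n2, delta=0.5, z_crit=1.959964):
    # normal approximation to two-sample t-test power (sigma = 1,
    # two-sided test at the 5% level, z_crit = qnorm(0.975))
    ncp = delta / math.sqrt(1.0 / n1 + 1.0 / n2)
    return norm_cdf(ncp - z_crit)

print(approx_power(10, 10))  # ~0.20, near the simulated 0.184
print(approx_power(90, 10))  # ~0.32, near the simulated 0.323
```

The approximation reproduces the qualitative result exactly: enlarging only the bigger group raises power, because $\sqrt{1/n_1 + 1/n_2}$ still shrinks as $n_2$ grows.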
This answer is adapted from my answer here: How should one interpret the comparison of means from different sample sizes?, which you will probably want to read for more on this topic.
30,266 | Power of the t-test under unequal sample sizes
To understand the comment, consider the effect of letting the second sample get larger and larger while the first stays constant in size. Eventually, the sample mean for the second sample converges to the population mean it was drawn from, and the standard error of the mean becomes zero. If you examine the ordinary two-sample test statistic
$$t = \frac{\bar {X}_1 - \bar{X}_2}{S_p \cdot \sqrt{\frac{1}{n_1}+\frac{1}{n_2}}}$$
then as $n_2$ gets large, $\bar{X}_2$ goes to $\mu_2$, and the term in the square root goes to $1/n_1$.
Now look at $S_p$:
$$S_p = \sqrt{\frac{(n_1-1)S_{X_1}^2+(n_2-1)S_{X_2}^2}{n_1+n_2-2}}$$
Let $w_1 = \frac{n_1-1}{(n_1-1)+(n_2-1)}$, and similarly for $w_2$; then
$$S_p = \sqrt{w_1 S_{X_1}^2+w_2 S_{X_2}^2}$$
As $n_2$ gets large, $w_2$ goes to 1, while $S_{X_2}$ goes to $\sigma_2$; $S_p$ becomes $\sigma_2$.
So what are we left with? The statistic now looks like this:
$$ \frac{\bar {X}_1 - \mu_2}{\sigma_2 / \sqrt{n_1}}$$
An ordinary one-sample z-test ... whose power is a function of $n_1$.
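This limit is easy to check numerically. In the Python sketch below (illustrative summary values, not taken from the answer), sample 2's mean and standard deviation are held at their large-$n_2$ limits $\mu_2$ and $\sigma_2$ while $n_2$ grows; the pooled two-sample statistic approaches the one-sample $z$ statistic for sample 1:

```python
import math

# Fixed summaries for sample 1, and the limiting values for sample 2
# (all numbers here are hypothetical, chosen only for illustration).
xbar1, s1, n1 = 0.3, 1.2, 10
mu2, sigma2 = 0.0, 1.0

def pooled_t(xbar2, s2, n2):
    # ordinary equal-variance two-sample t statistic
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (xbar1 - xbar2) / (sp * math.sqrt(1.0 / n1 + 1.0 / n2))

z_limit = (xbar1 - mu2) / (sigma2 / math.sqrt(n1))
for n2 in (10, 100, 10000, 1000000):
    print(n2, pooled_t(mu2, sigma2, n2))
print("one-sample z:", z_limit)  # 0.3 * sqrt(10) ~ 0.9487
```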
You might find it instructive to consider the same calculation in the case of the Welch t-test (I believe you get a one-sample t statistic).
30,267 | How should you express a negative binomial distribution in an exponential family form?
Warning: The negative binomial distribution has several alternative formulations for which the formulas below change.
A distribution $f(x;\theta)$ belongs to an exponential family if it can be represented in the form:
$$
f(x;\theta)=h(x)\exp\left[\eta(\theta)T(x)-A(\theta)\right]
$$
The value $\eta$ is called the canonical (natural) parameter of the family, $T(x)$ is a sufficient statistic for $\theta$, $A(\theta)$ is called the log-partition function (it's a normalization factor, sometimes called the log-normalizer or cumulant generating function), and $h(x)$ is an arbitrary function called the base measure or carrier measure, which is 1 in many cases (e.g. exponential distribution, gamma distribution, Bernoulli distribution, ...). The negative binomial distribution with known parameter $r$ (if $r$ is unknown, the negative binomial family is not an exponential family) has the following probability mass function:
$$
f(k;r,p)=\binom{k+r-1}{k}(1-p)^{r}p^k~~~~~\text{for}~k=0,1,2,\ldots
$$
Then it can be rewritten in exponential form as:
$$
\begin{align}
f(k;r,p) &=\binom{k+r-1}{k}\exp\left[\ln(p^{k}(1-p)^{r})\right] \\
&=\binom{k+r-1}{k}\exp\left[k\ln(p) + r\ln(1-p)\right] \\
\end{align}
$$
So the parameter $\theta$ of the distribution is $p$ (i.e. $\theta=p$) and the natural parameter is $\eta=\ln(p)$, the sufficient statistic for $p$ is $T(k)=k$ (i.e. $T(k)=\sum X_{i}$, the sample sum), the log-partition function is $A(\eta)=-r\ln(1-p)=-r\ln(1-e^{\eta})$ and $h(k)=\binom{k+r-1}{k}$. See for example the Wikipedia page for a nice overview of the theory and many common distributions. Nice references can also be found here and here.
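A quick numerical sanity check of this factorization (a Python sketch with illustrative values $r=3$, $p=0.4$, not part of the original answer): evaluating $h(k)\exp[\eta k - A(\eta)]$ with $\eta=\ln p$ reproduces the pmf $\binom{k+r-1}{k}(1-p)^r p^k$ term by term.

```python
import math

r, p = 3, 0.4
eta = math.log(p)                       # natural parameter eta = ln(p)
A = -r * math.log(1.0 - math.exp(eta))  # log-partition A(eta) = -r ln(1 - e^eta)

max_err = 0.0
for k in range(20):
    h = math.comb(k + r - 1, k)          # base measure h(k)
    pmf = h * (1.0 - p) ** r * p ** k    # negative binomial pmf
    expfam = h * math.exp(eta * k - A)   # exponential-family form
    max_err = max(max_err, abs(pmf - expfam))

print(max_err)  # tiny: floating-point roundoff only
```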
30,268 | Change point analysis | There are 3 main functions in the changepoint package, cpt.mean, cpt.var and cpt.meanvar. As a practitioner these are the only functions in the package that you should need. If you think that your data may contain a change in mean then you use the cpt.mean function, etc.
The next question you should ask yourself is whether you are looking for a single or multiple changes within your data. The method argument handles this: there is AMOC for At Most One Change, and PELT, BinSeg and SegNeigh for multiple changes. Which multiple changepoint method you want to use depends on:
a) Your choice of distribution / distribution-free method (see below) and
b) How much time you have / how accurate you want your answer to be.
The BinSeg is quick but approximate, PELT is exact and quick but cannot be used in all distributions, SegNeigh is exact but slow.
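To make the single-change (AMOC) idea concrete, here is a minimal sketch in plain Python — an illustration of the underlying least-squares search for one change in mean, not the changepoint package itself (the function name and data are made up for the example):

```python
def best_single_change(x):
    """AMOC-style search for a single change in mean: try every split point
    and keep the one that most reduces the within-segment sum of squares."""
    def sse(seg):
        m = sum(seg) / len(seg)
        return sum((v - m) ** 2 for v in seg)

    total = sse(x)
    tau = min(range(2, len(x) - 1), key=lambda t: sse(x[:t]) + sse(x[t:]))
    gain = total - (sse(x[:tau]) + sse(x[tau:]))
    return tau, gain  # "change vs. no change" is decided by comparing gain to a penalty

x = [1.0, 1.1, 0.9, 1.0, 1.2, 5.0, 5.1, 4.9, 5.2, 5.0]
tau, gain = best_single_change(x)
print(tau)  # 5 — the mean shifts between index 4 and 5
```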
The next question is what assumptions you can / are willing to make about your data. The key here is that the assumption applies to each set of data between changes and not for the entire data. For example, you may be able to assume a Normal distribution but if you do a test for Normality on the entire data it will most likely fail (due to the potential changes). Thus typically we make an assumption, run the changepoint analysis then check the assumptions based on the changes identified.
Again, depending on the type of change there are different distribution and distribution-free methods. See the documentation for each function for the choices and feel free to comment which test statistic you are thinking of using and I can list the assumptions.
Finally, you look at the penalty. The penalty provides a compromise between lots of small changes and no changes. Thus if you set the penalty to 0 then you get a change at every possible location and if you set the penalty to infinity then you get no changes. The appropriate value of the penalty depends on your data and the question you want to answer.
For example, you might have changes in mean of 0.5 units but you might only be interested in changes of 1+ units.
There are many ways to choose your penalty:
"by-eye", i.e. try a few different values until you find one that looks appropriate for your problem.
"elbow-plot", i.e. plot the number of changepoints identified against the penalty used. This creates a curve whereby small values of the penalty produces large (spurious) changes and as the penalty decreases these spurious changes drop off at a fast rate, this rate slows as only true changes are left before slowly dropping down to no changes for larger penalties. The idea is to fit 2 straight lines to this curve and choose the penalty where they cross. This produces an ad-hoc but more objective way to choose the penalty than 1.
use an information criterion. There are some such as AIC, BIC/SIC, Hannan-Quinn included in the package. There are others that are not included in the package but you can provide a formula for pen.value if you wish.
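The role of the penalty can be illustrated with a toy greedy binary segmentation in plain Python (a sketch of the idea, not the package's implementation; all names and data are invented for the example). A split is accepted only when it improves the fit by more than the penalty, so larger penalties yield fewer changepoints:

```python
import random

def sse(seg):
    """Cost of fitting one mean to a segment (sum of squared deviations)."""
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def binseg(x, pen, lo=0, hi=None):
    """Greedy binary segmentation for changes in mean: recursively accept the
    best split only while it improves the cost by more than `pen`."""
    if hi is None:
        hi = len(x)
    best_gain, best_t = 0.0, None
    for t in range(lo + 2, hi - 1):  # keep at least two points per side
        gain = sse(x[lo:hi]) - sse(x[lo:t]) - sse(x[t:hi])
        if gain > best_gain:
            best_gain, best_t = gain, t
    if best_t is None or best_gain <= pen:
        return []
    return binseg(x, pen, lo, best_t) + [best_t] + binseg(x, pen, best_t, hi)

random.seed(1)
data = ([random.gauss(0, 1) for _ in range(100)] +
        [random.gauss(4, 1) for _ in range(100)])  # one true change at index 100
for pen in (1.0, 25.0, 1e6):
    print(pen, binseg(data, pen))  # fewer changepoints as the penalty grows
```

With a penalty of 0 every admissible split would be accepted, and with a huge penalty none are — exactly the compromise described above.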
If you need any more information or clarification on specific points, just comment and I'll try to answer.
30,269 | ANCOVA and its disturbing assumptions
+1 to @FrankHarrell. To be honest, I find a lot of terminology in statistics to be used inconsistently, confusing, or generally unhelpful. It's best to concentrate on the underlying logical structure of your situation. For example, an ANOVA isn't fundamentally different from a multiple regression model. An ANOVA is just a MR where all the explanatory / predictor variables are categorical. An ANCOVA is just a MR where there are categorical explanatory variables (that are of primary interest), and also some continuous covariates (that are assumed to contribute to the DV, but are regarded as nuisance variables not of substantive interest), but no interactions between the factors and the covariates (hence, the assumption of parallel lines, as you state). Note that not everyone seems to use the term ANCOVA in this (traditional) way. Of course, you can also have a MR model with both categorical and continuous variables and interactions between them. The world does not end when this occurs, you just no longer have an 'ANCOVA'.
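The "ANCOVA is just a MR" point can be made concrete: dummy-code the factor, add the covariate, and fit by ordinary least squares. The sketch below uses made-up, exactly-linear data and a hand-rolled normal-equations solver purely for illustration:

```python
def lstsq(X, y):
    """Ordinary least squares via the normal equations (X'X) b = X'y,
    solved by Gaussian elimination with partial pivoting. Illustration only."""
    n, k = len(X), len(X[0])
    A = [[sum(X[i][r] * X[i][c] for i in range(n)) for c in range(k)] for r in range(k)]
    b = [sum(X[i][r] * y[i] for i in range(n)) for r in range(k)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, k):
            f = A[r][col] / A[col][col]
            for c in range(col, k):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    beta = [0.0] * k
    for r in reversed(range(k)):
        beta[r] = (b[r] - sum(A[r][c] * beta[c] for c in range(r + 1, k))) / A[r][r]
    return beta

# ANCOVA as a regression: intercept, group dummy, covariate (parallel slopes).
# Data constructed so group B sits 2 units above group A with common slope 0.5.
cov = [1, 2, 3, 4, 5, 1, 2, 3, 4, 5]
grp = [0, 0, 0, 0, 0, 1, 1, 1, 1, 1]
y = [1 + 0.5 * c + 2 * g for c, g in zip(cov, grp)]
X = [[1, g, c] for g, c in zip(grp, cov)]
print(lstsq(X, y))  # ≈ [1.0, 2.0, 0.5]: intercept, group offset, common slope
```

Adding a `g * c` column to `X` would fit the interaction — i.e. non-parallel lines — at which point, in the traditional terminology, the model is still a perfectly good MR but no longer an "ANCOVA".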
30,270 | ANCOVA and its disturbing assumptions | This is just a nomenclature problem. ANCOVA in its original incarnation often implied an additive model for which non-parallelism was feared and tested. If we used the more general name "linear model" we would avoid this connotation (or perhaps the even more general phrase "multivariable regression model").
Besides rightly worrying about the additivity assumption, you should spend a lot of effort examining linearity assumptions (plus constant variance, normality, etc.). The linearity assumption, in my experience, is the most frequently violated assumption with high impact, causing such problems as apparently significant interactions that are just stand-ins for omitted main effects or nonlinear terms.
30,271 | Is 0 a valid value in a Likert scale?
Let me make a couple of points. First, if you just have 1 question, you don't technically have a Likert scale, but just an ordinal rating. At any rate, I can't see how there would be any meaningful difference. This is just a linear shift. It will make no difference whether you use an ordinal analysis like ordinal logistic regression or a Mann-Whitney U test, or a more standard option like OLS regression or a t-test.
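The "linear shift" point is easy to verify for rank-based procedures such as the Mann-Whitney U test: they depend only on the ordering of the responses, which recoding 0-5 as 1-6 leaves unchanged. A small illustrative sketch:

```python
def ranks(xs):
    """Rank of each observation (1 = smallest); no ties in this example."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

responses_0to5 = [0, 3, 5, 2, 1]
responses_1to6 = [x + 1 for x in responses_0to5]  # same answers, recoded
print(ranks(responses_0to5) == ranks(responses_1to6))  # True
```

For parametric analyses the shift only moves the intercept (or the group means) by a constant; differences, slopes, and test statistics are unaffected.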
30,272 | Is 0 a valid value in a Likert scale? | I must partially disagree with @MichaelChernick. While answers to a single Likert question (whether 0 to 5 or 1 to 6 or whatever) are clearly ordinal, usually there is a sum of several Likert scale items. At some point, the number of possible values becomes so high that it is essentially continuous.
As you know (but the poster of the question may not) OLS regression does not assume that the dependent variable is normally distributed, only that the errors (as estimated by the residuals) are.
If we sum a bunch of Likert items, do we know that the intervals are really equal? No, not really. But do we know that for, say, IQ? Or even income? Is the difference between an IQ of 130 and 140 the same as 100 and 110? Does that question even make sense? What about a \$10,000 raise for someone who makes \$10,000 vs. \$100,000 per year?
I wrote a whole blog post on this.
In addition, it's not clear to me whether this Likert scale is going to be a dependent or independent variable.
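The claim that a sum of several Likert items is "essentially continuous" can be illustrated by enumeration (plain Python, purely illustrative): five items scored 1-6 already yield 26 distinct totals, with a bell-shaped frequency profile:

```python
from itertools import product
from collections import Counter

# Enumerate every response pattern for five items each scored 1-6:
totals = Counter(sum(t) for t in product(range(1, 7), repeat=5))
print(len(totals), min(totals), max(totals))  # 26 5 30
```

With more items (a typical scale has 10-20), the number of possible totals grows further and the distribution of the sum becomes increasingly smooth.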
30,273 | Is 0 a valid value in a Likert scale? | In following up on @caracal's reference suggestions, I found an almost-direct answer (no, these two rating systems are not equivalent if presented as number options to respondents) from Schwarz, Knäuper, Hippler, Noelle-Neumann, and Clark (1991). They present data on responses to the question, "How successful have you been in life, so far?" One version gave rating options from 0–10 to 480 participants; the other version had options from (-5)–(+5) with zero as the midpoint, and was seen by 552 participants. The endpoints were labelled “not at all successful” and “extremely successful” in both versions. "Undecided" was also an option on both. Here's how things shook out:
$$\begin{array}{ccc|ccc}&\text{0–10 Scale}&&&-5\text{ to +5 Scale}&\\\hline\small\text{Scale Value}&\small\text{Percentage}&\small\text{Cumulative}&\small\text{Scale Value}&\small\text{Percentage}&\small\text{Cumulative}\\\hline0&...&...&-5&1&1\\1&...&...&-4&...&1\\2&2&2&-3&1&2\\3&5&7&-2&1&3\\4&7&14&-1&1&4\\5&20&34&0&9&13\\6&14&48&+1&9&22\\7&20&68&+2&23&45\\8&20&88&+3&35&80\\9&6&94&+4&14&94\\10&3&97&+5&4&98\\\text{Undecided}&3&100&\rm{Undecided}&2&100\end{array}$$
Quite different, clearly! They also report $\chi^2(10)=105.1,p<.0001$ for this difference. Of course, this difference won't appear if the difference is only behind the scenes in terms of how you code responses, not visible to participants as a way for them to provide responses.
There are simple survey design methods that allow one to avoid worrying about the psychological effects of equating rating anchors with numbers. Basically, you can just avoid using numbers! E.g.:
Allow respondents to check cells in a table corresponding to their answer preference: each row can be a different item, and each column can be labeled with your rating anchor, or vice versa – no numbers involved. Here's how that might look (if one were to answer wisely):
$\begin{array}{|c|c|c|c|c|c|c|}\hline&\tiny\text{Strongly Disagree}&\tiny\text{Disagree}&\tiny\text{Mildly Disagree}&\tiny\text{Mildly Agree}&\tiny\text{Agree}&\tiny\text{Strongly Agree}\\\hline\tiny\text{Tumblers: better than pumpers!}^*&&&&&&\checkmark\\\hline\tiny\text{I look fat in this dress.}&\checkmark\\\hline\end{array}$*
Wikipedia gives another style using marked options (by Nicholas Smith):
Letter codes can also be substituted for numeric options if blanks are to be filled for a list of very many items; e.g., {SD,D,MD,MA,A,SA}. Just don't forget to include the legend!
Reference
Schwarz, N., Knäuper, B., Hippler, H. J., Noelle-Neumann, E., & Clark, L. (1991). Rating scales: Numeric values may change the meaning of scale labels. Public Opinion Quarterly, 55(4), 570–582.
30,274 | Is 0 a valid value in a Likert scale? | To do analysis with ordinal scales like the Likert you would use nonparametric methods based on ranks. What matters with ordinal scales is the order: if 5 is best, 0 is worst, 1 is better than 0, 2 is better than 1, etc.
Both ratios and intervals are meaningless for ordinal data. So a scale of 1-6 versus 0-5 doesn't matter and won't affect the analysis. Starting with 1 is due to tradition rather than necessity.
30,275 | Is 0 a valid value in a Likert scale? | I think the points should be determined by the framing of the questions. For example, if the questions relate to attitude then the points should be given from 1 to 5, not 0 to 4, because we are trying to measure attitude, and an attitude cannot sit at a 0 level; even if a respondent marks the Strongly Disagree option, we cannot assign 0 (zero) to that response.
The same applies to other variables.
So, as a researcher, we should try to set the points as 1-5, 1-7, etc.
30,276 | Probabilistic graphical models textbook | Yes, it's written as such and contains sample questions, for which you can request the answers here
You might also want to have a look at Pattern Recognition and Machine Learning by Chris Bishop and Information Theory, Inference and Learning Algorithms by David MacKay, which can also be downloaded for free. Both of these cover some aspects of graphical models as well as giving a general insight into probabilistic methods.
30,277 | Probabilistic graphical models textbook | I spent a little while reading the first couple of chapters of Koller & Friedman, and I wasn't happy with it as an introductory text. On several occasions, the book gives a motivating example, but the example cannot be understood without background material later in the chapter. This kind of exposition works for me only if the example explicitly says what upcoming material will be relevant; otherwise, the examples are just incomprehensible magic.
That said, it's a hefty tome, and probably an excellent reference for practitioners.
A student might have better luck with Neapolitan, "Learning Bayesian Networks".
30,278 | Probabilistic graphical models textbook
I would prefer the book Graphical Models by Steffen L. Lauritzen, and his lecture at Oxford.
30,279 | Cox regression and time scale | Usually, age at baseline is used as a covariate (because it is often associated to disease/death), but it can be used as your time scale as well (I think it is used in some longitudinal studies, because you need to have enough people at risk along the time scale, but I can't remember actually -- just found these slides about Analysing cohort studies assuming a continuous time scale which talk about cohort studies). In the interpretation, you should replace event time by age, and you might include age at diagnosis as a covariate. This would make sense when you study age-specific mortality of a particular disease (as illustrated in these slides).
Maybe this article is interesting since it contrasts the two approaches, time-on-study vs. chronological age: Time Scales in Cox Model: Effect of Variability Among Entry Ages on Coefficient Estimates. Here is another paper:
Cheung, YB, Gao, F, and Khoo, KS (2003). Age at diagnosis and the choice of survival analysis methods in cancer epidemiology. Journal of Clinical Epidemiology, 56(1), 38-43.
But there are certainly better papers.
30,280 | Cox regression and time scale
No, it doesn't always have to be time. Many censored responses can be modeled with survival analysis techniques. In his book Nondetects and Data Analysis, Dennis Helsel advocates using the negative of a concentration in place of time (in order to cope with nondetects, which when negated become right-censored values). A synopsis is available on the Web (pdf format) and an R package, NADA, implements this.
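Helsel's negation trick can be sketched as follows (illustrative Python, not the NADA package; the data are made up): a nondetect, which is left-censored on the concentration scale, becomes an ordinary right-censored observation after multiplying by -1:

```python
# Each record is (measured concentration or detection limit, detected?).
samples = [(0.52, True), (0.10, False), (2.30, True), (0.10, False)]

# Negate the concentration; a nondetect "< 0.10" becomes "> -0.10",
# i.e. a right-censored value in the form survival routines expect.
flipped = [(-c, detected) for c, detected in samples]
print(flipped)  # [(-0.52, True), (-0.1, False), (-2.3, True), (-0.1, False)]
```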
30,281 | Cox regression and time scale | On the age-scale vs. time-scale issue, chl has some good references and captures the essentials -- in particular, the requirement that the at-risk set contain sufficient subjects from all ages as would arise in a longitudinal study.
I would only note that there is no general consensus around this yet, but there is some literature to suggest that age should be preferred as the time scale in certain cases. In particular, if you have a situation where time doesn't accumulate in the same way for all subjects, for example due to exposure to some toxic material, then age may be more appropriate.
On the other hand, you can handle that specific example on a time-scale Cox PH model by using age as a time varying covariate -- rather than a fixed covariate at start time. You need to think about the mechanism behind your object of study to figure out which time scale is more appropriate. Sometimes it's worth fitting both models to existing data to see if discrepancies arise and how they might be explained before designing your new study.
Finally, the obvious difference in analyzing the two is that on an age-scale, the interpretation of survival is with respect to an absolute scale (age), whereas on a time-scale, it's relative to the start/entry date of the study.
30,282 | Cox regression and time scale | Per the OP's request, here's another application: I have seen survival analysis used in a spatial context (although obviously different than measuring environmental substances mentioned by whuber) to model the distance between events in space. Here's one example in criminology and one in epidemiology.
The reasoning behind using survival analysis to measure the distance between events is not per se an issue of censoring (although censoring can definitely occur in a spatial context); it is more because of the similar distributions between time-to-event characteristics and distance-between-events characteristics (i.e. they both have similar types of error structures, frequently distance decay, that violate OLS, and so the non-parametric solutions are ideal for both).
Because of my poor citation practices I had to spend an hour finding the correct link/reference to the link above.
For the example in criminology,
Kikuchi, George, Mamoru Amemiya, Tomonori Saito, Takahito Shimada & Yutaka Harada. 2010. A Spatio-Temporal analysis of near repeat victimization in Japan. 8th National Crime Mapping Conference. Jill Dando Institute of Crime Science. PDF currently available at referenced webpage.
In epidemiology,
Reader, Steven. 2000. Using survival analysis to study spatial point patterns in geographical epidemiology. Social Science & Medicine 50(7-8): 985-1000.
30,283 | Kaplan-Meier, survival analysis and plots in R | Try CRAN Task View: http://cran.at.r-project.org/web/views/Survival.html
30,284 | Kaplan-Meier, survival analysis and plots in R | I think that it's fair to say that the survival package is the "recommended" package in general, as it's included in base R (i.e. does not need to be installed separately). There are many good tutorials online for this. But you need to be more specific to get a more specific answer.
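In R this is what `survival::survfit` computes. As an illustration of the underlying product-limit estimator, here is a pure-NumPy sketch; the `kaplan_meier` helper and the toy time/event vectors are made up for the example, not taken from any package:

```python
import numpy as np

def kaplan_meier(time, event):
    """Kaplan-Meier product-limit estimate of S(t) at each observed event time.

    time  : array of follow-up times
    event : 1 if the event occurred, 0 if the observation is right-censored
    """
    surv_times, surv_probs = [], []
    s = 1.0
    for t in np.unique(time[event == 1]):
        d = np.sum((time == t) & (event == 1))  # events at time t
        r = np.sum(time >= t)                   # subjects at risk just before t
        s *= 1.0 - d / r
        surv_times.append(t)
        surv_probs.append(s)
    return np.array(surv_times), np.array(surv_probs)

# Toy data: 6 subjects, two of them censored (event = 0)
t = np.array([6, 7, 10, 15, 19, 25])
e = np.array([1, 0, 1, 1, 0, 1])
times, surv = kaplan_meier(t, e)
# survival curve steps down at t = 6, 10, 15, 25 (roughly 0.83, 0.63, 0.42, 0)
```

Note how the censored subjects (times 7 and 19) leave the risk set without forcing a step in the curve, which is exactly what distinguishes Kaplan-Meier from a naive empirical survival fraction.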
30,285 | Kaplan-Meier, survival analysis and plots in R | In an article in The American Statistician, Wolkewitz et al. use packages Epi, mvna, and survival. See Two Pitfalls in Survival Analyses of Time-Dependent Exposure: A Case Study in a Cohort of Oscar Nominees, v. 64 no. 3 (August 2010) pp 205-211. This exposition introduces multistate survival models and focuses on the use of a "Lexis diagram" to assess possible forms of bias.
30,286 | Kaplan-Meier, survival analysis and plots in R | Survival plots have never been so informative with
http://www.r-bloggers.com/survival-plots-have-never-been-so-informative/
30,287 | How would you write mathematically that a random variable follows some unknown distribution? | The notation I tend to see is something like $X\sim F_X$ to denote that $X$ is a random variable with $F_X$ as its CDF. I have seen people try to be brief and just write $X\sim F$, but this could mislead others into thinking that $X$ has an $F$ distribution, when that could be far from the case.
30,288 | How would you write mathematically that a random variable follows some unknown distribution? | Standard notation is $X\sim F$ or $X\sim F(x)$.
Update: the latter notation, while common shorthand, could be misunderstood since $F(x)$ is a probability.
30,289 | Computing variance from moment generating function of exponential distribution | $M_X^{(2)}(0)$ is not a variance, it is $E(X^2)$. So the variance can be obtained by
$$Var(X) = E(X^2) - E(X)^2 = M_X^{(2)}(0) - [M_X^{(1)}(0)]^2 = \frac{2}{\lambda^2} - \frac{1}{\lambda^2} = \frac{1}{\lambda^2}$$
30,290 | Computing variance from moment generating function of exponential distribution | The second moment gives you
$$E[X^2]$$
and the variance is defined as
$$E[X^2]-E[X]^2$$
so that you get
$$2/\lambda^2-(1/\lambda)^2$$
which will then give you the desired result.
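The calculation can be checked numerically: the MGF of the exponential is $M_X(t) = \lambda/(\lambda - t)$ for $t < \lambda$, and finite differences at $t = 0$ recover the first two moments. A quick sketch (the value $\lambda = 2$ is an arbitrary choice for the check):

```python
lam = 2.0
M = lambda t: lam / (lam - t)  # MGF of Exp(lambda), defined for t < lambda
h = 1e-5

m1 = (M(h) - M(-h)) / (2 * h)           # M'(0)  ~ E[X]    = 1/lam   = 0.5
m2 = (M(h) - 2 * M(0) + M(-h)) / h**2   # M''(0) ~ E[X^2]  = 2/lam^2 = 0.5
var = m2 - m1**2                        # ~ Var(X) = 1/lam^2 = 0.25
```

The finite-difference approximations agree with the analytic moments up to truncation error of order $h^2$.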
30,291 | Books Similar to Introduction to Statistical learning | For time series analysis: "Forecasting: Principles and Practice" by Hyndman and Athanasopoulos is absolutely excellent and is roughly on the same order of mathematical complexity as ISLR (i.e. enough, but not too much). It has the additional bonus of being available for free online, and having many code examples. It has one weak point: It doesn't do a good job of providing business context or intuitive aspects of TS modeling. For that I recommend "Demand Forecasting for Managers" by Stephan Kolassa and Enno Siemsen.
For GLM's: Chapter 4 of "Pattern Recognition and Machine Learning" by Bishop gives a brief, but pretty good explanation of GLMs within the context of classification, and does so at the level of theoretical math you are looking for. No code samples though, and I don't think a free version was ever released.
For Survival Analysis, I can't give you one specific reference. But in general, I would recommend looking in Operations Research or Industrial Engineering textbooks and course materials for the mid-level theoretical content and intuitive explanations that you are seeking.
30,292 | Books Similar to Introduction to Statistical learning | If you're interested in Bayesian Inference then there's a wonderful book (goes into GLMs quite a lot) called Statistical Rethinking by Richard McElreath. The second edition is just out and there's a lecture series on YouTube. The most recent series (called Winter 2019 IIRC) follows the second edition.
30,293 | Books Similar to Introduction to Statistical learning | Haven't read this new edition, but the first edition is a classic, so this one, available starting September 2020, will be a great reference for sure.
https://www.amazon.com/Regression-Stories-Analytical-Methods-Research/dp/110702398X.
I second the recommendation of "Statistical Rethinking" by Mooks, that's a great one.
30,294 | Books Similar to Introduction to Statistical learning | For GLMs I recommend Faraway's Extending the Linear Model with R. I would also recommend Frank Harrell's Regression Modeling Strategies, which provides a nice in depth explanation of regression as a whole and various extensions including survival modeling. Both textbooks include code in R.
30,295 | Books Similar to Introduction to Statistical learning | For survival analysis, Kleinbaum (2013) - Survival Analysis -- A self-learning text is straightforward with R examples. It's even freely available on Springer now due to COVID-related university lockdowns: https://link.springer.com/book/10.1007%2F978-1-4419-6646-9.
I think Frank Harrell's Regression Modelling Strategies is also freely available now for the same reason. At any rate, it was a short while ago.
30,296 | Books Similar to Introduction to Statistical learning | From some of the same authors, there is another book focused more on the intuition and practicalities than the math:
Hastie, T., Tibshirani, R., & Friedman, J. (2009). The Elements of Statistical Learning. Springer Verlag. https://web.stanford.edu/~hastie/ElemStatLearn/
Efron and Hastie also have a great book that is doable even if you skip over the math:
Efron, B., & Hastie, T. (2016). Computer Age Statistical Inference: Algorithms, Evidence, and Data Science. Cambridge University Press.
30,297 | Should I use normalized data for correlation calculation or not? | Since the formula for calculating the correlation coefficient standardizes the variables, changes in scale or units of measurement will not affect its value. For this reason, normalizing will NOT affect the correlation.
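This is easy to confirm empirically. A sketch with NumPy, where the simulated variables and the min-max normalization are just illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(170, 10, size=500)          # e.g. raw measurements
y = 0.5 * x + rng.normal(0, 5, size=500)   # a variable correlated with x

r_raw = np.corrcoef(x, y)[0, 1]

# Min-max normalization is a positive linear rescaling, so r is unchanged
xn = (x - x.min()) / (x.max() - x.min())
yn = (y - y.min()) / (y.max() - y.min())
r_norm = np.corrcoef(xn, yn)[0, 1]

# r_raw and r_norm agree up to floating-point noise
```

Any transformation of the form $x \mapsto ax + b$ with $a > 0$ leaves Pearson's $r$ unchanged; only the sign of $r$ flips if $a < 0$.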
30,298 | Uniform vs Beta(1,1) prior | They both are equivalent.
$P(\theta) = { \Gamma(\alpha + \beta) \over \Gamma(\alpha)\Gamma(\beta)} \theta^{\alpha-1}(1-\theta)^{\beta-1}$
if $\alpha = \beta = 1$
$P(\theta) = { \Gamma(\alpha + \beta) \over \Gamma(\alpha)\Gamma(\beta)} \theta^{0}(1-\theta)^{0} = {\Gamma(2) \over \Gamma(1)\Gamma(1) } = {1 \over 1} = 1$
As you can see $\theta| \beta=1, \alpha = 1 \sim U(0,1)$
Because a density function uniquely identifies a distribution, and the density of a uniform on the interval $(c=0, \ d=1)$ is:
$f(x) = {1\over d - c} = {1 \over 1} = 1 \quad x \in (0,1)$
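A quick numerical confirmation with SciPy that both densities are identically 1 on the unit interval:

```python
import numpy as np
from scipy import stats

theta = np.linspace(0.01, 0.99, 99)
beta_pdf = stats.beta.pdf(theta, a=1, b=1)           # Beta(1, 1) density
unif_pdf = stats.uniform.pdf(theta, loc=0, scale=1)  # U(0, 1) density

# Both arrays are constant at 1, so the two priors coincide
```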
30,299 | Uniform vs Beta(1,1) prior | There is a difference in that the Beta is the conjugate prior of the Bernoulli... So you have nice analytical formulas to help you update the Beta when new data comes in. In my limited experience, if you are modelling a probability, it's much better to use a Beta(1,1) prior rather than a Uniform(0,1), even for complicated models in pymc3 (where the update won't be analytical).
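The conjugate update mentioned here is just parameter counting: starting from Beta(1, 1), the posterior after observing Bernoulli data is Beta(1 + successes, 1 + failures). A minimal sketch (the data vector is made up for the example):

```python
from scipy import stats

a, b = 1, 1                          # flat Beta(1, 1) prior
data = [1, 0, 1, 1, 0, 1, 1, 1]      # Bernoulli observations: 6 ones, 2 zeros

a_post = a + sum(data)               # 1 + 6 = 7
b_post = b + len(data) - sum(data)   # 1 + 2 = 3

posterior = stats.beta(a_post, b_post)
# posterior mean a/(a+b) = 7/10 = 0.7, pulled slightly toward 1/2 by the prior
```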
30,300 | beta-binomial as conjugate to hypergeometric | The problem with the Wikipedia article and the reference therein (Fink D., 1997) is that there is some key information missing.
Specifically, the given posterior is for $M-x$ (i.e. the number of target individuals in the population shifted by the number observed in the sample), not for $M$. Furthermore, the posterior parameter corresponding to the number of observations is missing and should be $N-n$ (i.e. the population size minus the sample size). These two corrections fix the support problem that you correctly noticed, as shown below.
Suppose that $0 \leq X \leq n$ is the number of target individuals in a sample of size $n$ from a population of size $N$ with $0 \leq M \leq N$ total target individuals.
Then, $X \sim \text{HG}(n, M, N)$ with support in $[\max(0, n-N+M), \min(n, M)]$.
If $M \sim \text{BB}(N, \alpha, \beta)$ is the prior distribution of $M$, the posterior distribution for $M - x$ is also Beta-Binomial-distributed:
$$M - x\,|\,x,\alpha,\beta \sim \text{BB}(N-n, \alpha + x, \beta + n - x)$$
If you write the probability mass function for $M$ you will find @Tim's answer above.
As an illustration, for $N = 20$ and $n = 10$, let's assume a non-informative prior distribution for $M$ with $M \sim \text{BB}(N, .5, .5)$.
Suppose that we observe $x = 9$.
library(extraDistr)
library(tidyverse)
N = 20
n = 10
a0 <- b0 <- .5
x <- 9
data.frame(m = 0:N) %>%
  mutate(
    prior = dbbinom(m, size = N, alpha = a0, beta = b0),
    post = dbbinom(m - x, size = N - n, a0 + x, b0 + n - x)
  ) %>%
  gather(key, dens, -m) %>%
  ggplot(aes(m, dens, col = key)) +
  geom_line() +
  geom_point()
Created on 2018-10-10 by the reprex package (v0.2.1)
Note that the posterior support is correctly [x, N − n + x].
Dyer, D. and Pierce, R.L. (1993). On the Choice of the Prior Distribution in Hypergeometric Sampling. Communications in Statistics - Theory and Methods, 22(8), 2125-2146.
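The corrected conjugacy claim can also be verified without extraDistr by brute-force Bayes in Python, using the same $N$, $n$, $x$, and prior as in the R example. Note SciPy's hypergeometric parameterization, and that pmf values outside a distribution's support are simply 0:

```python
import numpy as np
from scipy import stats

N, n, x = 20, 10, 9        # population size, sample size, observed count
a0, b0 = 0.5, 0.5          # Beta-Binomial prior parameters

m = np.arange(N + 1)
prior = stats.betabinom.pmf(m, N, a0, b0)
# scipy's hypergeom(M, n, N): M = population, n = targets in it, N = draws
like = stats.hypergeom.pmf(x, N, m, n)

post = prior * like
post /= post.sum()         # brute-force posterior over all candidate m

# Claimed conjugate form: M - x ~ BB(N - n, a0 + x, b0 + n - x)
conj = stats.betabinom.pmf(m - x, N - n, a0 + x, b0 + n - x)
# post and conj agree elementwise, confirming the shifted Beta-Binomial posterior
```

The two arrays agree, and both place all mass on $m \in [x, N - n + x]$, matching the stated support.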