idx | question | answer |
|---|---|---|
28,301 | Regression using circular variable (hour from 0~23) as predictor | Circular regression most often refers to regression with a circular outcome.
In this case, we have linear regression with a circular predictor. Here, we would add both the sine and the cosine of the angle to the regression, so that we predict the outcome as $\hat{y} = \beta_1\cos(\pi \cdot \text{hour} / 12) + \beta_2\sin(\pi \cdot \text{hour} / 12)$. Adding both the sine and the cosine naturally resolves the issue you mention. Note that, unlike you, I've assumed that hour is measured in hours rather than degrees.
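A minimal sketch of this encoding in Python (the function names are mine, not from any package):

```python
import math

def encode_hour(hour):
    """Map hour (0-23) onto the unit circle: (cos, sin) features."""
    angle = math.pi * hour / 12  # equivalently 2*pi*hour/24
    return (math.cos(angle), math.sin(angle))

def feature_distance(h1, h2):
    """Euclidean distance between two hours in the encoded feature space."""
    (c1, s1), (c2, s2) = encode_hour(h1), encode_hour(h2)
    return math.hypot(c1 - c2, s1 - s2)

# 23:00 -> 01:00 and 01:00 -> 03:00 are both two-hour gaps; the raw hour
# values 23 and 1 differ by 22, but the encoded distances agree.
wrap_gap = feature_distance(23, 1)
plain_gap = feature_distance(1, 3)
```

The two features can then be handed to any ordinary linear-regression routine as the cosine and sine predictors.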
For a more elaborate answer on how to do this and what it means, please see the answer to this SO question.
28,302 | Regression using circular variable (hour from 0~23) as predictor | You want to map the interval $(0,24)$ to the interval $(0,2\pi)$, a full cycle; the function to do so is
$$2\pi \frac{\mathrm{hour}}{24}$$
You then need two terms in your linear model (recall that an equivalent non-linear parametrization uses phase & amplitude):
$$\beta_1 \sin\left(2\pi\frac{\mathrm{hour}}{24}\right) + \beta_2 \cos\left(2\pi \frac{\mathrm{hour}}{24}\right)$$
Noon & midnight aren't constrained to result in equal predictor values because the phase is estimated from your data. Noon might be at the peak and midnight at the trough of the wave.
And you can continue with harmonics in an analogous way to higher-order polynomial terms: $$\ldots +\beta_3 \sin\left(2\times 2\pi\frac{\mathrm{hour}}{24}\right) + \beta_4 \cos\left(2\times 2\pi \frac{\mathrm{hour}}{24}\right)+\ldots$$
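The base terms plus harmonics amount to a small Fourier design matrix; a sketch in Python (the function name is my own):

```python
import math

def fourier_row(hour, n_harmonics=2):
    """Design-matrix row for a cyclic hour-of-day predictor:
    [sin(k*2*pi*hour/24), cos(k*2*pi*hour/24)] for k = 1..n_harmonics."""
    row = []
    for k in range(1, n_harmonics + 1):
        angle = k * 2 * math.pi * hour / 24
        row.extend([math.sin(angle), math.cos(angle)])
    return row

design = [fourier_row(h) for h in range(24)]
```

These columns can be fed to any OLS routine; over 24 equally spaced hours the sine and cosine columns are mutually orthogonal, which keeps the fit well conditioned, and each extra harmonic ($k = 2, 3, \dots$) plays the role of a higher-order polynomial term.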
28,303 | What is a pivotal statistic? | The population standard deviation $\sigma$ depends on the (unknown) distribution, $E$. The sample standard deviation $\hat{\sigma}$ depends only on the (known) data, $\boldsymbol{x}$.
Because it is a consistent estimator, $\hat{\sigma}\to\sigma$ as sample size goes to infinity. But with a finite sample, only $\hat{\sigma}$ is knowable.
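The convergence $\hat{\sigma}\to\sigma$ can be watched numerically; a quick sketch (the true $\sigma = 2$ and the sample sizes are arbitrary choices of mine):

```python
import random
import statistics

random.seed(0)
SIGMA = 2.0  # the "unknown" population value, known only to the simulation

def sample_sd(n):
    """Sample standard deviation of n draws from N(0, SIGMA^2)."""
    return statistics.stdev(random.gauss(0.0, SIGMA) for _ in range(n))

sd_small = sample_sd(10)       # noisy: can land well away from 2
sd_large = sample_sd(200_000)  # consistency: lands close to 2
```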
28,304 | What is a pivotal statistic? | To start off, there is a typo in statement 3: the population variance $\sigma^2$ is a parameter (unknown but fixed value), not a random variable, so it cannot be a pivot. It's the statistic $\widehat{\theta}$ which is not pivotal.
So the question boils down to comparing two statistics for the difference in means $\mu_2 - \mu_1$:
$$
\begin{aligned}
\widehat{\theta} &= \overline{\mathbf{x}}_2 - \overline{\mathbf{x}}_1 \\
T &= \frac{\overline{\mathbf{x}}_2 - \overline{\mathbf{x}}_1}{\widehat{\sigma}\left(\frac{1}{n_1}+\frac{1}{n_2}\right)^{1/2}}
\end{aligned}
$$
Since $\mathbf{x}_1$ is iid $N(\mu_1,\sigma^2)$ and $\mathbf{x}_2$ is iid $N(\mu_2,\sigma^2)$, under the null hypothesis $H_0:\mu_1 = \mu_2$:
$$
\begin{aligned}
\widehat{\theta} &\sim N\left(0,\sigma^2\left(\frac{1}{n_1} + \frac{1}{n_2}\right)\right) \\
T &\sim t_{n_1+n_2-2}
\end{aligned}
$$
$\widehat{\theta}$ is not pivotal because its distribution depends on an unknown parameter; $T$ is pivotal because its distribution depends only on the sample sizes, $n_1$ and $n_2$, which are known.
Here we are interested in comparing the two means, $\mu_1$ and $\mu_2$, not in estimating the variance: $\sigma^2$ is a nuisance parameter. So it's convenient that the distribution of $T$ doesn't depend on the variance. There is no need to plug in an estimator $\widehat{\sigma}$ for $\sigma$ or use any of the other frequentist "devices" described in Section 2.1, in order to test the null hypothesis $\mu_1=\mu_2$. And our analysis (confidence intervals & p-values) will be equally valid for all values of $\sigma^2$.
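A Monte Carlo sketch of the pivotality claim (sample sizes, $\sigma$ values, and replication count are my choices): under $H_0$ the spread of $T$ is the same whether $\sigma = 1$ or $\sigma = 10$, while the spread of $\widehat{\theta}$ scales with $\sigma$.

```python
import random
import statistics

random.seed(1)

def theta_hat_and_t(n1, n2, sigma):
    """Difference in means and pooled two-sample t statistic,
    simulated under H0: mu1 = mu2 = 0."""
    x1 = [random.gauss(0.0, sigma) for _ in range(n1)]
    x2 = [random.gauss(0.0, sigma) for _ in range(n2)]
    m1, m2 = statistics.fmean(x1), statistics.fmean(x2)
    ss = sum((v - m1) ** 2 for v in x1) + sum((v - m2) ** 2 for v in x2)
    sd_pool = (ss / (n1 + n2 - 2)) ** 0.5  # n1 + n2 - 2 degrees of freedom
    return m2 - m1, (m2 - m1) / (sd_pool * (1 / n1 + 1 / n2) ** 0.5)

reps = 4000
sims_lo = [theta_hat_and_t(8, 8, sigma=1.0) for _ in range(reps)]
sims_hi = [theta_hat_and_t(8, 8, sigma=10.0) for _ in range(reps)]

t_spread_lo = statistics.stdev(t for _, t in sims_lo)
t_spread_hi = statistics.stdev(t for _, t in sims_hi)
theta_spread_lo = statistics.stdev(d for d, _ in sims_lo)
theta_spread_hi = statistics.stdev(d for d, _ in sims_hi)
```

Both $T$ spreads come out near $\sqrt{14/12}\approx 1.08$, the standard deviation of a $t_{14}$, while the $\widehat{\theta}$ spread is roughly ten times larger under $\sigma=10$.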
28,305 | L2-regularization vs random effects shrinkage | That's a bit oversimplified. The shrinkage in a mixed-effects regression is weighted by the overall balance between "classes"/"groups" in the random-effects structure, so it's not that you don't get to choose, but rather that your group size and strength of evidence choose. (Think of it as a weighted grand mean.) Moreover, mixed-effects models are very useful when you have a number of groups but only very little data in each group: the overall structure and partial pooling allow for better inferences even within each group!
There are also LASSO (L1-regularized), ridge (L2-regularized), and elastic net (combination of L1 and L2 regularization) variants of mixed models. In other words, these things are orthogonal. In Bayesian terms, you get mixed-effects shrinkage via your hierarchical/multilevel model structure and regularization via your choice of prior on the distribution of model coefficients.
Perhaps the confusion arises from the frequent use of regularization in "machine learning" (where prediction is the goal) but the frequent use of mixed-effects in "statistics" (where inference is the goal), but that's more a side effect of other aspects of common datasets in such areas (e.g. size) and computational concerns. Mixed-effects models are generally harder to fit, so if a regularized fixed-effect model that ignores some structure of the data is good enough for the predictions you need, it may not be worthwhile to fit a mixed-effects model. But if you need to make inferences on your data, then ignoring its structure would be a bad idea.
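The "group size and strength of evidence chooses" point can be sketched with the standard normal-normal shrinkage weights (the variances $\sigma^2$, $\tau^2$ and the toy numbers are assumptions of mine, and a plain grand mean stands in for the estimated population mean):

```python
from statistics import fmean

def partial_pool(group_means, group_sizes, sigma2, tau2):
    """Shrink each group mean toward the grand mean.  The weight on a
    group's own mean is its data precision n_j/sigma2 relative to the
    total precision n_j/sigma2 + 1/tau2, so large groups barely move."""
    grand = fmean(group_means)
    out = []
    for ybar, n in zip(group_means, group_sizes):
        w = (n / sigma2) / (n / sigma2 + 1 / tau2)
        out.append(w * ybar + (1 - w) * grand)
    return out

# group A: mean 10 from only 2 observations; group B: mean 0 from 200
shrunk = partial_pool([10.0, 0.0], [2, 200], sigma2=4.0, tau2=1.0)
```

Group A is pulled hard toward the grand mean of 5, while group B, with a hundred times the data, stays essentially at its own mean.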
28,306 | When we run many chains at once in an MCMC model, how are they combined together for the posterior draws? | In general, the burn-in iterations for each chain are dropped and the rest of the iterations are simply combined, as if from one long chain. The main reason for having separate chains is to better diagnose convergence issues.
You may also find this question helpful.
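Mechanically, the combining step is just slicing off burn-in and concatenating; a minimal sketch (the function name is mine):

```python
def combine_chains(chains, burn_in):
    """Drop the first `burn_in` iterations of every chain, then pool the
    remaining draws as if they came from one long chain."""
    pooled = []
    for chain in chains:
        pooled.extend(chain[burn_in:])
    return pooled

chains = [[0.9, 1.1, 1.0, 1.2],
          [2.0, 1.0, 1.1, 0.9]]   # first value of chain 2 is pre-convergence
draws = combine_chains(chains, burn_in=1)
```

Posterior summaries (means, quantiles, intervals) are then computed on the pooled draws; the per-chain structure is kept around only for diagnostics such as $\widehat{R}$.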
28,307 | Similarities and differences between IRT model and Logistic regression model | Have a look at Section 1.6 ("The linear regression perspective") in De Boeck and Wilson (2008) Explanatory Item Response Models (http://www.springer.com/de/book/9780387402758) and Formann, A. K. (2007), (Almost) Equivalence between conditional and mixture maximum likelihood estimates for some models of the Rasch type, In M. von Davier & C. H. Carstensen (Eds.), Multivariate and mixture distribution Rasch models (pp. 177-189), New York: Springer.
In short:
IRT models are generalized nonlinear mixed effects models:
the score $Y_{pi}\in\left\{ 0,1\right\} $ of a student $p$
to an item $i$ is the dependent variable,
given a randomly sampled student's trait, e.g. $\theta_{p}\sim N\left(\mu,\sigma^{2}\right)$, the responses are assumed to be independent Bernoulli distributed,
given $\theta_{p}$, the predictor $\eta_{pi}=\textrm{logit}\left(P\left(Y_{pi}=1\right)\right)$
is a linear combination of item characteristics
$$\eta_{pi}=\sum_{k=0}^{K}b_{k}X_{ik}+\theta_{p}+\varepsilon_{pi},$$
let $X_{ik}=-1$ if $i=k$, and $X_{ik}=0$ otherwise; we thus obtain the Rasch model
$$P\left(Y_{pi}=1\mid\theta_{p}\right)=\frac{\exp\left(\theta_{p}-b_{i}\right)}{1+\exp\left(\theta_{p}-b_{i}\right)};$$
Note that IRT models are extended towards different aspects:
With respect to discriminatory power (2PL) and guessing ratio (3PL) of an item
$$
P\left(Y_{pi}=1\mid\theta_{p}\right)= c_i+(1-c_i)\frac{\exp\left(a_{i}\left(\theta_{p}-b_{i}\right)\right)}{1+\exp\left(a_{i}\left(\theta_{p}-b_{i}\right)\right)}
$$
With respect to polytomous scores
$$
P\left(Y_{pi}=k\mid\theta_{p}\right)=\frac{\exp\left(a_{ik}\theta_{p}-b_{ik}\right)}{\sum_{k=0}^{K}\exp\left(a_{ik}\theta_{p}-b_{ik}\right)}
$$
With respect to known student characteristics constituting the population (e.g., sex, migration status)
$$
\theta_{p}\sim N\left(\mathbf{Z}\boldsymbol{\beta},\sigma^{2}\right),
$$
With respect to construct dimensionality
$$
P\left(Y_{pi}=1\mid\theta_{p}\right)=\frac{\exp(\sum_{d}a_{id}\theta_{pd}-b_{i})}{1+\exp(\sum_{d}a_{id}\theta_{pd}-b_{i})},\quad\theta_{p}\sim N^{d}\left(\boldsymbol{\mu},\Sigma\right)
$$
With respect to discrete skill classes (continuous distributions can be easily approximated by discrete ones)
$$
P\left(Y_{pi}=1\mid\theta_{p(l)}\right)=\frac{\exp(\theta_{p(l)}-b_{i(l)})}{1+\exp(\theta_{p(l)}-b_{i(l)})},\quad\theta_{p(l)}\in\left\{ \theta_{p(1)},\dots,\theta_{p(L)}\right\}
$$
(taken from the useR!2015 slides for the R package TAM)
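The Rasch and 3PL response functions above translate directly into code; a small sketch (the item parameter values in the example are arbitrary):

```python
import math

def rasch(theta, b):
    """Rasch model: P(Y = 1 | theta) for an item with difficulty b."""
    return math.exp(theta - b) / (1 + math.exp(theta - b))

def three_pl(theta, a, b, c):
    """3PL model: guessing floor c, discrimination a, difficulty b."""
    return c + (1 - c) * rasch(a * (theta - b), 0.0)

p_easy = rasch(0.0, -1.0)   # average student, easy item  (~0.73)
p_hard = rasch(0.0, 1.0)    # average student, hard item  (~0.27)
p_floor = three_pl(-10.0, 1.5, 0.0, 0.25)  # guessing keeps P near 0.25
```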
28,308 | Similarities and differences between IRT model and Logistic regression model | @Tom's response is excellent, but I'd like to offer a version that's more heuristic and that introduces an additional concept.
Logistic regression
Imagine we have a number of binary questions. If we are interested in the probability of responding yes to any one of the questions, and if we're interested in the effect of some independent variables on that probability, we use logistic regression:
$P(y_i = 1) = \frac{1}{1 + \exp(-X\beta)} = \mathrm{logit}^{-1}(X\beta)$
where i indexes the questions (i.e. the items), X is a vector of characteristics of the respondents, and $\beta$ is the effect of each of those characteristics in log odds terms.
IRT
Now, note that I said we had a number of binary questions. Those questions might all get at some kind of latent trait, e.g. verbal ability, level of depression, level of extraversion. Often, we are interested in the level of the latent trait itself.
For example, in the Graduate Record Exam, we're interested in characterizing the verbal and math ability of various applicants. We want some good measure of their score. We could obviously count how many questions someone got correct, but that does treat all questions as being worth the same amount - it doesn't explicitly account for the fact that questions might vary in difficulty. The solution is item response theory. Again, we're (for now) not interested in either X or $\beta$, but we're just interested in the person's verbal ability, which we'll call $\theta$. We use each person's pattern of responses to all the questions to estimate $\theta$:
$P(y_i = 1) = \mathrm{logit}^{-1}[a_i(\theta_j - b_i)]$
where $a_i$ is discrimination of item i and $b_i$ is its difficulty.
So, that's one obvious distinction between regular logistic regression and IRT. In the former, we're interested in the effects of independent variables on one binary dependent variable. In the latter, we use a bunch of binary (or categorical) variables to predict some latent trait. The original post said that $\theta$ is our independent variable. I'd respectfully disagree; I think it's better seen as the dependent variable in IRT.
I used binary items and logistic regression for simplicity, but the approach generalizes to ordered items and ordered logistic regression.
Explanatory IRT
What if you were interested in the things that predict the latent trait, though, i.e. the Xs and $\beta$s previously mentioned?
As mentioned earlier, one way to estimate the latent trait is just to count the number of correct answers, or to add up all the values of your Likert (i.e. categorical) items. That has its flaws; you're assuming that each item (or each level of each item) is worth the same amount of the latent trait. This approach is common enough in many fields.
Perhaps you can see where I'm going with this: you can use IRT to predict the level of the latent trait, then conduct a regular linear regression. That would ignore the uncertainty in each person's latent trait, though.
A more principled approach would be to use explanatory IRT: you simultaneously estimate $\theta$ using an IRT model and you estimate the effect of your Xs on $\theta$ as if you were using linear regression. You can even extend this approach to include random effects to represent, for example, the fact that students are nested in schools.
More reading available on Phil Chalmers' excellent intro to his mirt package. If you understand the nuts and bolts of IRT, I'd go to the Mixed Effects IRT section of these slides. Stata is also capable of fitting explanatory IRT models (albeit I believe it can't fit random effects explanatory IRT models as I described above).
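"Use each person's pattern of responses to estimate $\theta$" can be sketched as a brute-force maximum-likelihood search (the items, the grid range, and the function names are all mine):

```python
import math

def p_correct(theta, a, b):
    """2PL probability of answering item (a, b) correctly."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def log_lik(theta, responses, items):
    """Log-likelihood of a 0/1 response pattern given item parameters."""
    total = 0.0
    for y, (a, b) in zip(responses, items):
        p = p_correct(theta, a, b)
        total += math.log(p if y == 1 else 1.0 - p)
    return total

def estimate_theta(responses, items):
    """MLE of theta by grid search over -4.00 .. 4.00."""
    grid = [g / 100.0 for g in range(-400, 401)]
    return max(grid, key=lambda t: log_lik(t, responses, items))

items = [(1.0, -1.0), (1.0, 0.0), (1.0, 1.0)]   # difficulties -1, 0, 1
theta_mixed = estimate_theta([1, 1, 0], items)  # two of three correct
theta_none = estimate_theta([0, 0, 0], items)   # all wrong -> grid floor
```

With a perfect (all-correct or all-wrong) pattern the likelihood is monotone in $\theta$ and the estimate runs to the edge of the grid, which is why real IRT software falls back on priors or EAP-style estimates for such patterns.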
28,309 | Random variable with zero variance | $E[(X-E[X])^2] =0 \implies X = E[X]$
Thus $X$ is almost surely constant. A better description of such a random variable is that it follows a degenerate distribution.
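The converse direction can be checked numerically (the sample values are arbitrary): a constant sample has exactly zero variance, and any deviation from a point mass makes the variance positive.

```python
import statistics

# pvariance of identical values is exactly 0 (point-mass / degenerate case);
# perturbing a single observation makes it strictly positive.
var_const = statistics.pvariance([3.7] * 1000)
var_almost = statistics.pvariance([3.7] * 999 + [3.8])
```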
28,310 | Unbiased estimator of poisson parameter | If $X\sim \text{Pois}(\lambda)$, then $P(X = k) = \lambda^ke^{-\lambda}/k!$, for $k\geq 0$. It is hard to compute
$$E[X^n] = \sum_{k\geq 0} k^n P(X = k)\text{,}$$
but it is much easier to compute $E[X^{\underline{n}}]$, where $X^{\underline{n}} = X(X - 1)\cdots (X - n + 1)$:
$$E[X^\underline{n}]=\lambda^n\text{.}$$
You can prove this yourself; it is an easy exercise. Also, I will let you prove the following: If $X_1,\cdots, X_N$ are i.i.d. $\text{Pois}(\lambda)$, then $U = \sum_i X_i\sim \text{Pois}(N\lambda)$, hence
$$E[U^{\underline{n}}] = (N\lambda)^n = N^n \lambda^n\quad\text{and}\quad E[U^\underline{n}/N^n] = \lambda^n\text{.}$$
Let $Z_n = U^{\underline{n}}/N^n$. It follows that
$Z_n$'s are functions of your measurements $X_1$, $\dots$, $X_N$
$E[Z_n] = \lambda^n$,
Since $e^\lambda = \sum_{n\geq 0}\lambda^n /n!$, we can deduce that
$$E\left[\sum_{n\geq 0}\frac{Z_n}{n!}\right] =\sum_{n\geq 0} \frac{\lambda^n}{n!} = e^\lambda\text{,}$$
hence, your unbiased estimator is $W = \sum_{n\geq 0} Z_n/n!$, i.e., $E[W] = e^\lambda$. However, to compute $W$, one must evaluate a sum that seems to be infinite; but note that $U\in \mathbb{N}_0$, hence $U^\underline{n} = 0$ for $n>U$. It follows that $Z_n = 0$ for $n>U$, so the sum is actually finite.
We can see that by using this method, you can find the unbiased estimator for any function of $\lambda$ that can be expressed as $f(\lambda) = \sum_{n\geq 0}a_n\lambda^n$. | Unbiased estimator of poisson parameter | If $X\sim \text{Pois}(\lambda)$, then $P(X = k) = \lambda^ke^{-\lambda}/k!$, for $k\geq 0$. It is hard to compute
$$E[X^n] = \sum_{k\geq 0} k^n P(X = k)\text{,}$$
but is is much easier to compute $E[X | Unbiased estimator of poisson parameter
If $X\sim \text{Pois}(\lambda)$, then $P(X = k) = \lambda^ke^{-\lambda}/k!$, for $k\geq 0$. It is hard to compute
$$E[X^n] = \sum_{k\geq 0} k^n P(X = k)\text{,}$$
but is is much easier to compute $E[X^{\underline{n}}]$, where $X^{\underline{n}} = X(X - 1)\cdots (X - n + 1)$:
$$E[X^\underline{n}]=\lambda^n\text{.}$$
You can prove this by yourself - it is an easy exercise. Also, I will let you prove by yourself the following: If $X_1,\cdots, X_N$ are i.i.d as $\text{Pois}(\lambda)$, then $U = \sum_i X_i\sim \text{Pois}(N\lambda)$, hence
$$E[U^{\underline{n}}] = (N\lambda)^n = N^n \lambda^n\quad\text{and}\quad E[U^\underline{n}/N^n] = \lambda^n\text{.}$$
Let $Z_n = U^{\underline{n}}/N^n$. It follows that
$Z_n$'s are functions of your measurements $X_1$, $\dots$, $X_N$
$E[Z_n] = \lambda^n$,
Since $e^\lambda = \sum_{n\geq 0}\lambda^n /n!$, we can deduce that
$$E\left[\sum_{n\geq 0}\frac{Z_n}{n!}\right] =\sum_{n\geq 0} \frac{\lambda^n}{n!} = e^\lambda\text{,}$$
hence, your unbiased estimator is $W = \sum_{n\geq 0} Z_n/n!$, i.e, $E[W] = e^\lambda$. However, to compute $W$, one must evaluate a sum that seems to be infinite, but note that $U\in \mathbb{N}_0$, hence $U^\underline{n} = 0$ for $n>U$. It follows that $Z_n = 0$ for $n>U$, hence the sum is finite.
We can see that by using this method, you can find an unbiased estimator for any function of $\lambda$ that can be expressed as a power series $f(\lambda) = \sum_{n\geq 0}a_n\lambda^n$.
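As a quick numerical check of the finite series (the sample below is invented), note that by the binomial theorem the sum actually collapses to a closed form: $\sum_{n\leq U} U^{\underline{n}}/(N^n\, n!) = \sum_{n\leq U} \binom{U}{n} N^{-n} = (1+1/N)^U$.

```python
import math

def unbiased_exp_lambda(xs):
    """W = sum_{n=0}^{U} U^(falling n) / (N^n * n!), the estimator above."""
    U, N = sum(xs), len(xs)
    total, falling = 0.0, 1.0            # `falling` holds U^(falling n), starting at n = 0
    for n in range(U + 1):               # terms with n > U vanish
        total += falling / (N ** n * math.factorial(n))
        falling *= (U - n)               # update to U^(falling n+1)
    return total

xs = [2, 0, 3, 1, 4, 2, 1, 0, 2, 3]      # hypothetical sample: N = 10, U = 18
W = unbiased_exp_lambda(xs)

# By the binomial theorem the finite series collapses to (1 + 1/N)^U:
assert abs(W - (1 + 1 / len(xs)) ** sum(xs)) < 1e-8
```

With $N = 10$ this is exactly the $(11/10)^U$ estimator that the next answer derives via the moment generating function.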
28,311 | Unbiased estimator of poisson parameter | It follows that $Y=\sum_{i=1}^{10} X_i \sim \text{Pois}(10\lambda)$. We want to estimate $\theta=e^\lambda$. As you say, a possible estimator would be
\begin{equation}
\hat\theta = e^{\bar X} = e^{Y/10}.
\end{equation}
Using the moment generating function of $Y$,
\begin{equation}
M_Y(t)=e^{10\lambda(e^t - 1)},
\end{equation}
we find that
\begin{equation}
E(\hat\theta) = E(e^{\frac1{10}Y}) = M_Y(\frac1{10}) = e^{10\lambda(e^{1/10} - 1)} = \theta^{10(e^{1/10}-1)},
\end{equation}
so $\hat\theta$ is biased. Some guesswork suggests that
\begin{equation}
\theta^* = e^{aY},
\end{equation}
may be unbiased for a suitable choice of the correction factor $a$. Again, using the mgf of $Y$ we find that
\begin{equation}
E(\theta^*) = e^{10\lambda(e^a - 1)} = \theta^{10(e^a-1)},
\end{equation}
so this is unbiased if $10(e^a - 1) = 1$ which leads to $a=\ln\frac{11}{10}$ and $\theta^* = (\frac{11}{10})^Y$ as an unbiased estimator of $\theta=e^\lambda$.
By the Lehmann-Scheffé theorem, since $Y$ is a complete sufficient statistic for $\lambda$, the estimator $\theta^*$ (a function of $Y$) is the UMVUE of $e^\lambda$.
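A numerical sanity check of both expectations (taking $\lambda = 1$ arbitrarily): summing $g(y)\,P(Y=y)$ over the Poisson pmf reproduces the two MGF identities above.

```python
import math

def poisson_expectation(g, rate, kmax=200):
    """E[g(Y)] for Y ~ Pois(rate), summing g(k) * pmf(k) term by term."""
    p = math.exp(-rate)              # P(Y = 0)
    total = g(0) * p
    for k in range(1, kmax + 1):
        p *= rate / k                # P(Y = k) from P(Y = k - 1)
        total += g(k) * p
    return total

lam, n = 1.0, 10
rate = n * lam                       # Y = X_1 + ... + X_10 ~ Pois(10 * lam)

biased = poisson_expectation(lambda y: math.exp(y / n), rate)      # E[e^{Ybar}]
unbiased = poisson_expectation(lambda y: (1 + 1 / n) ** y, rate)   # E[(11/10)^Y]

assert abs(biased - math.exp(rate * (math.exp(1 / n) - 1))) < 1e-9  # theta^{10(e^{1/10}-1)}
assert abs(unbiased - math.exp(lam)) < 1e-9                         # exactly e^lambda
```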
28,312 | Bayes optimal classifier vs Likelihood Ratio | They are not the same, but in your case they could be used for the same purpose.
Optimal Bayes classifier is
$$ \DeclareMathOperator*{\argmax}{arg\,max} \argmax_{c \in C} p(c|X) $$
i.e., among all hypotheses, take the $c$ that maximizes the posterior probability. You use Bayes' theorem
$$ \underbrace{p(c|X)}_{\text{posterior}} \propto \underbrace{p(X|c)}_{\text{likelihood}} \underbrace{p(c)}_{\text{prior}} $$
but with a uniform prior (all $c$ are equally likely, so $p(c) \propto 1$) the posterior reduces to the likelihood function
$$ p(c|X) \propto p(X|c) $$
The difference between maximizing the likelihood function and comparing likelihood ratios is that with a likelihood ratio you compare only two likelihoods, while when maximizing the likelihood you may consider multiple hypotheses. So if you have only two hypotheses, they will do essentially the same thing. However, imagine that you had many classes; in that case comparing them pair by pair would be a really inefficient way to go.
Notice that the likelihood ratio also serves another purpose beyond finding which of the two models has the greater likelihood. The likelihood ratio can be used for hypothesis testing, and it tells you how much more (or less) likely one of the models is compared to the other. Moreover, you can do the same when comparing the posterior distributions by using the Bayes factor in a similar fashion.
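A small sketch of this equivalence for two classes (the Gaussian class-conditional densities below are invented for illustration): with a uniform prior, the arg-max-posterior rule and the likelihood-ratio rule make identical decisions, and a non-uniform prior breaks the equivalence.

```python
import math

def gauss_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Two hypothetical classes with known class-conditional densities p(x|c).
classes = {"c0": (0.0, 1.0), "c1": (2.0, 1.0)}

def bayes_classify(x, prior):
    # arg max_c p(x|c) * p(c), the unnormalised posterior
    return max(classes, key=lambda c: gauss_pdf(x, *classes[c]) * prior[c])

def lr_classify(x):
    # likelihood ratio p(x|c1) / p(x|c0) compared against 1
    lr = gauss_pdf(x, *classes["c1"]) / gauss_pdf(x, *classes["c0"])
    return "c1" if lr > 1 else "c0"

uniform = {"c0": 0.5, "c1": 0.5}
for x in (-1.0, 0.3, 0.9, 1.5, 3.0):
    assert bayes_classify(x, uniform) == lr_classify(x)   # identical decisions

# With a non-uniform prior the two rules can disagree:
assert bayes_classify(1.5, {"c0": 0.9, "c1": 0.1}) != lr_classify(1.5)
```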
28,313 | Can a paired (or two group) t-test test if the difference between two means is less than a specific value? | Sure, you can do that. You don't have to test against a null hypothesis of $0$ (sometimes called a "nil null"); you can test against any value. You also don't have to do a two-tailed test; you can perform a one-tailed test (when specified a-priori). The paired $t$-test is:
$$
t = \frac{\bar x_D - \mu_{\rm null}}{\frac{s_D}{\sqrt N}}
$$
Thus, to combine the two less typical possibilities noted above, you substitute your specific value for $\mu_{\rm null}$, and run a one-tailed test.
Here is a simple example (coded in R):
set.seed(2786) # this makes the example exactly reproducible
x1 = rnorm(20, mean=3, sd=5) # I'm generating data from a normal distribution
x2 = x1 - rnorm(20, mean=0, sd=1) # the true difference is 0
## this is a paired t-test of whether the difference is <1:
t.test(x1, x2, mu=1, alternative="less", paired=TRUE)
#
# Paired t-test
#
# data: x1 and x2
# t = -7.5783, df = 19, p-value = 1.855e-07
# alternative hypothesis: true difference in means is less than 1
# 95 percent confidence interval:
# -Inf -0.02484498
# sample estimates:
# mean of the differences
# -0.3278085
28,314 | User segmentation by clustering with sparse data | $K$-Means is very unlikely to give meaningful clusters in such a high-dimensional space (see e.g. the Curse of Dimensionality).
I agree with the suggestions in the comments: you need to reduce the dimensionality of your data and then do $K$-Means on the reduced space.
However, I would not do PCA in the usual way: PCA requires mean normalization, and that will turn a sparse matrix into a dense one. What you can do instead is SVD - without mean normalization - and then apply the clustering algorithm on the reduced representation. Also note that Randomized SVD works fine here and is much faster.
Another potentially interesting technique is Non-Negative Matrix Factorization (NMF). Since your data contains only positive values (if I got it correctly), NMF should suit the problem well. Also, you can interpret the results of NMF as clustering: when doing $n$-dimensional NMF, we can think of the columns of the resulting matrix as clusters, with the value in cell $i$ being the degree of association of the observation with cluster $i$.
You can read more about applying NMF for clustering in Wei Xu, Xin Liu, and Yihong Gong, "Document clustering based on non-negative matrix factorization" (pdf).
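A toy sketch of the suggested pipeline (the count matrix and the fixed k-means initialisation are invented; at real scale you would use a sparse or randomized SVD routine rather than a dense one): SVD without mean-centring, then k-means on the reduced space.

```python
import numpy as np

# Hypothetical user x item count matrix with two obvious behaviour groups.
X = np.array([
    [5, 4, 0, 0],
    [4, 5, 1, 0],
    [5, 5, 0, 1],
    [0, 1, 4, 5],
    [0, 0, 5, 4],
    [1, 0, 4, 5],
], dtype=float)

# SVD *without* mean-centring (dense np.linalg.svd here for simplicity;
# centring would destroy sparsity in a real, large matrix).
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = U[:, :2] * s[:2]              # users embedded in a 2-d latent space

def kmeans(Z, init_idx, iters=20):
    """Plain k-means with a fixed, reproducible initialisation."""
    centers = Z[list(init_idx)].copy()
    for _ in range(iters):
        d = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)         # assign each user to the nearest center
        for j in range(len(centers)):
            if (labels == j).any():
                centers[j] = Z[labels == j].mean(axis=0)
    return labels

labels = kmeans(Z, init_idx=(0, 3))       # one seed point per apparent group
assert labels[0] == labels[1] == labels[2]
assert labels[3] == labels[4] == labels[5]
assert labels[0] != labels[3]             # the two behaviour groups are recovered
```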
28,315 | If $\mathbb{E}[X] = k$ and $\text{Var}[X] = 0$, is $\Pr\left(X = k\right) = 1$? | Here is a measure theoretic proof to complement the others, using only definitions. We work on a probability space $(\Omega, \mathcal F, P)$. Notice that $Y:=(X - \mathbb EX)^2 \geq 0$ and consider the integral $\mathbb EY :=\int Y(\omega) P(d\omega)$. Suppose that for some $\epsilon>0$, there exists $A\in \mathcal F$ such that $Y>\epsilon$ on $A$ and $P(A)>0$. Then $\epsilon I_A$ approximates $Y$ from below, so by the standard definition of $\mathbb E Y$ as the supremum of integrals of simple functions approximating from below, $$\mathbb EY\geq \int\epsilon I_AP(d\omega) = \epsilon P(A)>0,$$ which is a contradiction. Thus, $\forall \epsilon>0$, $P\left(\{\omega : Y>\epsilon \}\right) = 0$. Taking the union over $\epsilon = 1/n$, $n\in\mathbb N$, gives $P(Y>0)=0$, i.e. $X = k$ almost surely. Done.
28,316 | If $\mathbb{E}[X] = k$ and $\text{Var}[X] = 0$, is $\Pr\left(X = k\right) = 1$? | Prove this by contradiction. By the definition of the variance and your assumptions, you have
$$ 0 =\text{Var}X = \int_\mathbb{R} (x-k)^2\,f(x)\,dx, $$
where $f$ is the probability density of $X$ (assuming $X$ is continuous; the discrete case is analogous, with sums in place of integrals). Note that both $(x-k)^2$ and $f(x)$ are nonnegative.
Now, if $P(X=k)<1$, then
$$U:=\big(\mathbb{R}\setminus\{k\}\big)\cap f^{-1}\big(]0,\infty[\big) $$
has measure greater than zero, and $k\notin U$. But then
$$ \int_U (x-k)^2\,f(x)\,dx > 0,$$
(some $\epsilon$-style argument could be included here) and therefore
$$ 0 =\text{Var}X = \int_\mathbb{R} (x-k)^2\,f(x)\,dx \geq \int_U (x-k)^2\,f(x)\,dx > 0,$$
which is the desired contradiction.
28,317 | If $\mathbb{E}[X] = k$ and $\text{Var}[X] = 0$, is $\Pr\left(X = k\right) = 1$? | What is $X \equiv k$? Is that the same as $X = k$ a.s.?
ETA: Iirc, $X \equiv k \iff X(\omega) = k \ \forall \ \omega \in \Omega \to X=k \ \text{a.s.}$
Anyway, it is obvious that
$$(X-E[X])^2 \ge 0$$
Suppose
$$E[(X-E[X])^2] = 0$$
Then
$$(X-E[X])^2 = 0 \ \text{a.s.}$$
The last step I believe involves continuity of probability...or what you did (You are right).
There's also Chebyshev's Inequality:
$\forall \epsilon > 0$,
$$P(|X-k| \ge \epsilon) \le \frac{0}{\epsilon^2} = 0$$
$$P(|X-k| \ge \epsilon) = 0$$
$$\to P(|X-k| < \epsilon) = 1$$
Good talking again.
Btw why is it that
$$\int_{\mathbb{R}}x\text{ d}F(x) = \int_{\mathbb{R}}x^2\text{ d}F(x)$$
?
It seems to me that $LHS = k$ while $RHS = k^2$
28,318 | Does increase in training set size help in increasing the accuracy perpetually or is there a saturation point? | There is a saturation point.
Increasing the size of your training set can't help you surpass the assumptions of your modeling method. For example, if you use a linear model to classify data that is separable in a nonlinear way, you will never get perfect accuracy. As we almost never know the underlying process to its full extent, model mismatch is the norm. As George Box famously said "All models are wrong, but some are useful".
Powerful learning methods like neural networks (aka deep learning) or random forests can push the boundaries a little more than less flexible approaches (e.g. kernel methods), but even for them there is only so much that can be learned. Additionally, the amount of data and other resources you would need to gain worthwhile improvements becomes excessive at some point.
28,319 | Does increase in training set size help in increasing the accuracy perpetually or is there a saturation point? | Your training dataset needs to be representative of the dataset you'll need to classify. Even if it's huge, if it doesn't capture the corner cases, they'll be misclassified. However, on the other hand, you'll need to be careful of overfitting, if it applies to your case.
Also, if you have a virtually unlimited annotated dataset at your disposal, you can repeatedly and randomly split it in training/validation/testing to make sure you have the best model possible. It will probably take days to run, but I think it will be worth it.
28,320 | Does increase in training set size help in increasing the accuracy perpetually or is there a saturation point? | The key issue in my opinion is that we will never know the underlying process exactly.
We don't know which factors influence class membership. (I am a firm believer in so-called "tapering effect sizes": essentially, everything has an impact on everything else, just to a smaller and smaller extent.)
Often enough, we even have problems operationalizing those influencers we do know about. For instance, I'm sure that intelligence influences earnings, but I'm just as sure that "intelligence" is not perfectly (!) measured by IQ tests. Psychologists worry a lot about so-called "construct validity", and rightly so.
Even if we know a factor and have operationalized it well, we don't know whether its influence is linear, logarithmic or some other weird shape... and we have an entire tag devoted to the problem that a predictor's influence can change over its domain of definition. And I only have logistic regression in my mind as I write this - the same problem will also apply to any other kind of classifier.
And finally, all these problems are magnified indefinitely by the possibilities for interactions: two-way, three-way, four-way, ...
We might think that collecting more and more data and using more and more sophisticated algorithms will solve these problems. However, the number of "reasonable" models we can fit to any given size of dataset will always grow at least as quickly as the dataset, since there are just so many possible predictors, from the phase of the moon to what your participants ate for breakfast. In the end, you will always be tripped up by the bias-variance tradeoff.
28,321 | Does increase in training set size help in increasing the accuracy perpetually or is there a saturation point? | The maximal performance of the set of possible prediction models has an upper bound. As an example, look at a binary outcome $y$. For simplicity, assume we know that $y = 1$ with prior probability 0.5, i.e. both outcomes are equally likely. Let $x$ be a vector containing the values of your predictors. By Bayes:
$P(y=1\mid x)=\frac{P(x\mid y=1)}{P(x\mid y=1)+P(x\mid y=0)}$.
The theoretical best prediction model will predict the $y$ that has a higher likelihood of producing $x$.
But unless one of the two terms in the denominator is zero, the theorem of Bayes gives you a non-zero probability of the best prediction being wrong.
The easiest example would be $y$ and $x$ being completely unrelated. Then whatever you predict for $y$, you will be wrong with probability 0.5. And no method can improve on that.
At best, your algorithm will converge towards this theoretical optimum. Even then you will usually not achieve the optimal performance with any finite sample size, but the improvements get smaller and smaller.
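A concrete illustration of this ceiling (the Gaussian class-conditionals are invented): every decision threshold has an accuracy at most that of the Bayes rule, no matter how much data you train on.

```python
import math

def Phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Toy model: x | y=0 ~ N(0,1), x | y=1 ~ N(2,1), with P(y=1) = 0.5.
def accuracy(t):
    # Rule "predict y = 1 iff x >= t": correct on y=0 when x < t, on y=1 when x >= t.
    return 0.5 * Phi(t) + 0.5 * (1 - Phi(t - 2))

bayes_acc = accuracy(1.0)   # the Bayes rule cuts where the two likelihoods cross
for t in (0.0, 0.5, 1.5, 2.0):
    assert accuracy(t) < bayes_acc   # every other threshold is strictly worse
# bayes_acc is roughly 0.84: the irreducible ceiling for this problem.
```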
28,322 | When to use Gradient descent vs Monte Carlo as a numerical optimization technique | These techniques do different things.
Gradient descent is an optimization technique, therefore it is common in any statistical method that requires maximization (MLE, MAP).
Monte Carlo simulation is for computing integrals by sampling from a distribution and evaluating some function on the samples. Therefore it is commonly used with techniques that require computation of expectations (Bayesian Inference, Bayesian Hypothesis Testing).
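Two minimal toy sketches of the distinction: gradient descent drives a parameter towards a minimiser of a loss, while Monte Carlo averages random samples to approximate an integral.

```python
import random

# Optimization: gradient descent on f(x) = (x - 3)^2, using f'(x) = 2(x - 3).
x, lr = 0.0, 0.1
for _ in range(200):
    x -= lr * 2 * (x - 3)
assert abs(x - 3) < 1e-6          # converges to the minimiser x = 3

# Integration: Monte Carlo estimate of E[X^2] = 1/3 for X ~ Uniform(0, 1).
random.seed(0)
n = 100_000
est = sum(random.random() ** 2 for _ in range(n)) / n
assert abs(est - 1 / 3) < 0.01    # matches the exact integral up to sampling noise
```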
28,323 | When to use Gradient descent vs Monte Carlo as a numerical optimization technique | These are both huge families of algorithms, so it's difficult to give you a precise answer, but...
Gradient ascent (or descent) is useful when you want to find a maximum (or minimum). For example, you might be finding the mode of a probability distribution, or a combination of parameters that minimize some loss function. The "path" it takes to find these extrema can tell you a little bit about the overall shape of the function, but it's not intended to; in fact, the better it works, the less you'll know about everything but the extrema.
Monte Carlo methods are named after the Monte Carlo casino because they, like the casino, depend on randomization. They can be used in many different ways, but most of these focus on approximating distributions. Markov Chain Monte Carlo algorithms, for example, find ways to efficiently sample from complicated probability distributions. Other Monte Carlo simulations might generate distributions over possible outcomes.
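A minimal sketch of the second point: Monte Carlo approximates an expectation by averaging over random draws - here $E[X^2]$ for $X \sim N(0,1)$, whose true value is 1.

```python
import random

# Monte Carlo estimation of an expectation: sample from the distribution and
# average the function of interest over the samples.
random.seed(0)
n = 200_000
estimate = sum(random.gauss(0.0, 1.0) ** 2 for _ in range(n)) / n
print(estimate)  # close to 1.0
```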
28,324 | When to use Gradient descent vs Monte Carlo as a numerical optimization technique | This answer is partially wrong. You can indeed combine Monte Carlo methods with gradient descent. You can use Monte Carlo methods to estimate the gradient of a loss function, which is then used by gradient descent to update the parameters. A popular Monte Carlo method to estimate the gradient is the score gradient estimator, which can e.g. be used in reinforcement learning. See Monte Carlo Gradient Estimation in Machine Learning (2019) by Shakir Mohamed et al. for more info.
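A sketch of the score gradient estimator mentioned above (an illustration of the idea under a simple Gaussian assumption, not code from the cited paper): for $x \sim N(\mu, 1)$, $\frac{d}{d\mu} E[f(x)] = E[f(x)\,\frac{d}{d\mu}\log p(x;\mu)] = E[f(x)(x-\mu)]$.

```python
import random

# Score (REINFORCE) gradient estimator for x ~ N(mu, 1). With f(x) = x^2 we
# have E[f(x)] = mu^2 + 1, so the true gradient w.r.t. mu is 2 * mu.
random.seed(0)
mu, n = 1.5, 500_000

def f(x):
    return x * x

samples = [random.gauss(mu, 1.0) for _ in range(n)]
grad_est = sum(f(x) * (x - mu) for x in samples) / n  # E[f(x) * (x - mu)]
print(grad_est)  # close to 2 * mu = 3.0
```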
28,325 | When to use Gradient descent vs Monte Carlo as a numerical optimization technique | As explained by others, gradient descent/ascent performs optimisation, i.e. finds the maximum or minimum of a function. Monte Carlo is a method of stochastic simulation, i.e. approximates a cumulative distribution function via repeated random sampling. This is also called "Monte Carlo integration" because the c.d.f. of a continuous distribution is actually an integral.
What's common between gradient descent and Monte Carlo is that they're both particularly useful in problems where no closed-form solution exists. You may use simple differentiation to find the maximum or minimum point of any convex function whenever an analytical solution is feasible. When such a solution does not exist, you need to use an iterative method such as gradient descent. It is the same for Monte Carlo simulation; you can basically use plain integration to calculate any c.d.f. analytically but there's no guarantee that such a closed-form solution will always be possible. The problem becomes solvable again with Monte Carlo simulation.
Can you use gradient descent for simulation and Monte Carlo for optimisation? The simple answer is no. Monte Carlo needs a stochastic element (a distribution) to sample from and gradient descent has no means of handling stochastic information problems. You can, however, combine simulation with optimisation in order to produce more powerful stochastic optimisation algorithms that are able to solve very complex problems that simple gradient descent is unable to solve. An example of this would be Simulated Annealing Monte Carlo.
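A minimal simulated annealing sketch (the objective and cooling schedule are illustrative assumptions; the answer only names the technique): random proposals plus a temperature-controlled acceptance rule let the search escape a local minimum that would trap plain gradient descent.

```python
import math
import random

def f(x):
    return x * x + 10.0 * math.sin(x)    # global minimum near x ~ -1.31

random.seed(42)
x, temp = 8.0, 5.0                        # start near the wrong (local) basin
for _ in range(20_000):
    cand = x + random.gauss(0.0, 0.5)     # random proposal
    delta = f(cand) - f(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand                          # accept downhill, sometimes uphill
    temp = max(temp * 0.9995, 1e-3)       # geometric cooling schedule
print(round(x, 2))  # typically ends near the global minimizer around -1.31
```

Starting from the same point, pure descent would slide into the local minimum near $x \approx 3.8$; the occasional uphill acceptances at high temperature are what let the walk cross the barrier.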
28,326 | How to use anova for two models comparison? | Assuming your models are nested (i.e. same outcome variable and model 2 contains all the variables of model 1 plus 2 additional variables), then the ANOVA results state that the 2 additional variables jointly account for enough variance that you can reject the null hypothesis that the coefficients for both variables equal 0. This is effectively what you said. If both coefficients equal 0 then the models are the same.
Just as an additional note, in case you weren't aware, ANOVA is always equivalent to doing model comparisons. When you are looking at the ANOVA for a single model it gives you the effects for each predictor variable. That is equivalent to doing a model comparison between your full model and a model removing one of the variables, i.e. comparing Model 1: $y=a+bx_1+cx_2+dx_3$ with Model 2: $y=a+bx_1+cx_2$ will give you the sum of squares (type III) and test statistic for $x_3$. Just note that R gives you type I sum of squares. If you need type III, use car::Anova or use anova and keep changing the order of the variables in the model and only take the sum of squares for the last variable.
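For intuition, here is the nested-model F statistic that anova computes, done by hand on simulated data (a Python sketch with illustrative variable names, not the questioner's data):

```python
import numpy as np

# Nested-model F test by hand:
#   F = ((RSS1 - RSS2) / q) / (RSS2 / (n - p2)),
# where q is the number of coefficients added by the larger model.
rng = np.random.default_rng(0)
n = 100
x1, x2, x3 = rng.normal(size=(3, n))
y = 1.0 + 0.5 * x1 + 0.3 * x2 + 0.8 * x3 + rng.normal(size=n)

def rss(X, y):
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return float(resid @ resid)

X1 = np.column_stack([np.ones(n), x1, x2])       # smaller model
X2 = np.column_stack([np.ones(n), x1, x2, x3])   # larger model: adds x3
rss1, rss2 = rss(X1, y), rss(X2, y)

q = X2.shape[1] - X1.shape[1]                    # coefficients added (here 1)
F = ((rss1 - rss2) / q) / (rss2 / (n - X2.shape[1]))
print(F)  # a large F rejects the null that the added coefficients are zero
```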
28,327 | Trimmed mean vs median | Consider what a trimmed mean is: In the prototypical case, you first sort your data in increasing order. Then you count up to the trimming percentage from the bottom and discard those values. For example a 10% trimmed mean is common; in that case you count up from the lowest value until you've passed 10% of all the data in your set. The values below that mark are set aside. Likewise, you count down from the highest value until you've passed your trimming percentage, and set all values greater than that aside. You are now left with the middle 80%. You take the mean of that, and that is your 10% trimmed mean. (Note that you can trim unequal proportions from the two tails, or only trim one tail, but these approaches are less common and don't seem as applicable to your situation.)
Now think of what would happen if you calculated a 50% trimmed mean. The bottom half would be set aside, as would the top half. You would be left with only the single value in the middle (ordinally). You would take the mean of that (which is to say, you would just take that value) as your trimmed mean. Note however, that that value is the median. In other words, the median is a trimmed mean (it is a 50% trimmed mean). It is just a very aggressive one. It assumes, in essence, that up to half of your data could be contaminated. This gives you the ultimate protection against outliers at the expense of the ultimate loss of power / efficiency.
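A small sketch of the trimming procedure just described (assuming the common convention that the stated proportion is removed from each tail):

```python
import numpy as np

def trimmed_mean(x, prop):
    """Mean of x after dropping proportion `prop` from each tail."""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.floor(prop * len(x)))      # observations to drop per tail
    if k > 0:
        x = x[k:len(x) - k]
    return x.mean()

data = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 250]       # one gross outlier
print(trimmed_mean(data, 0.0))    # plain mean, dragged far up by the outlier
print(trimmed_mean(data, 0.10))   # 10% trimmed mean: 6.0
print(trimmed_mean(data, 0.5), np.median(data))   # 50% trim recovers the median
```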
My guess is a median / 50% trimmed mean is much more aggressive than is necessary for your data, and is too wasteful of the information available to you. If you have any sense of the proportion of outliers that exist, I would use that information to set the trimming percentage and use the appropriate trimmed mean. If you don't have any basis to choose the trimming percentage, you could select one by cross validation, or use a robust regression analysis with only an intercept.
28,328 | Trimmed mean vs median | First of all, remove the invalid data.
Secondly, you do not need to remove the outliers as they are observed values. In some cases, it is useful (like in linear regression) but in your case I don't see the point.
Finally, prefer using the median as it is more precise to find the center of your data. As you said, the mean can be sensitive to outliers (using trimmed mean can be biased).
28,329 | Are the estimates of the intercept and slope in simple linear regression independent? | Go to the same site on the following sub-page:
https://web.archive.org/web/20160914012624/https://onlinecourses.science.psu.edu/stat414/node/278
You will see more clearly that they specify the simple linear regression model with the regressor centered on its sample mean. And this explains why they subsequently say that $\hat \alpha$ and $\hat \beta$ are independent.
For the case when the coefficients are estimated with a regressor that is not centered, their covariance is
$$\text{Cov}(\hat \alpha,\hat \beta) = -\sigma^2(\bar x/S_{xx}), \;\;S_{xx} = \sum (x_i-\bar x)^2 $$
So you see that if we use a regressor centered on $\bar x$, call it $\tilde x$, the above covariance expression will use the sample mean of the centered regressor, $\tilde {\bar x}$, which will be zero, and so it, too, will be zero, and the coefficient estimators will be independent.
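A quick simulation sketch of this (illustrative, not from the linked page): the empirical covariance of the OLS estimates matches the formula for a raw regressor and vanishes once the regressor is centered.

```python
import numpy as np

# Fit y = a + b*x by OLS many times and compare the empirical covariance of
# (a_hat, b_hat) for a raw versus a mean-centered regressor.
rng = np.random.default_rng(0)
x = np.linspace(1.0, 10.0, 50)        # x_bar != 0
xc = x - x.mean()                     # centered copy
sigma = 2.0

raw, ctr = [], []
for _ in range(5000):
    y = 1.0 + 0.5 * x + rng.normal(0.0, sigma, size=x.size)
    raw.append(np.polyfit(x, y, 1))   # polyfit returns (slope, intercept)
    ctr.append(np.polyfit(xc, y, 1))

def cov_ab(estimates):
    slope, intercept = np.asarray(estimates).T
    return np.cov(intercept, slope)[0, 1]

sxx = np.sum((x - x.mean()) ** 2)
theory = -sigma**2 * x.mean() / sxx   # formula above, non-centered case
print(cov_ab(raw), theory)            # close to each other
print(cov_ab(ctr))                    # essentially zero after centering
```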
This post contains more on simple linear regression OLS algebra.
28,330 | How to test for Zero-Inflation in a dataset? | The score test (referenced in the comments by Ben Bolker) is performed by first calculating the rate estimate $\hat{\lambda}= \bar{x}$. Then count the number of observed 0s denoted $n_0$ and the total number of observations $n$. Calculate $\tilde{p}_0=\exp[-\hat{\lambda}]$. Then the test statistic is calculated by the formula: $\frac{(n_0 - n\tilde{p}_0 )^2}{n\tilde{p}_0(1-\tilde{p}_0) - n\bar{x}\tilde{p}_0^2}$. This test statistic has a $\chi^2_1$ distribution which can be looked up in tables or via statistical software.
Here is some R code that will do this:
pois_data <- rpois(100, lambda = 1)
lambda_est <- mean(pois_data)
p0_tilde <- exp(-lambda_est)
p0_tilde
n0 <- sum(1*(!(pois_data >0)))
n <- length(pois_data)
# number of observations 'expected' to be zero
n*p0_tilde
# now let's perform the JVDB score test
numerator <- (n0 -n*p0_tilde)^2
denominator <- n*p0_tilde*(1-p0_tilde) - n*lambda_est*(p0_tilde^2)
test_stat <- numerator/denominator
pvalue <- pchisq(test_stat,df=1, ncp=0, lower.tail=FALSE)
pvalue
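For completeness, here is a Python rendering of the same computation using only the standard library (the two datasets are made up for illustration); $\text{erfc}(\sqrt{t/2})$ gives the upper-tail probability of a $\chi^2_1$ variable.

```python
import math

# van den Broek score test for zero inflation, as in the formula above.
def zi_score_test(counts):
    n = len(counts)
    lam = sum(counts) / n                 # lambda_hat = sample mean
    p0 = math.exp(-lam)                   # p0_tilde = exp(-lambda_hat)
    n0 = sum(1 for c in counts if c == 0)
    stat = (n0 - n * p0) ** 2 / (n * p0 * (1 - p0) - n * lam * p0 ** 2)
    pvalue = math.erfc(math.sqrt(stat / 2.0))  # chi^2_1 upper tail
    return stat, pvalue

plausible = [0, 0, 1, 2, 1, 0, 3, 1, 0, 2]   # roughly Poisson-looking counts
inflated = [0] * 15 + [1, 2, 3, 1, 2]        # clear excess of zeros
print(zi_score_test(plausible))   # large p-value: no evidence of inflation
print(zi_score_test(inflated))    # small p-value: zero inflation detected
```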
28,331 | How to test for Zero-Inflation in a dataset? | I think there are different ways to do this. One thing you can do is to compare a zero-inflated negative binomial/Poisson model with its regular binomial/Poisson counterpart without the zero-inflation component. It would look like this in R:
zinb <- read.csv("http://www.ats.ucla.edu/stat/data/fish.csv")
zinb <- within(zinb, {
nofish <- factor(nofish)
livebait <- factor(livebait)
camper <- factor(camper)
})
require(pscl)
require(MASS)
require(boot)
## fit a negative binomial model
m1 <- glm.nb(count ~ child + camper, data = zinb)
## fit a zero-inflated negative binomial model
m1_zi <- zeroinfl(count ~ child + camper | persons,
data = zinb, dist = "negbin", EM = TRUE)
## compare 2 models
vuong(m1, m1_zi)
For more information, see this ever useful tutorial.
28,332 | How to test for Zero-Inflation in a dataset? | Consider some model $f(x)$. If we want to turn $f(x)$ into a zero-inflated model, then we define $g(x)$ to equal $f(x)$ with proportion $p$ and to equal $0$ with proportion $1-p$.
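A small simulation of this definition (with illustrative parameters): with probability $1-p$ emit a structural zero, otherwise draw from $f$, here a Poisson.

```python
import numpy as np

# Zero-inflated Poisson generation: two processes, one of which produces only
# zeros. Note that both processes can yield a zero.
rng = np.random.default_rng(0)
p, lam, n = 0.7, 2.0, 200_000

from_f = rng.random(n) < p                        # which process fired
draws = np.where(from_f, rng.poisson(lam, n), 0)  # structural zeros otherwise

observed_zeros = float(np.mean(draws == 0))
expected_zeros = (1 - p) + p * np.exp(-lam)       # P(zero) from both processes
print(observed_zeros, expected_zeros)             # the two agree closely
```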
In this case, there are two processes at work here. One process generates only zeros and one process generates results from $f(x)$. My understanding is that a zero-inflated model is only appropriate when there is an alternate process that generates only zeros. For example, if you are attempting to estimate the number of widgets different stores sell, but some stores do not have widgets for sale, then it seems like two processes are at work here: one process that generates only zeros (those stores that cannot sell widgets because they do not ever stock widgets for sale) and another process that generates different values (those stores that do stock widgets and therefore can sell some).
Rather than having a "test" to determine whether the data are zero-inflated, I would suggest determining whether it is plausible that there are two processes at work - one being a zero-generating process at work and another process that generates non-zero numbers. If it seems reasonable given the context of your data, then use a zero-inflated model. If it doesn't seem reasonable given the context of your data, then a zero-inflated model is probably inappropriate even though it may appear to fit your data better.
(It might not be clear from what I've written above, but I want to articulate the fact that both processes can generate zeros. One process generates only zeros and the other process can generate different values which may be zero. For example, a store can stock widgets and happen to sell zero widgets. This is different from a store that does not stock widgets and therefore must sell zero widgets by default.)
28,333 | very high frequency time series analysis (seconds) and Forecasting (Python/R) | Question #1:
The problem is that in the MLE case, both the Python (statsmodels) and R procedures use state-space models to estimate the likelihood. In the SARIMAX class, the state space grows linearly (or worse) with the seasonal period (because the state-space form incorporates all intermediate lags too - so if you have a lag at 3600, the state-space form also has all the 3599 intermediate lags).
So you now have a couple of issues - first, you're multiplying 3600 x 3600 matrices by each other, which is slow. Even worse, state space models need to be initialized, and often they are by default initialized using a stationary initialization that requires solving a 3600-dimensional linear system. When I tested a 3600 seasonal order, it wasn't even getting past this part.
The R arima function accepts method='CSS' which uses least-squares (conditional MLE instead of full MLE) to solve the problem. Depending on how the arima function works, it could be much better in your case.
In Python, there aren't many good options. The SARIMAX class accepts a conserve_memory option, but if you do that, you can't forecast. To solve the initialization problem, you can call the initialize_approximate_diffuse method to avoid solving the 3600-dimensional linear system. However, even in these cases, you'll be multiplying 3600 x 3600 matrices together, which will be quite slow. I would like to update the SARIMAX class to work with sparse matrices (which would solve this problem) but that's probably quite a ways in the future. I don't know of any non-commercial program that implements state space models using sparse matrices.
Question #5:
This was a bug in the statsmodels code. It has been fixed in the repository (see https://github.com/ChadFulton/statsmodels/issues/2) | very high frequency time series analysis (seconds) and Forecasting (Python/R) | Question #1:
The problem is that in the MLE case, both the Python (statsmodels) and R procedures use state-space models to estimate the likelihood. In an SARIMAX class, the state-space grows linearly | very high frequency time series analysis (seconds) and Forecasting (Python/R)
Question #1:
The problem is that in the MLE case, both the Python (statsmodels) and R procedures use state-space models to estimate the likelihood. In an SARIMAX class, the state-space grows linearly (or worse) with the number of seasons (because the state-space form incorporates all intermediate lags too - so if you have a lag at 3600, the state-space form also has all the 3599 intermediate lags).
So you now have a couple of issues - first, you're multiplying 3600+ matrices by each other, which is slow. Even worse, state space models need to be initialized and often they are by default initialized using a stationary initialization that requires solving a 3600 linear system. When I tested a 3600 seasonal order, it wasn't even getting past this part.
The R arima function accepts method='CSS' which uses least-squares (conditional MLE instead of full MLE) to solve the problem. Depending on how the arima function works, it could be much better in your case.
In Python, there aren't many good options. The SARIMAX class accepts a conserve_memory option, but if you do that, you can't forecast. To solve the initialization problem, you can call the initialize_approximate_diffuse method to avoid the 3600 linear system solving. However, even in these cases, you'll be multiplying 3600 x 3600 matrices together, which will be quite slow. I would like to update the SARIMAX class to work with sparse matrices (which would solve this problem) but that's probably quite a ways in the future. I don't know of any non-commercial program that implements state space models using sparse matrices.
Question #5:
This was a bug in the statsmodels code. It has been fixed in the repository (see https://github.com/ChadFulton/statsmodels/issues/2) | very high frequency time series analysis (seconds) and Forecasting (Python/R)
28,334 | very high frequency time series analysis (seconds) and Forecasting (Python/R) | Regarding Question #1:
Maybe you could try to do seasonal adjustment before you estimate an ARIMA model. I am not sure if it would help, but you could at least try - for example, a non-parametric seasonal adjustment procedure such as STL in R.
Regarding Question #4:
If you have an ACF graph with confidence intervals marked, it is no surprise that one or a few of the autocorrelation bars are sticking out (not fitting inside the interval). This can happen due to pure chance even if there is no autocorrelation in the population. Here is why: if you have a 90% confidence interval, then on average you may expect 10% (=100%-90%) of the bars in your graph to stick out of the confidence interval when the null hypothesis of no autocorrelation at any lag holds. That is "built into" the definition of a confidence interval.
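This expectation is easy to check by simulation; the sketch below is mine (using 95% bands rather than 90%) and counts how often sample-ACF bars of pure white noise stick outside the $\pm 1.96/\sqrt{n}$ band:

```python
import numpy as np

# White noise has zero population autocorrelation, yet about 5% of
# the sample ACF bars should still fall outside 95% confidence bands.
rng = np.random.default_rng(0)
n, nlags, reps = 500, 20, 200
band = 1.96 / np.sqrt(n)
outside = 0
for _ in range(reps):
    x = rng.standard_normal(n)
    x -= x.mean()
    denom = np.dot(x, x)
    acf = np.array([np.dot(x[:-k], x[k:]) / denom for k in range(1, nlags + 1)])
    outside += np.count_nonzero(np.abs(acf) > band)
frac_outside = outside / (reps * nlags)  # close to 0.05
```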
28,335 | very high frequency time series analysis (seconds) and Forecasting (Python/R) | I think you should take another look at the requirement for analysis at the "seconds" level. What purpose does it serve? For example, if the analysis is to mark out anomalies in the time series, does it give enough reaction time for the operations team to drill down, analyze and take corrective actions?
If this is for prediction purposes, does it help a user to predict a variable for the next n seconds?
Having worked in the operations analytics space, I can say that anything more granular than 15-20 minutes for predictions/classifications in time series data does more harm than good:
1. Higher granularity decreases the signal-to-noise ratio and your techniques will need to be highly robust to noise.
2. Doesn't serve much purpose (unless it is an AML kind of problem)
3. Puts enormous load on your hardware, especially if dealing with multivariate cases (for example, tracking 1000 metrics of an application and flagging abnormal conditions for the entire application)
28,336 | Survey Method on Personal Issues | It's an old (1965) and well-documented method called randomized response. It's used in various situations such as survey interviews or informal or ad hoc surveys conducted in classrooms or lecture halls. It's useful to think about the sample size that would be required to yield a given level of precision for the estimates one would obtain.
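A quick simulation (my sketch; the numbers are made up) shows how Warner's 1965 randomized response design recovers a sensitive proportion: each respondent answers the sensitive question truthfully with probability p and answers its negation otherwise, so only the randomized answer is ever observed:

```python
import numpy as np

# Warner's randomized response: the observed yes-rate is
# P(yes) = p*pi + (1-p)*(1-pi), which we invert to estimate pi.
rng = np.random.default_rng(0)
n, p, true_pi = 100_000, 0.7, 0.2

has_trait = rng.random(n) < true_pi   # the sensitive attribute (hidden)
truthful = rng.random(n) < p          # which card each respondent drew
says_yes = np.where(truthful, has_trait, ~has_trait)

lam = says_yes.mean()
pi_hat = (lam - (1 - p)) / (2 * p - 1)
```

The variance of `pi_hat` blows up as p approaches 1/2, which is exactly the sample-size consideration mentioned above.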
28,337 | Mixture of beta distributions: full example | Not all items are solved fully correctly in the question. I would recommend the following.
(0) The observations "y" do not need to be corrected as they are between 0 and 1 already. Applying the correction shouldn't create problems but it's not necessary either.
(1) cannot be answered by the likelihood ratio (LR) test. Generally in mixture models, the selection of the number of components cannot be based on the LR test because its regularity assumptions are not fulfilled. Instead, information criteria are often used, and "flexmix", upon which betamix() is based, offers AIC, BIC, and ICL. So you could choose the best BIC solution among 1, 2, 3 clusters via
library("flexmix")
set.seed(0)
m <- betamix(y ~ 1 | 1, data = d, k = 1:3)
(2) The parameters in betamix() are not mu and phi directly; additionally, link functions are employed for both parameters. The defaults are logit and log, respectively. This ensures that the parameters are in their valid ranges (0, 1) and (0, inf), respectively. One could refit the models in both components to get easier access to the links and inverse links etc. However, here it is probably easiest to apply the inverse links by hand:
mu <- plogis(coef(m)[,1])
phi <- exp(coef(m)[,2])
This shows that the means are very different (0.25 and 0.77) while the precisions are rather similar (49.4 and 47.8). Then we can transform back to alpha and beta which gives 12.4, 37.0 and 36.7, 11.1 which is reasonably close to the original parameters in the simulation:
a <- mu * phi
b <- (1 - mu) * phi
(3) The clusters can be extracted using the clusters() function. This simply selects the component with the highest posterior() probability. In this case, the posterior() is really clear-cut, i.e., either close to zero or close to 1.
cl <- clusters(m)
(4) When visualizing the data with histograms, one can either visualize both components separately, i.e., each with its own density function. Or one can draw one joint histogram with the corresponding joint density. The difference is that the latter needs to factor in the different cluster sizes: the prior weights are about 1/3 and 2/3 here. The separate histograms can be drawn like this:
## separate histograms for both clusters
hist(subset(d, cl == 1)$y, breaks = 0:25/25, freq = FALSE,
col = hcl(0, 50, 80), main = "", xlab = "y", ylim = c(0, 9))
hist(subset(d, cl == 2)$y, breaks = 0:25/25, freq = FALSE,
col = hcl(240, 50, 80), main = "", xlab = "y", ylim = c(0, 9), add = TRUE)
## lines for fitted densities
ys <- seq(0, 1, by = 0.01)
lines(ys, dbeta(ys, shape1 = a[1], shape2 = b[1]),
col = hcl(0, 80, 50), lwd = 2)
lines(ys, dbeta(ys, shape1 = a[2], shape2 = b[2]),
col = hcl(240, 80, 50), lwd = 2)
## lines for corresponding means
abline(v = mu[1], col = hcl(0, 80, 50), lty = 2, lwd = 2)
abline(v = mu[2], col = hcl(240, 80, 50), lty = 2, lwd = 2)
And the joint histogram:
p <- prior(m$flexmix)
hist(d$y, breaks = 0:25/25, freq = FALSE,
main = "", xlab = "y", ylim = c(0, 4.5))
lines(ys, p[1] * dbeta(ys, shape1 = a[1], shape2 = b[1]) +
p[2] * dbeta(ys, shape1 = a[2], shape2 = b[2]), lwd = 2)
The resulting figure is included below.
28,338 | Could it be shown statistically that cars are used as murder weapons? | This may be a long-shot (practically speaking), but if you could get your hands on the (victim, driver) pairs and had a decent social network search engine, you could calculate the "degrees of separation" between the driver and victim and then construct a null distribution of "degrees of separation" by assuming random assignment of driver and victim from the local population where the accident occurred (e.g., everyone within typical commuting distance). This would correct for the "small town" effect, where everyone has close ties to everyone else.
The key hypothesis is: do the actual driver/victim pairs have fewer degrees of separation than the population at large? If so, it means that either (a) close acquaintances are somehow "synched" in their movements about town [e.g., demographic stratification] or (b) at least some of the incidents involve an unusually large number of close acquaintances.
Another approach would be to do logistic regression with "degrees of separation" as the predictor and "probability of accident/victim pairing" on the y axis. A strongly increasing function would suggest a "closeness" effect.
You would need to corroborate this by seeing if any of the "high relation" pairs actually resulted in a homicide trial and compare it to the overall rate of homicide indictments for pedestrian collisions.
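A minimal sketch of the proposed null-distribution construction, with entirely invented numbers (nothing here comes from real accident data): `observed` holds degrees of separation for the actual pairs, and `population` holds separations for random driver/victim re-pairings from the same locale:

```python
import numpy as np

rng = np.random.default_rng(0)
observed = np.array([2, 1, 3, 2, 2, 4, 1, 3, 2, 2])   # hypothetical pairs
population = rng.integers(2, 7, size=10_000)          # hypothetical null draws

# Null distribution of the mean separation over 10 random pairings,
# and a one-sided p-value for "actual pairs are unusually close":
null_means = rng.choice(population, size=(5000, observed.size)).mean(axis=1)
p_value = np.mean(null_means <= observed.mean())
```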
28,339 | Misunderstanding of Monte Carlo Pi Estimation | The area of a circle of radius $l$ is equal to $\pi l^2$, so a quarter circle has area $l^2\pi/4$. The square whose side is the radius of the circle has $area=l^2$.
This means that the ratio between the area of a quarter of circle and the area of the square is $\pi/4$.
A point $(x,y)$ is in the square if $0<x<1,\ 0<y<1$,
and it is in the quarter circle if $0<x<1,\ 0<y<1,\ x^2+y^2<1$.
Your integral is thus $∬I((x^2+y^2)<1)P(x,y)= ∬I((x^2+y^2)<1) I(0<x<1)I(0<y<1)$, which is exactly the area described by a quarter of circle.
28,340 | Misunderstanding of Monte Carlo Pi Estimation | The simplest intuitive explanation relies on understanding that $E(I(A)) = P(A)$. Thus, $\int \int I(x^2+y^2 < 1)dxdy = P(x^2 + y^2 < 1)$. Once you realize the double integral is simply a probability, it should make intuitive sense that you could sample $x$ and $y$ from the unit square and compute the proportion of draws for which $x^2 + y^2 <1$.
Perhaps the other piece of intuition missing from your understanding is the connection between area and probability. Since the area of the entire unit square is 1 and points $(x,y)$ are uniformly distributed within the square, the area of any region $A$ within the unit square would correspond to the probability that a randomly chosen point would be within $A$.
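A one-line check of this identity (my own sketch, in Python): the sample proportion of uniform points inside the quarter circle is a Monte Carlo estimate of $P(x^2+y^2<1) = \pi/4$:

```python
import numpy as np

# E[I(A)] = P(A): the fraction of uniform points in the unit square
# landing inside the quarter circle estimates pi/4.
rng = np.random.default_rng(0)
x = rng.random(1_000_000)
y = rng.random(1_000_000)
pi_hat = 4 * np.mean(x**2 + y**2 < 1)
```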
28,341 | Misunderstanding of Monte Carlo Pi Estimation | I landed on this question while surfing CV, and I see that the Monte Carlo code is in Octave. I happen to have a simulation in R that makes the idea of deriving the number $\pi$ as a bivariate uniform distribution in the $[0,1]$ plane under the constraints of the integrals in the OP very intuitive:
Given that the quarter of a circle is enclosed in a 1-unit square, its area is $\pi/4$. So generating uniformly distributed points in the square $(x,y)$ will end up carpeting the entire square, and calculating the fraction fulfilling $\sqrt{x^2+y^2} < 1$ will be tantamount to integrating $∬\textbf{1}((x^2+y^2)<1) \,\textbf{1}(0<x<1)\,\textbf{1}(0<y<1)$ since we are just selecting the fraction of dots within the circle in relation to the unit square:
x <- runif(1e4); y <- runif(1e4)
radius <- sqrt(x^2 + y^2)
# Selecting those values within the circle is obtained with radius[radius < 1]:
(pi_hat <- 4 * length(radius[radius < 1]) / length(radius))
# [1] 3.1272
We can plot the values falling within the radius among 10,000 draws:
And we can, naturally, get a closer and closer approximation by selecting more points. With 1 million points we get:
x <- runif(1e6); y <- runif(1e6); radius <- sqrt(x^2 + y^2)
(pi_hat <- 4 * length(radius[radius < 1]) / length(radius))
# [1] 3.141644
a very close approximation. Here is the plot:
28,342 | Efficient evaluation of multidimensional kernel density estimate | I'm going to provide an (incomplete) answer here in case it helps anyone else out.
There are several recent mathematical methods for computing the KDE more efficiently. One is the Fast Gauss Transform, published in several studies including this one. Another is to use a tree-based approach (KD tree or ball tree) to work out which sources contribute to a given grid point. Unclear whether this has been published, but it is implemented in Scikit-learn and based on methods developed by Jake Vanderplas.
If these methods are a bit fiddly, it's possible to write something a bit more basic that achieves a similar task. I tried constructing a cuboid around each grid point, with side lengths related to the bandwidth in each of those dimensions. This doesn't allow great control of errors, though it does give you some speed up.
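A rough numpy sketch of that cuboid idea (mine, not the author's code): only sources within a few bandwidths of each grid point are summed, and with a generous cutoff the result is numerically very close to the exact Gaussian KDE:

```python
import numpy as np

def kde_truncated(grid, sources, h, cutoff=4.0):
    """Gaussian product-kernel KDE at each grid point, ignoring sources
    farther than cutoff*h in any dimension (the cuboid trick)."""
    n, d = sources.shape
    norm = n * (h * np.sqrt(2 * np.pi)) ** d
    out = np.empty(len(grid))
    for i, g in enumerate(grid):
        # keep only sources inside the cuboid around g
        near = sources[np.all(np.abs(sources - g) < cutoff * h, axis=1)]
        sq = np.sum((near - g) ** 2, axis=1)
        out[i] = np.exp(-sq / (2 * h ** 2)).sum() / norm
    return out

rng = np.random.default_rng(0)
pts = rng.standard_normal((2000, 2))
grid = rng.uniform(-2, 2, size=(50, 2))
approx = kde_truncated(grid, pts, h=0.3)                 # truncated sum
exact = kde_truncated(grid, pts, h=0.3, cutoff=np.inf)   # full sum
```

As the text notes, this gives some speed-up but only loose control of the error; the tree-based estimators control it more carefully.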
Finally, computing the KDE is quite easily parallelisable, either on multiple CPU cores or on a GPU. I'm considering implementing a KDE in CUDA, but haven't done that yet.
28,343 | Can MCMC iterations after burn in be used for density estimation? | You can - and people do - estimate densities from MCMC sampling.
One thing to keep in mind is that while histograms and KDEs are convenient, much more efficient estimates of the density may be available, at least in simple cases (such as Gibbs sampling).
If we consider Gibbs sampling in particular, the conditional density you're sampling from can be used in place of the sample value itself in producing an averaged estimate of the density. The result tends to be quite smooth.
The approach is discussed in
Gelfand and Smith (1990),
"Sampling-Based Approaches to Calculating Marginal Densities"
Journal of the American Statistical Association, Vol. 85, No. 410, pp. 398-409
(though Geyer cautions that if the sampler dependence is high enough it doesn't always reduce the variance and gives conditions for it to do so)
This approach is also discussed, for example, in
Robert, C. P. and Casella, G. (1999)
Monte Carlo Statistical Methods.
You don't need independence, you're actually computing an average. If you want to compute a standard error of a density estimate (or a cdf), then you have to account for the dependence.
The same notion applies to other expectations, of course, and so it can be used to improve estimates of many other kinds of average.
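A small sketch of the Gelfand-Smith idea (my own toy example, not from the answer): Gibbs-sample a bivariate normal and estimate the marginal density of $x$ by averaging the known conditional $p(x \mid y_i)$ over the draws, rather than histogramming the $x$ draws:

```python
import numpy as np

# Gibbs sampler for a bivariate normal with correlation rho:
# x | y ~ N(rho*y, 1-rho^2) and y | x ~ N(rho*x, 1-rho^2).
rho, n = 0.8, 20_000
s = np.sqrt(1 - rho**2)
rng = np.random.default_rng(1)
x = y = 0.0
ys = np.empty(n)
for i in range(n):
    x = rng.normal(rho * y, s)
    y = rng.normal(rho * x, s)
    ys[i] = y

# Rao-Blackwellized density estimate: average p(x | y_i) over the draws.
grid = np.linspace(-3, 3, 61)
dens = np.exp(-(grid[:, None] - rho * ys) ** 2 / (2 * s**2)).mean(axis=1) / (
    s * np.sqrt(2 * np.pi)
)
# The true marginal of x is standard normal, so dens should track its pdf
# quite smoothly even with a modest number of (dependent) draws.
```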
28,344 | Can MCMC iterations after burn in be used for density estimation? | Summary
You can directly use the MCMC iterations for anything because the average value of your observable will asymptotically approach the true value (because you are after the burn-in).
However, bear in mind that the variance of this average is influenced by the correlations between samples. This means that if the samples are correlated, as is common in MCMC, storing every measurement will not bring any real advantage.
In theory, you should measure only every N steps, where N is of the order of the autocorrelation time of the observable you are measuring.
Detailed explanation
Let's define some notation to formally answer your question. Let $x_t$ be the state of your MCMC simulation at time $t$, assumed much higher than the burn-in time. Let $f$ be the observable you want to measure.
For example, $x_t \in \mathbb{R}$, and $f=f_a(x)$: "1 if $x\in[a,a+\Delta]$, 0 else". $x_t$ is naturally being drawn from a distribution $P(x)$, which you do using MCMC.
In any sampling, you will always need to compute an average of an observable $f$, which you do using an estimator:
$$F = \frac{1}{N}\sum_{i=1}^N f(x_i)$$
We see that the average value of this estimator $\langle F\rangle$ (with respect to $P(x)$) is
$$\langle F \rangle = \frac{1}{N}\sum_{i=1}^N \langle f(x_i)\rangle = \langle f(x)\rangle$$
which is what you want to obtain.
The main concern is that when you compute the variance of this estimator, $\langle F^2 \rangle - \langle F \rangle^2$, you will obtain terms of the form
$$\sum_{i=1}^N\sum_{j=1}^N \langle f(x_i)f(x_j)\rangle$$
which do not cancel out if the $x_t$ are correlated samples. Moreover, because you can write $j=i+\Delta$, you can write the above double sum as a sum of the autocorrelation function of $f$, $R(\Delta)$.
So, to recap:
If computationally it does not cost anything to store every measurement, you can do it, but bear in mind that the variance cannot be computed using the usual formula.
If it is computationally expensive to measure at each step of your MCMC, you have to find a way to estimate the cumulative autocorrelation time $\tau$ and perform measurements only every $\tau$ steps. In this case, the measurements are approximately independent and thus you can use the usual formula for the variance.
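To make the recap concrete, here is a sketch of mine using an AR(1) chain as a stand-in for correlated MCMC output: the naive iid standard-error formula understates the uncertainty, while batch means (with batches much longer than the autocorrelation time) accounts for it:

```python
import numpy as np

# AR(1): x_t = phi*x_{t-1} + eps_t; its integrated autocorrelation
# time is roughly (1 + phi) / (1 - phi) = 19 for phi = 0.9.
phi, n = 0.9, 200_000
rng = np.random.default_rng(2)
eps = rng.standard_normal(n)
x = np.empty(n)
x[0] = eps[0]
for t in range(1, n):
    x[t] = phi * x[t - 1] + eps[t]

naive_se = x.std(ddof=1) / np.sqrt(n)  # pretends the draws are iid

# Batch means: standard error from the spread of long-batch averages.
nb = 200  # 200 batches of length 1000 >> autocorrelation time
batch_se = x.reshape(nb, -1).mean(axis=1).std(ddof=1) / np.sqrt(nb)
# batch_se comes out several times larger than naive_se, as predicted.
```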
28,345 | Strange way of calculating chi-squared in Excel vs R | This turns out to be quite straightforward.
This is clearly binomial sampling. There are two ways to look at it.
Method 1, that of the spreadsheet, is to treat the observed counts $X_i$ as $\sim \text{Bin}(N_i,p_i)$, which may be approximated as $\text{N}(\mu_i=N_i\cdot p_i,\sigma_i^2=N_i\cdot p_i(1-p_i))$. As such, $Z_i=(X_i-\mu_i)/\sigma_i$ are approximately standard normal, and the $Z$'s are independent, so (approximately) $\sum_i Z_i^2\sim \chi^2_n$, where $n$ is the number of rows (here $n=4$).
(If the p's are based off observed counts, then the $Z$'s aren't independent, but it's still chi-square with one fewer degree of freedom.)
Method 2: your use of the $(O-E)^2/E$ form of chi-square also works, but it requires that you take account not only of those in the category you have labelled 'Observed' but also those not in that category:
+------------+------+-------+
| Population | In A | Not A |
+------------+------+-------+
| 2000 | 42 | 1958 |
| 2000 | 42 | 1958 |
| 2000 | 25 | 1975 |
| 2000 | 21 | 1979 |
+------------+------+-------+
Where the $E$'s for the first column are as you have them, and those for the second column are $N_i(1-p_i)$
... and then sum $(O-E)^2/E$ over both columns.
The two forms are algebraically equivalent. Note that $1/p + 1/(1-p) = \frac{1}{p(1-p)}$. Consider the i$^{th}$ row of the chi-square:
\begin{eqnarray}
\frac{(X_i - \mu_i)^2}{\sigma_i^2}
&=& \frac{(X_i- N_ip_i)^2}{N_ip_i(1-p_i)}\\
&=& \frac{(X_i- N_ip_i)^2}{N_ip_i} +\frac{(X_i- N_ip_i)^2}{N_i(1-p_i)}\\
&=& \frac{(X_i- N_ip_i)^2}{N_ip_i} +\frac{(N_i-N_i+N_ip_i-X_i)^2}{N_i(1-p_i)}\\
&=& \frac{(X_i- N_ip_i)^2}{N_ip_i} +\frac{(N_i-X_i-(N_i-N_ip_i))^2}{N_i(1-p_i)}\\
&=& \frac{(X_i- N_ip_i)^2}{N_ip_i} +\frac{((N_i-X_i)-N_i(1-p_i))^2}{N_i(1-p_i)}\\
&=& \frac{(O^{(A)}_i- E^{(A)}_i)^2}{E^{(A)}_i} +\frac{(O^{(\bar A)}_i-E^{(\bar A)}_i)^2}{E^{(\bar A)}_i}
\end{eqnarray}
Which means you should get the same answer both ways, up to rounding error.
Let's see:
Observed Expected (O-E)^2/E
Ni A not A A not A A not A
2000 42 1958 32.5 1967.5 2.776923077 0.045870394
2000 42 1958 32.5 1967.5 2.776923077 0.045870394
2000 25 1975 32.5 1967.5 1.730769231 0.028589581
2000 21 1979 32.5 1967.5 4.069230769 0.067217281
Sum 11.35384615 0.187547649
Chi-square = 11.353846 + 0.187548 = 11.54139
Which matches their answer.
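A quick R check of the algebraic equivalence, using the numbers from the tables above (my own addition; here $p$ is the hypothesised proportion, so $E = N\cdot p = 32.5$):

O <- c(42, 42, 25, 21)   # observed counts "in A"
N <- rep(2000, 4)        # group sizes
p <- 32.5 / 2000

# Method 1: sum of squared z-scores from the normal approximation to the binomial
X1 <- sum((O - N * p)^2 / (N * p * (1 - p)))

# Method 2: (O - E)^2 / E summed over BOTH columns ("in A" and "not A")
X2 <- sum((O - N * p)^2 / (N * p)) +
      sum(((N - O) - N * (1 - p))^2 / (N * (1 - p)))

c(X1, X2)   # both equal 11.54139 (up to rounding), matching the worked table
pchisq(X1, df = 4, lower.tail = FALSE)   # p-value, df = 4 when the p_i are specified in advance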
28,346 | Explaining the beveridge nelson decomposition | The Beveridge–Nelson decomposition is a decomposition of an $ARIMA(p,1,q)$ process. Such a process has a unit root:
$$y_t=y_{t-1}+u_{t},$$
but $u_t$ is not a white noise process, it is an $ARMA(p,q)$ process. What Beveridge and Nelson in their original article observed is that it is possible to decompose this process into two parts:
$$y_t=\tau_t+\xi_t,$$
where $\tau_t$ is now a "pure" random walk, i.e. $\tau_t=\tau_{t-1}+\varepsilon_t$, where $\varepsilon_t$ is a white noise process. The term $\xi_t$ is a stationary process. This decomposition is an algebraic identity (the details below), but it can lead to interesting interpretations.
The precise statement. Let $u_t=\sum_{j=0}^\infty \psi_{j}\varepsilon_{t-j}$, where $\varepsilon_t$ is a white noise process and $\sum j|\psi_j|<\infty$. Then
$$u_1+...+u_t=\psi(1)(\varepsilon_1+...+\varepsilon_t)+\eta_t-\eta_0,$$
where
$$\psi(1)=\sum_{j=0}^\infty\psi_j,\quad \eta_t=\sum_{j=0}^\infty\alpha_j\varepsilon_{t-j},\quad \alpha_j=-(\psi_{j+1}+\psi_{j+2}+...), \quad \sum|\alpha_j|<\infty.$$
This decomposition has nice application, for example
$$\frac{1}{\sqrt{T}}\sum_{t=1}^Tu_{t}=\frac{1}{\sqrt{T}}\psi(1)\sum_{t=1}^T\varepsilon_t+\frac{1}{\sqrt{T}}(\eta_T-\eta_0)\to N(0,[\psi(1)\sigma]^2),$$
where we apply the central limit theorem to the first term and observe that the second term goes to zero, due to stationarity (its mean is zero and its variance goes to zero, because of the $T$ in the denominator).
So we get that the limiting behaviour of an ARIMA(p,1,q) process is simply the same as that of an ARIMA(0,1,0) process. This fact is used a lot in the time series literature. For example, the Phillips–Perron unit root test is based on it.
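A quick R simulation (my own sketch, not part of the original answer) illustrates the application above for an ARIMA(0,1,1) process, where $u_t=\varepsilon_t+\theta\varepsilon_{t-1}$ and hence $\psi(1)=1+\theta$:

set.seed(42)
theta <- 0.5; sigma <- 1; T <- 1e4; B <- 2000
z <- replicate(B, {
  e <- rnorm(T + 1, sd = sigma)      # white noise
  u <- e[-1] + theta * e[-(T + 1)]   # MA(1) increments u_t
  sum(u) / sqrt(T)                   # T^(-1/2) * (u_1 + ... + u_T)
})
sd(z)   # should be close to psi(1)*sigma = (1 + theta)*sigma = 1.5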
28,347 | How to know if my data fits Pareto distribution? | (PS) First of all I think Glen_b is right in his above comments on the usefulness of such a test: real data are surely not exactly Pareto distributed, and for most practical applications the question would be "how good is the Pareto approximation?" – and the QQ plot is a good way to show the quality of such an approximation.
Anyway, you can do your test with the Kolmogorov–Smirnov statistic, after estimating the parameters by maximum likelihood. This parameter estimation prevents using the $p$-value from ks.test, so you can do a parametric bootstrap to estimate it. As Glen_b tells in the comment, this can be connected to the Lilliefors test.
Here are a few lines of R code.
First define the basic functions to deal with Pareto distributions.
# distribution, cdf, quantile and random functions for Pareto distributions
dpareto <- function(x, xm, alpha) ifelse(x > xm , alpha*xm**alpha/(x**(alpha+1)), 0)
ppareto <- function(q, xm, alpha) ifelse(q > xm , 1 - (xm/q)**alpha, 0 )
qpareto <- function(p, xm, alpha) ifelse(p < 0 | p > 1, NaN, xm*(1-p)**(-1/alpha))
rpareto <- function(n, xm, alpha) qpareto(runif(n), xm, alpha)
The following function computes the MLE of the parameters (justifications in Wikipedia).
pareto.mle <- function(x)
{
xm <- min(x)
alpha <- length(x)/(sum(log(x))-length(x)*log(xm))
return( list(xm = xm, alpha = alpha))
}
And this function computes the KS statistic and uses a parametric bootstrap to estimate the $p$-value.
pareto.test <- function(x, B = 1e5)
{
a <- pareto.mle(x)
# KS statistic
D <- ks.test(x, function(q) ppareto(q, a$xm, a$alpha))$statistic
# estimating p value with parametric bootstrap
n <- length(x)
emp.D <- numeric(B)
for(b in 1:B)
{
xx <- rpareto(n, a$xm, a$alpha);
aa <- pareto.mle(xx)
emp.D[b] <- ks.test(xx, function(q) ppareto(q, aa$xm, aa$alpha))$statistic
}
return(list(xm = a$xm, alpha = a$alpha, D = D, p = sum(emp.D > D)/B))
}
Now, for example, a sample coming from a Pareto distribution:
> # generating 100 values from Pareto distribution
> x <- rpareto(100, 0.5, 2)
> pareto.test(x)
$xm
[1] 0.5007593
$alpha
[1] 2.080203
$D
D
0.06020594
$p
[1] 0.69787
...and from a $\chi^2(2)$:
> # generating 100 values from chi square distribution
> x <- rchisq(100, df=2)
> pareto.test(x)
$xm
[1] 0.01015107
$alpha
[1] 0.2116619
$D
D
0.4002694
$p
[1] 0
Note that I do not claim that this test is unbiased: when the sample is small, some bias can exist. The parametric bootstrap doesn’t take well into account the uncertainty on the parameter estimation (think of what would happen when using this strategy to test naively whether the mean of some normal variable with unknown variance is zero).
PS Wikipedia says a few words about this. Here are two other questions for which a similar strategy was suggested: Goodness of fit test for a mixture, goodness of fit test for a gamma distribution.
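As an aside (my own addition), the QQ plot mentioned at the top is easy to produce with the functions already defined:

pareto.qq <- function(x)
{
  a <- pareto.mle(x)
  n <- length(x)
  # fitted theoretical quantiles at the plotting positions (i - 1/2)/n
  q.theo <- qpareto(((1:n) - 0.5)/n, a$xm, a$alpha)
  plot(q.theo, sort(x), log = "xy",
       xlab = "theoretical quantiles", ylab = "sample quantiles")
  lines(q.theo, q.theo)   # the y = x reference line
}
pareto.qq(rpareto(100, 0.5, 2))   # points should hug the line for Pareto data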
28,348 | SMOTE throws error for multi class imbalance problem | I have encountered a similar problem, and I solved it by converting the class variable ("status" in your case) to a factor. After running data$status = factor(data$status), newData prints as follows:
looking risk every status
7 0 0 0 1
2 0 0 0 1
7.1 0 0 0 1
12 0 0 0 1
4 0 0 0 1
12.1 0 0 0 1
11 0 0 0 3
8 NA NA NA 3
9 NA NA NA 3
10 NA NA NA 3
111 NA NA NA 3
121 NA NA NA 3
13 NA NA NA 3
No errors!
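For reference, a minimal sketch of the fix (my own addition, assuming the DMwR package's SMOTE() and the data frame and status column from the question; the percentage arguments are purely illustrative):

library(DMwR)   # one R package that provides SMOTE()

# SMOTE() requires the class column to be a factor, not numeric
data$status <- factor(data$status)

newData <- SMOTE(status ~ ., data, perc.over = 600, perc.under = 100)
table(newData$status)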
28,349 | SVM confidence according to distance from hyperline | It's actually possible to get probabilities out of a Support Vector Machine, which might be more useful and interpretable than an arbitrary "score" value. There are a few approaches for doing this: one reasonable place to start is Platt (1999).
Most SVM packages/libraries implement something like this (for example, the -b 1 option causes LibSVM to produce probabilities). If you're going to roll your own, you should be aware that there are some potential numerical issues, summarized in this note by Lin, Lin, and Weng (2007). They also provide some pseudocode, which might be helpful too.
Edit in response to your comment:
It's somewhat unclear to me why you'd prefer a score to a probability, especially since you can get the probability with minimal extra effort. All that said, most of the probability calculations seem like they're derived from the distance between the point and the hyperplane. If you look at Section 2 of the Platt paper, he walks through the motivation and says:
The class conditional densities between the margins are apparently exponential. Bayes' rule on two exponentials suggests using a parametric form of a sigmoid:
$$ P(y=1 | f) = \frac{1}{1+\exp(Af+B)}$$
This sigmoid model is equivalent to assuming that the output of the SVM is proportional to the log-likelihood of a positive training example. [MK: $f$ was defined elsewhere to be the raw SVM output].
The rest of the method section describes how to fit the $A$ and $B$ parameters of that sigmoid. In the introduction (Sections 1.0 and 1.1), Platt reviews a few other approaches by Vapnik, Wahba, and Hastie & Tibshirani. These methods also use something like the distance to the hyperplane, manipulated in various ways. These all seem to suggest that the distance to the hyperplane contains some useful information, so I guess you could use the raw distance as some (non-linear) measure of confidence.
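As a concrete illustration (my own sketch using the e1071 R interface to LibSVM, not part of the original answer), both the raw decision values and the Platt-style probabilities can be requested at prediction time:

library(e1071)   # R interface to LibSVM

# Fit an SVM with probability estimates enabled (LibSVM's -b 1 behind the scenes)
fit <- svm(Species ~ ., data = iris, probability = TRUE)

pred <- predict(fit, iris, decision.values = TRUE, probability = TRUE)
head(attr(pred, "decision.values"))   # signed distances (scores) to the hyperplanes
head(attr(pred, "probabilities"))     # Platt-calibrated class probabilities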
28,350 | SVM confidence according to distance from hyperline | If the training dataset is reasonably balanced and has standardized features, I will take the SVM scores as the measure of confidence in belonging to the respective classes. The so-called calibration methods that convert the scores to probability-like quantities, such as Platt scaling, usually use monotone functions (like the logistic function) to map the scores to probabilities. Hence, if you only want to compare the confidence levels of a learned SVM model in a particular test datapoint belonging to the possible classes, you can just compare the score values (not their absolute values), given that the training dataset from which the model is learned is fairly balanced and does not have any unusual quirk.
28,351 | Logistic regression-like model for non-discrete outcomes | If you have "continuous" (seemingly, as they could still be discrete) values in between 0 and 1 there are at least two cases:
They came from a number of independent binary trials and the "continuous" value is the number of successes divided by trials. Then a binomial GLM might be appropriate. In this case you need to fit it in R as glm(cbind(numberSuccesses,numberFailures)~x,family=binomial)
If that is not the case, then you might have something for which a Beta Model might be more appropriate. The link I provided shows how to do that in R.
Note that in R glm(y~x,family=binomial) with a "continuous" $y$ will throw a warning and in general the result will not be the same as in the case with number of successes and trials:
set.seed(1)
successes<-sample(1:10,100,replace=TRUE)
x<-1:100
n<-12
failures<-n-successes
summary(glm(cbind(successes,failures)~x,family=binomial))
Call:
glm(formula = cbind(successes, failures) ~ x, family = binomial)
Deviance Residuals:
Min 1Q Median 3Q Max
-2.8197 -0.9434 0.0454 0.9358 2.4921
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.24622 0.11349 -2.17 0.03 *
x 0.00080 0.00195 0.41 0.68
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 134.99 on 99 degrees of freedom
Residual deviance: 134.82 on 98 degrees of freedom
AIC: 422.2
Number of Fisher Scoring iterations: 3
but
props<-successes/n
summary(glm(props~x,family=binomial))
Call:
glm(formula = props ~ x, family = binomial)
Deviance Residuals:
Min 1Q Median 3Q Max
-0.852 -0.282 -0.105 0.394 0.760
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) -0.134339 0.403836 -0.33 0.74
x 0.000281 0.006941 0.04 0.97
(Dispersion parameter for binomial family taken to be 1)
Null deviance: 20.888 on 99 degrees of freedom
Residual deviance: 20.887 on 98 degrees of freedom
AIC: 141.3
Number of Fisher Scoring iterations: 3
Warning message:
In eval(expr, envir, enclos) : non-integer #successes in a binomial glm!
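If the proportions are genuinely continuous rather than counts over trials, the beta-model route of point 2 can be sketched as follows (my own addition, assuming the betareg package; props and x as in the code above):

library(betareg)   # beta regression for responses in (0, 1)

# betareg() needs responses strictly inside the unit interval;
# squeeze any exact 0s or 1s slightly inward first
eps <- 1e-3
y.beta <- pmin(pmax(props, eps), 1 - eps)
summary(betareg(y.beta ~ x))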
28,352 | Confused about confidence interval | This is a great question because it explores the possibility of alternative procedures and asks us to think about why and how one procedure might be superior to another.
The short answer is that there are infinitely many ways we might devise a procedure to obtain a lower confidence limit for the mean, but some of these are better and some are worse (in a sense that is meaningful and well-defined). Option 2 is an excellent procedure, because a person using it would need to collect less than half as much data as a person using Option 1 in order to obtain results of comparable quality. Half as much data typically means half the budget and half the time, so we're talking about a substantial and economically important difference. This supplies a concrete demonstration of the value of statistical theory.
Rather than rehash the theory, of which many excellent textbook accounts exist, let's quickly explore three lower confidence limit (LCL) procedures for $n$ independent normal variates of known standard deviation. I chose three natural and promising ones suggested by the question. Each of them is determined by a desired confidence level $1-\alpha$:
Option 1a, the "min" procedure. The lower confidence limit is set equal to $t_{\min} = \min(X_1, X_2, \ldots, X_n) - k^{\min}_{\alpha, n, \sigma} \sigma$. The value of the number $k^{\min}_{\alpha, n, \sigma}$ is determined so that the chance that $t_{\min}$ will exceed the true mean $\mu$ is just $\alpha$; that is, $\Pr(t_{\min} \gt \mu) = \alpha$.
Option 1b, the "max" procedure. The lower confidence limit is set equal to $t_{\max} = \max(X_1, X_2, \ldots, X_n) - k^{\max}_{\alpha, n, \sigma} \sigma$. The value of the number $k^{\max}_{\alpha, n, \sigma}$ is determined so that the chance that $t_{\max}$ will exceed the true mean $\mu$ is just $\alpha$; that is, $\Pr(t_{\max} \gt \mu) = \alpha$.
Option 2, the "mean" procedure. The lower confidence limit is set equal to $t_\text{mean} = \text{mean}(X_1, X_2, \ldots, X_n) - k^\text{mean}_{\alpha, n, \sigma} \sigma$. The value of the number $k^\text{mean}_{\alpha, n, \sigma}$ is determined so that the chance that $t_\text{mean}$ will exceed the true mean $\mu$ is just $\alpha$; that is, $\Pr(t_\text{mean} \gt \mu) = \alpha$.
As is well known, $k^\text{mean}_{\alpha, n, \sigma} = z_\alpha/\sqrt{n}$ where $\Phi(z_\alpha) = 1-\alpha$; $\Phi$ is the cumulative probability function of the standard Normal distribution. This is the formula cited in the question. A mathematical shorthand is
$k^\text{mean}_{\alpha, n, \sigma} = \Phi^{-1}(1-\alpha)/\sqrt{n}.$
The formulas for the min and max procedures are less well known but easy to determine:
$k^\text{min}_{\alpha,n,\sigma} = \Phi^{-1}(1-\alpha^{1/n})$.
$k^\text{max}_{\alpha, n, \sigma} = \Phi^{-1}((1-\alpha)^{1/n})$.
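Both formulas follow from one line of algebra apiece, because $t_{\min} \gt \mu$ happens exactly when every observation exceeds $\mu + k\sigma$, while $t_{\max} \gt \mu$ happens when at least one does:

$$\Pr(t_{\min} \gt \mu) = \Pr\left(\text{all } X_i \gt \mu + k\sigma\right) = \left(1 - \Phi(k)\right)^n = \alpha \implies k^\text{min}_{\alpha,n,\sigma} = \Phi^{-1}\left(1-\alpha^{1/n}\right),$$

$$\Pr(t_{\max} \gt \mu) = 1 - \Pr\left(\text{all } X_i \le \mu + k\sigma\right) = 1 - \Phi(k)^n = \alpha \implies k^\text{max}_{\alpha,n,\sigma} = \Phi^{-1}\left((1-\alpha)^{1/n}\right).$$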
By means of a simulation, we can see that all three formulas work. The following R code conducts the experiment n.trials separate times and reports all three LCLs for each trial:
simulate <- function(n.trials=100, alpha=.05, n=5) {
z.min <- qnorm(1-alpha^(1/n))
z.mean <- qnorm(1-alpha) / sqrt(n)
z.max <- qnorm((1-alpha)^(1/n))
f <- function() {
x <- rnorm(n);
c(max=max(x) - z.max, min=min(x) - z.min, mean=mean(x) - z.mean)
}
replicate(n.trials, f())
}
(The code does not bother to work with general normal distributions: because we are free to choose the units of measurement and the zero of the measurement scale, it suffices to study the case $\mu=0$, $\sigma=1$. That is why none of the formulas for the various $k^*_{\alpha,n,\sigma}$ actually depend on $\sigma$.)
10,000 trials will provide sufficient accuracy. Let's run the simulation and calculate the frequency with which each procedure fails to produce a confidence limit less than the true mean:
set.seed(17)
sim <- simulate(10000, alpha=.05, n=5)
apply(sim > 0, 1, mean)
The output is
max min mean
0.0515 0.0527 0.0520
These frequencies are close enough to the stipulated value of $\alpha=.05$ that we can be satisfied all three procedures work as advertised: each one of them produces a 95% lower confidence limit for the mean.
(If you're concerned that these frequencies differ slightly from $.05$, you can run more trials. With a million trials, they come even closer to $.05$: $(0.050547, 0.049877, 0.050274)$.)
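As an independent cross-check outside R, here is the same experiment in a short NumPy sketch (Python used purely for illustration; the variable names are my own):

```python
import numpy as np
from statistics import NormalDist

# Same setup as the R simulation: mu = 0, sigma = 1, n = 5, alpha = 0.05.
nd = NormalDist()
alpha, n, trials = 0.05, 5, 100_000
z_min  = nd.inv_cdf(1 - alpha ** (1 / n))
z_mean = nd.inv_cdf(1 - alpha) / n ** 0.5
z_max  = nd.inv_cdf((1 - alpha) ** (1 / n))

rng = np.random.default_rng(17)
x = rng.standard_normal((trials, n))
lcl = {
    "min":  x.min(axis=1)  - z_min,
    "mean": x.mean(axis=1) - z_mean,
    "max":  x.max(axis=1)  - z_max,
}
# Frequency with which each LCL overshoots the true mean; all three
# should land near alpha = 0.05.
failure = {name: float((v > 0).mean()) for name, v in lcl.items()}
```

The failure rates agree with the R run to within simulation error.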
However, one thing we would like about any LCL procedure is that not only should it be correct the intended proportion of time, but it should tend to be close to correct. For instance, imagine a (hypothetical) statistician who, by virtue of a deep religious sensibility, can consult the Delphic oracle (of Apollo) instead of collecting the data $X_1, X_2, \ldots, X_n$ and doing an LCL computation. When she asks the god for a 95% LCL, the god will just divine the true mean and tell that to her--after all, he's perfect. But, because the god does not wish to share his abilities fully with mankind (which must remain fallible), 5% of the time he will give an LCL that is $100\sigma$ too high. This Delphic procedure is also a 95% LCL--but it would be a scary one to use in practice due to the risk of it producing a truly horrible bound.
We can assess how accurate our three LCL procedures tend to be. A good way is to look at their sampling distributions: equivalently, histograms of many simulated values will do as well. Here they are. First though, the code to produce them:
dx <- -min(sim)/12
breaks <- seq(from=min(sim), to=max(sim)+dx, by=dx)
par(mfcol=c(1,3))
tmp <- sapply(c("min", "max", "mean"), function(s) {
hist(sim[s,], breaks=breaks, col="#70C0E0",
main=paste("Histogram of", s, "procedure"),
yaxt="n", ylab="", xlab="LCL");
hist(sim[s, sim[s,] > 0], breaks=breaks, col="Red", add=TRUE)
})
They are shown on identical x axes (but slightly different vertical axes). What we are interested in are
The red portions to the right of $0$--whose areas represent the frequency with which the procedures fail to underestimate the mean--are all about equal to the desired amount, $\alpha=.05$. (We had already confirmed that numerically.)
The spreads of the simulation results. Evidently, the rightmost histogram is narrower than the other two: it describes a procedure that indeed underestimates the mean (equal to $0$) fully $95$% of the time, but even when it does, that underestimate is almost always within $2 \sigma$ of the true mean. The other two histograms have a propensity to underestimate the true mean by a little more, out to about $3\sigma$ too low. Also, when they overestimate the true mean, they tend to overestimate it by more than the rightmost procedure. These qualities make them inferior to the rightmost histogram.
The rightmost histogram describes Option 2, the conventional LCL procedure.
One measure of these spreads is the standard deviation of the simulation results:
> apply(sim, 1, sd)
max min mean
0.673834 0.677219 0.453829
These numbers tell us that the max and min procedures have equal spreads (of about $0.68$) and the usual, mean, procedure has only about two-thirds their spread (of about $0.45$). This confirms the evidence of our eyes.
The squares of the standard deviations are the variances, equal to $0.45$, $0.46$, and $0.21$, respectively. The variances can be related to the amount of data: if one analyst recommends the max (or min) procedure, then in order to achieve the narrow spread exhibited by the usual procedure, their client would have to obtain $0.45/0.21$ times as much data--over twice as much. In other words, by using Option 1, you would be paying more than twice as much for your information as by using Option 2.
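Because subtracting the constant $k^*_{\alpha,n,\sigma}\sigma$ does not change a statistic's variance, the "over twice as much data" figure can be checked directly from the spreads of the three raw statistics. A NumPy sketch of that check (mine, not part of the original answer):

```python
import numpy as np

# Spreads of min, mean (and, by symmetry, max) of n = 5 standard normals;
# the additive constants in the LCL formulas do not affect the variances.
rng = np.random.default_rng(0)
x = rng.standard_normal((200_000, 5))

var_min  = x.min(axis=1).var()   # about 0.45
var_mean = x.mean(axis=1).var()  # about 0.20 (exactly 1/5 in expectation)
ratio = var_min / var_mean       # about 2.2: the extra data the min procedure needs
```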
28,353 | Confused about confidence interval | The first option does not take account of the reduced variance that you get from the sample. The first option gives you five lower 95% confidence bounds for the mean, each based on a sample of size 1. Combining them by averaging does not create a bound that you can interpret as a lower 95% bound. No one would do that. The second option is what is done. The average of the five independent observations has a variance smaller by a factor of 5 than the variance for a single sample. It therefore gives you a much better lower bound than any of the five you calculated the first way.
Also if the X$_i$ can be assumed to be iid normal then T will be normal.
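The coverage gap between the two options is easy to verify numerically. In this hedged Python sketch (true mean $0$, $\sigma = 1$ known), averaging five single-observation 95% bounds yields a bound that almost never exceeds the mean--far more conservative than a 95% bound--while the bound built from the five-observation average fails about 5% of the time, as designed:

```python
import numpy as np
from statistics import NormalDist

z = NormalDist().inv_cdf(0.95)         # about 1.645; sigma = 1 assumed known
rng = np.random.default_rng(7)
x = rng.standard_normal((100_000, 5))  # samples of size 5, true mean 0

avg_of_bounds = (x - z).mean(axis=1)            # Option 1: average of 5 one-sample bounds
bound_of_avg  = x.mean(axis=1) - z / 5 ** 0.5   # Option 2: bound from the sample mean

fail1 = float((avg_of_bounds > 0).mean())  # nearly 0, so not a 95% bound
fail2 = float((bound_of_avg  > 0).mean())  # close to 0.05, as intended
```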
28,354 | Correcting for normally distributed clock imprecision | Clock synchronization issues could indeed cause the peak to be shifted to the right. The following simulation in R shows this phenomenon. I used exponential times and normal clock differences to get a shape that roughly resembles your picture:
The distribution to the left (the actual differences, measured without error) has its peak at 0, whereas the distribution to the right (differences measured with error) has its peak around 100.
R-code:
set.seed(20120904)
# Generate exponential time differences:
x<-rexp(100000,1/900)
# Generate normal clock differences:
y<-rnorm(100000,0,50)
# Resulting observations:
xy<-x+y
# Truncate at 500:
xy<-xy[xy<=500]
# Plot histograms:
par(mfrow=c(1,2))
hist(x[x<=500],breaks=100,col="blue",main="Actual differences")
hist(xy,breaks=100,col="blue",main="Observed differences")
lines(c(0,0),c(0,550),col="red")
If the clock differences are normal with mean 0 the differences should cancel out in the sense that the mean of the observed differences should equal that of the actual differences. Whether this is the case depends on whether there is a systematic difference between the computers where the first event occurs and the computers where the second event occurs.
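The "means cancel" claim in the last paragraph can be checked numerically as well; here is a Python/NumPy sketch using the same parameters as the R simulation (my own code, for illustration):

```python
import numpy as np

# Zero-mean clock noise shifts the peak of the observed differences
# but leaves their mean equal to that of the actual differences.
rng = np.random.default_rng(1)
actual   = rng.exponential(scale=900.0, size=200_000)  # true time differences
noise    = rng.normal(0.0, 50.0, size=200_000)         # clock error, mean 0
observed = actual + noise

gap = observed.mean() - actual.mean()  # near 0, up to sampling error
```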
28,355 | randomForest and variable importance bug? | No, this isn't a bug. The values given in fit$importance are unscaled, while the values given by importance(fit) are expressed in terms of standard deviations (as given by fit$importanceSD). This is usually a more meaningful measure. If you want the "raw" values, you can use importance(fit, scale=FALSE).
In general, it's a very bad idea to rely on the internal details of a fit object, when there's an extractor function provided. There are no guarantees as to the contents of fit$importance - they could change drastically from version to version without notice. You should always use the extractor function when it's provided.
Edit: Yes, that line in rfcv() does look like a bug, or at least unintended behaviour. It's actually quite a good example of why you shouldn't rely on the contents of things like fit$importance. If the fit is for a regression forest, the first column of fit$importance is %IncMSE, equivalent to importance(fit, type=1). However, this doesn't hold in the classification case, where you have extra columns for each factor level.
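To make the scaled-vs-raw distinction concrete, here is a toy numeric sketch of the relationship the answer describes--the scaled figure divides the raw mean decrease by its standard error (the role played by fit$importanceSD). The numbers are invented and this is not randomForest's actual code:

```python
import numpy as np

# Hypothetical per-tree permutation-importance decreases for one predictor.
per_tree = np.array([0.8, 1.1, 0.9, 1.3, 0.7, 1.2])

raw    = per_tree.mean()                                # "unscaled" importance
se     = per_tree.std(ddof=1) / np.sqrt(per_tree.size)  # plays the role of importanceSD
scaled = raw / se                                       # what the scaled report shows
```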
28,356 | Unit root tests for panel data in R | At the current moment (version 1.2-10, 2012-05-05) it seems that the unbalanced case is not supported. Edit: The issue of unbalanced panel data is solved in version 2.2-2 of plm on CRAN (2020-02-21).
The rest of the answer assumes version 1.2-10:
I've looked at the code, and the final data preparation line (no matter what your initial argument is) is the following:
object <- as.data.frame(split(object, id))
If you pass an unbalanced panel, this line will make it balanced by repeating the same values. If your unbalanced panel has time series with lengths which divide each other, not even an error message is produced. Here is the example from the purtest page:
> data(Grunfeld)
> purtest(inv ~ 1, data = Grunfeld, index = "firm", pmax = 4, test = "madwu")
Maddala-Wu Unit-Root Test (ex. var. : Individual Intercepts )
data: inv ~ 1
chisq = 47.5818, df = 20, p-value = 0.0004868
alternative hypothesis: stationarity
This panel is balanced:
> unique(table(Grunfeld$firm))
[1] 20
Disbalance it:
> gr <- subset(Grunfeld, !(firm %in% c(3,4,5) & year <1945))
Two different time series length in the panel:
> unique(table(gr$firm))
[1] 20 10
No error message:
> purtest(inv ~ 1, data = gr, index = "firm", pmax = 4, test = "madwu")
Maddala-Wu Unit-Root Test (ex. var. : Individual Intercepts )
data: inv ~ 1
chisq = 86.2132, df = 20, p-value = 3.379e-10
alternative hypothesis: stationarity
Another unbalanced panel:
> gr <- subset(Grunfeld, !(firm %in% c(3,4,5) & year <1940))
> unique(table(gr$firm))
[1] 20 15
And the error message:
> purtest(inv ~ 1, data = gr, index = "firm", pmax = 4, test = "madwu")
Erreur dans data.frame(`1` = c(317.6, 391.8, 410.6, 257.7, 330.8, 461.2, :
arguments imply differing number of rows: 20, 15
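The divide-each-other behaviour in the two experiments above comes from base R's recycling rule in data.frame(): a shorter column is silently repeated when its length divides the longer one, and an error is thrown otherwise. A Python mimic of that rule (illustrative only, not R's actual implementation):

```python
def recycle(col, target_len):
    """Mimic base R's recycling rule for data.frame columns."""
    if target_len % len(col) != 0:
        raise ValueError(
            f"arguments imply differing number of rows: {target_len}, {len(col)}")
    return col * (target_len // len(col))

recycle([1, 2], 4)    # [1, 2, 1, 2]: silently recycled, like the 20/10 panel
# recycle([1, 2, 3], 4) raises ValueError, like the 20/15 panel above
```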
28,357 | Unit root tests for panel data in R | Did you try to convert your data to pdata.frame? I have an unbalanced panel also, but purtest seems to work with unbalanced panel if the data is pdata.frame. But I might be wrong too:)
However in ?purtest authors write:
"object, x
Either a 'data.frame' or a matrix containing the time series,
a 'pseries' object, a formula, or the name of a column of a 'data.frame',
or a **'pdata.frame'**
on which the test has to be computed; a 'purtest' object for the print
and summary methods,"
So I guess if one uses pdata.frame then purtest "understands" that the panel is unbalanced.
Am I wrong???
28,358 | Unit root tests for panel data in R | You can certainly use the IPS test on unbalanced data, as long as you use the Wtbar or Ztbar statistic.
For example, the following R code can be used to test stationarity in unbalanced heterogeneous panel data with the IPS test (plm package):
`purtest(data$tot.emp, test = c("ips"), ips.stat="Ztbar", exo="intercept", dfcor=TRUE, lags = c("AIC"), pmax = 10)`
| Unit root tests for panel data in R
28,359 | Unit root tests for panel data in R | Eviews 5 allows you to test panel unit roots on unbalanced data in a way that is not possible with R and Stata. For example, even though the Im–Pesaran–Shin and Fisher-type tests can be applied to an unbalanced panel in Stata, this is not possible if some observations have gaps, i.e. we have data for country i for the years 2002 and 2004 but not 2003 (assuming the lag is greater than one). I think that Eviews drops all such observations while performing the tests; in our example this is country i. However, if you manually drop all such observations, you can still perform the tests with R and Stata.
28,360 | Mean square error or mean squared error | The conceptual uses of "square" and "squared" are subtly different, although (almost) interchangeable:
"Squared" refers to the past action of taking or computing the second power. E.g., $x^2$ is usually read as "x-squared," not "x-square." (The latter is sometimes encountered but I suspect it results from speakers who are accustomed to clipping their phrases or who just haven't heard the terminal dental in "x-squared.")
"Square" refers to the result of taking the second power. E.g., $x^2$ can be referred to as the "square of x." (The illocution "squared of x" is never used.)
These suggest that a person using a phrase like "mean squared error" is thinking in terms of a computation: take the errors, square them, average those. The phrase "mean square error" has a more conceptual feel to it: average the square errors. The user of this phrase may be thinking in terms of square errors rather than the errors themselves. I believe this shows up especially in theoretical literature where the second form, "square," appears more often (I believe: I haven't systematically checked).
Obviously both are equivalent in function and safely interchangeable in practice. It is interesting, though, that some careful Google queries give substantially different hit counts. Presently,
"mean squared" -square -root -Einstein -Relativity
returns about 367,000 results (notice the necessity of ruling out the phrase "$e=m c^2$" popularly quoted in certain contexts, which demands the use of "squared" instead of "square" when written out), while
"mean square" -squared -root -Einstein -Relativity
(maintaining analogous exclusions for comparability) returns an order of magnitude more, at 3.47 million results. This (weakly) suggests people favor "mean square" over "mean squared," but don't take this too much to heart: "mean squared" is used in official SAS documentation, for instance.
"Squared" refers to the past action of taking or computing the second power. E.g., $x^2$ is usu | Mean square error or mean squared error
The conceptual uses of "square" and "squared" are subtly different, although (almost) interchangeable:
"Squared" refers to the past action of taking or computing the second power. E.g., $x^2$ is usually read as "x-squared," not "x-square." (The latter is sometimes encountered but I suspect it results from speakers who are accustomed to clipping their phrases or who just haven't heard the terminal dental in "x-squared.")
"Square" refers to the result of taking the second power. E.g., $x^2$ can be referred to as the "square of x." (The illocution "squared of x" is never used.)
These suggest that a person using a phrase like "mean squared error" is thinking in terms of a computation: take the errors, square them, average those. The phrase "mean square error" has a more conceptual feel to it: average the square errors. The user of this phrase may be thinking in terms of square errors rather than the errors themselves. I believe this shows up especially in theoretical literature where the second form, "square," appears more often (I believe: I haven't systematically checked).
Obviously both are equivalent in function and safely interchangeable in practice. It is interesting, though, that some careful Google queries give substantially different hit counts. Presently,
"mean squared" -square -root -Einstein -Relativity
returns about 367,000 results (notice the necessity of ruling out the phrase "$e=m c^2$" popularly quoted in certain contexts, which demands the use of "squared" instead of "square" when written out), while
"mean square" -squared -root -Einstein -Relativity
(maintaining analogous exclusions for comparability) returns an order of magnitude more, at 3.47 million results. This (weakly) suggests people favor "mean square" over "mean squared," but don't take this too much to heart: "mean squared" is used in official SAS documentation, for instance. | Mean square error or mean squared error
The conceptual uses of "square" and "squared" are subtly different, although (almost) interchangeable:
"Squared" refers to the past action of taking or computing the second power. E.g., $x^2$ is usu |
28,361 | Mean square error or mean squared error | Nope! Both can be used interchangeably :-) it's the same.
28,362 | Mean square error or mean squared error | Mean squared error sounds better to me but indeed both forms are used (see, e.g., the Wikipedia page).
28,363 | Mean square error or mean squared error | They are absolutely NOT the same.
mean SQUARE error: square the quantity => calculate the error => calculate the mean
mean SQUARED error: calculate the error => square the result => calculate the mean
28,364 | What distribution can be closely (or precisely) fit to the "5 number summary" statistics? | user1448319's answer triggered the following thought in my brain. Do a natural cubic spline on the set of points of the form
$(x_p, \Phi^{-1}(p))$
where $x_p$ is the $100p$ percentile and $\Phi^{-1}(\cdot)$ is the quantile function of the normal distribution. Run the resulting interpolating spline function through the normal CDF and take the derivative to obtain the PDF. This procedure has the following properties:
the resulting distribution matches the given percentiles exactly;
the tails are normal;
if the given percentiles actually match those of some normal distribution, the output is that normal distribution;
the numerical computations are dead easy and give analytical expressions for the PDF;
the generalization to other target distributions is obvious.
But the proof is in the pudding. Let me whip up some R code...
elicit_distribution <- function(x, p, qfun = qnorm, pfun = pnorm, dfun = dnorm, range_factor = 1, length.out = 1000, ...)
{
    fun <- splinefun(x, qfun(p), method = "natural", ...)
    cdfun <- function(x) pfun(fun(x, deriv = 0))
    from <- min(x) - range_factor*diff(range(x))
    to <- max(x) + range_factor*diff(range(x))
    xval <- seq(from, to, length.out = length.out)
    list(cdfun = cdfun
        ,pdfun = function(x) fun(x, deriv = 1)*dfun(fun(x, deriv = 0))
        ,quantfun = approxfun(cdfun(xval), xval)
        )
}
plot_elicited_distribution <- function(x, p, qfun = qnorm, pfun = pnorm, dfun = dnorm, range_factor = 0.1, lwd = 2, ylab = "PDF", ...)
{
    dist <- elicit_distribution(x, p, qfun, pfun, dfun)
    from <- min(x) - range_factor*diff(range(x))
    to <- max(x) + range_factor*diff(range(x))
    curve(dist$pdfun(x), from = from, to = to, lwd = lwd, ylab = ylab, ...)
    lineseg <- function(x, y, ...)
        points(c(x, x), c(0, y), type = "l", lwd = lwd, ...)
    col <- function(i) c("red", "green")[1 + ((i - 1) %% 2)]
    xval <- dist$quantfun(p)
    for (i in 1:length(xval))
    {
        points(x[i], dist$pdfun(x[i]), col = col(i), pch = 16)
        lineseg(xval[i], dist$pdfun(xval[i]), col = col(i))
    }
}
x <- c(5, 15, 17, 25, 46)
p <- c(0.01, 0.25, 0.5, 0.75, 0.99)
plot_elicited_distribution(x,p)
(Solid points plotted on the PDF curve show given values. Lines show percentiles of the generated distribution.)
Aw, crap. Add one more property to the list:
no guarantee of unimodality
Let's try a smoothing spline instead. Code as before, except in "elicit_distribution" replace
fun <- splinefun(x, qfun(p), method = "natural")
with
splineobj <- smooth.spline(x, qfun(p))
fun <- function(x, deriv) predict(splineobj, x, deriv)$y
That's a bit better. It's quite similar to the skew-normal plot you posted but it seems to have a different trade-off for awkward percentiles, resulting in a slightly better fit at the median and a slightly worse fit at the 25% point.
28,365 | What distribution can be closely (or precisely) fit to the "5 number summary" statistics? | Why not just use something like a piecewise linear distribution?
Let's say a scientist gives you the values $x_{01}, x_{25}, x_{50}, x_{75}, x_{99}$ which correspond to the 1%, ..., 99% percentiles of the unknown underlying distribution. We want to make a distribution where there is 1% of the mass to the left of $x_{01}$, ..., and 99% of the mass to the left of $x_{99}$.
Let's call this density function $f$, with heights $y_t = f(x_t)$ at the percentile locations.
Let's suppose that the distribution has a finite $x_{00}$ and $x_{100}$. Let's also suppose that we know what $x_{00}$ is. For now, let's pick something like $x_{00} = x_{01} - |x_{25} - x_{01}|$ (just so we have a specific value to do debugging with or something). I'll come back to this later.
Set $y_{00} = 0$. Set $y_{01}$ so that the area under the line segment from $(x_{00},y_{00})$ to $(x_{01},y_{01})$ is equal to 1% (i.e., so that $\int_{x_{00}}^{x_{01}} f(x)\,dx = 0.01$). This gives you a value for $y_{01}$. Now find the value of $y_{25}$ so that the area under the line segment from $(x_{01},y_{01})$ to $(x_{25},y_{25})$ is equal to 25% - 1% = 24%. Do this again to find $y_{50}$, $y_{75}$, and $y_{99}$. Now select the $x_{100}$ which gives you a total area of 100% under the piecewise linear function you've constructed. Now you have a distribution with exactly 1% of the mass to the left of the 1% value the expert told you, 25% to the left of the 25% value the expert told you, etc.
Now, look at your distribution. Pick a value of $x_{00}$ that makes sense. It might be clever to pick some measure to minimize so that $x_{00}$ is selected automatically. For example, you could minimize the total angle of your distribution (e.g., if your distribution is $f$, you could minimize $\int_{-\infty}^\infty \left|{d^2 \over dx^2}f(x)\right| dx$, which is just the sum of the absolute slope changes of $f$ at each of $x_{00},\ldots,x_{100}$).
This seems like the most naive approach to me; it is very flexible, and it has the added benefit of being nonparametric, so you don't have to estimate anything. I hope it's a good starting place.
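The walk described above can be sketched in R. This is my own illustration, not code from the answer: the function name and the non-negativity check are made up, and the check matters because, for awkward percentile inputs, the naive construction can demand a negative density height.

```r
# Sketch of the piecewise-linear construction: choose density heights y so
# that each trapezoid between consecutive elicited percentiles carries
# exactly the required probability mass.
elicit_piecewise <- function(x, p) {
  x0 <- x[1] - abs(x[2] - x[1])      # arbitrary left endpoint, as in the text
  xs <- c(x0, x)
  ys <- 0                            # density is 0 at the left endpoint
  mass <- diff(c(0, p))              # probability mass in each interval
  for (i in seq_along(x)) {
    # trapezoid area: width * (y_prev + y_new)/2 = mass  =>  solve for y_new
    y_new <- 2 * mass[i] / (xs[i + 1] - xs[i]) - ys[i]
    stopifnot(y_new >= 0)            # the naive approach can fail here
    ys <- c(ys, y_new)
  }
  # right endpoint: a triangle holding the remaining mass beyond the last percentile
  x_end <- xs[length(xs)] + 2 * (1 - p[length(p)]) / ys[length(ys)]
  list(x = c(xs, x_end), y = c(ys, 0))
}

# For standard-normal percentiles the construction succeeds and is symmetric
p <- c(0.01, 0.25, 0.5, 0.75, 0.99)
elicit_piecewise(qnorm(p), p)
```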
28,366 | What distribution can be closely (or precisely) fit to the "5 number summary" statistics? | You could achieve this based on the Box-Cox transformation or another power-transformation family (depending on whether your random variable is strictly positive or not). First, you can assume the original unknown distribution is well-behaved (not from a mixed distribution). Then, based on the Box-Cox transformation, the transformed distribution will be approximately normally distributed.
(1). Set the initial values of summary statistics for a normally distributed random variable. The initial values can be calculated by applying the Box-Cox transformation to your reported summary statistics of the unknown distribution $X$. This will give you the initial values of $y_q$ and the initial transformation parameter $\lambda$.
(2). Simulate a normal random variable for the sample size of the study with the initial values from (1), so that $y\sim \text{Normal}(\mu, \sigma^2)$. If you used quantiles in (1), then $\mu$ and $\sigma^2$ can be derived by using the formula $\mu\pm v_q\sigma=y_q$, where $v_q$ is the theoretical quantile value for the normal distribution.
(3). Invert the Box-Cox transformation, $x=(\lambda y+1)^{1/\lambda}$, and calculate summary statistics (sample mean, sample standard deviation, or sample percentile ranges) from the inverted distribution of $x$.
(4). Minimize the sum of squared relative differences $\sum_i \left(\frac{\theta_i-O_i}{O_i}\right)^2$ to obtain the optimal estimates of the normal random variable $Y$, where $\Theta$ is the vector of summary statistics from the inverted distribution and $O$ is the vector of reported summary statistics from the unknown distribution.
(5). Substitute those optimal estimates into (2) and (3) to get the simulated distribution of the unknown.
(6). Go back to (2) and use different random seeds to simulate a new normal distribution.
I hope this helps.
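Steps (2)–(3) can be sketched in a few lines of R. This is my own illustration with assumed initial values for $\lambda$, $\mu$, and $\sigma$, not values from the answer:

```r
# Sketch of steps (2)-(3): simulate a normal sample and push it back
# through the inverse Box-Cox transformation.
set.seed(42)
lambda <- 0.5; mu <- 3; sigma <- 0.4           # assumed initial values
y <- rnorm(1000, mean = mu, sd = sigma)        # step (2): simulate y ~ Normal(mu, sigma^2)
x <- (lambda * y + 1)^(1/lambda)               # step (3): inverse Box-Cox
summary(x)                                     # summary statistics to compare in step (4)
```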
28,367 | Parametric, semiparametric and nonparametric bootstrapping for mixed models | Bootstrapping in mixed linear models is very much like bootstrapping in regression, except that you have the complication that the effects are divided into fixed and random. In regression, to do the parametric bootstrap, you fit the parametric model to the data, compute the model residuals, bootstrap the residuals, add the bootstrapped residuals to the fitted model to get a bootstrap sample of the data, and then fit the model to the bootstrap data to get bootstrap parameter estimates. You repeat the procedure by bootstrapping the original residuals again and then repeating the other steps to get another bootstrap estimate of the parameters.
For the nonparametric bootstrap, you create the vector of the response and covariate values and bootstrap the selection of vectors for the bootstrap sample. From the bootstrap sample, you fit the model to get the parameters and you repeat the process. The only difference between the parametric and nonparametric bootstrap is that you bootstrap the residuals for the parametric bootstrap while the nonparametric bootstrap bootstraps the vectors. In the mixed model case you also can have a semiparametric bootstrap by treating some effects parametrically and the others nonparametrically. If your code is bootstrapping vectors you are doing the nonparametric bootstrap. I don't have a specific solution to provide for doing this in R but if you look at Efron and Tibshirani's book or my book with Robert LaBudde you will see R code for similar types of models to the linear mixed model. The nonparametric bootstrap has been shown to be more robust than the parametric bootstrap when the model is misspecified.
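The residual-bootstrap recipe described above — fit, resample residuals, add them back to the fitted values, refit, collect estimates — can be sketched for ordinary regression in base R. The data here are simulated for illustration:

```r
# Residual bootstrap for ordinary least squares regression
set.seed(1)
n <- 100
x <- runif(n)
y <- 2 + 3*x + rnorm(n, sd = 0.5)                 # simulated data
fit <- lm(y ~ x)
res <- resid(fit)
fitted_y <- fitted(fit)

B <- 1000
boot_coefs <- replicate(B, {
  y_star <- fitted_y + sample(res, n, replace = TRUE)  # bootstrap response
  coef(lm(y_star ~ x))                                 # refit and keep coefficients
})
apply(boot_coefs, 1, sd)   # bootstrap standard errors of intercept and slope
```

The nonparametric (vector) bootstrap differs only in resampling the $(y, x)$ pairs themselves rather than the residuals.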
28,368 | Parametric, semiparametric and nonparametric bootstrapping for mixed models | You might want to have a look at the bootMer function in the development version of lme4,
install_github("lme4",user="lme4")
library(lme4)
that can do model-based (semi-)parametric bootstrapping of mixed models...
Just check ?bootMer
install_github("lme4",user="lme4")
library(lme4)
that can do model-based (semi-)parametric bootstrapping of m | Parametric, semiparametric and nonparametric bootstrapping for mixed models
You might want to have a look at the bootMer function in the development version of lme4,
install_github("lme4",user="lme4")
library(lme4)
that can do model-based (semi-)parametric bootstrapping of mixed models...
Just check ?bootMer | Parametric, semiparametric and nonparametric bootstrapping for mixed models
You might want to have a look at the bootMer function in the development version of lme4,
install_github("lme4",user="lme4")
library(lme4)
that can do model-based (semi-)parametric bootstrapping of m |
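A brief usage sketch, not from the answer: the model and data are lme4's sleepstudy example, and the call assumes the bootMer interface as documented (it refits the model to simulated responses and applies FUN to each refit):

```r
# Parametric bootstrap of the fixed effects of a mixed model via bootMer
library(lme4)
fit <- lmer(Reaction ~ Days + (Days | Subject), data = sleepstudy)

bs <- bootMer(fit, FUN = fixef, nsim = 200)   # 200 parametric bootstrap refits

# Bootstrap standard errors of the fixed-effect estimates
apply(bs$t, 2, sd)
```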
28,369 | How to quantify statistical insignificance? | If you are comparing two groups and want to show no significant difference, this is called equivalence testing. It essentially reverses the null and alternative hypotheses. The idea is to define an interval of insignificance called the window of equivalence. This is used a lot when trying to show that a generic drug is a suitable replacement for a marketed drug. A good source to read about this is William Blackwelder's paper titled “Proving the null hypothesis” in clinical trials.
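A common way to carry this out is the two one-sided tests (TOST) procedure, sketched below in base R with simulated data and an assumed equivalence window of $(-1, 1)$ — the window itself must be chosen on substantive grounds:

```r
# Equivalence testing via two one-sided tests (TOST): declare equivalence if
# the group difference is significantly greater than -delta AND
# significantly less than +delta.
set.seed(1)
a <- rnorm(50, mean = 10.0, sd = 2)
b <- rnorm(50, mean = 10.1, sd = 2)
delta <- 1                               # window of equivalence: (-1, 1)

p_lower <- t.test(a, b, mu = -delta, alternative = "greater")$p.value
p_upper <- t.test(a, b, mu =  delta, alternative = "less")$p.value

# Equivalence at level alpha if both one-sided p-values fall below alpha
max(p_lower, p_upper) < 0.05
```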
28,370 | Simple combination/probability question based on string-length and possible-characters | Total number of possibilities
1) Close! You've got 62 choices for the first character, 62 for the 2nd, etc, so you end up with $62 \cdot 62 \cdots 62 = 62^{20}$, which is an absurdly huge number.
Collision with a "Target" String
2) As we established above, there are $62^{20}$ potential strings. You want to know how many you'd need to guess to have better than 1 in 100,000 odds of guessing the "target" string. Essentially, you're asking for the $x$ satisfying $$\frac{x}{62^{20}} \ge \frac{1}{10^5}.$$ To get it spot on, you'd have to round $x$ up (or add one, if they're precisely equal), but as you'll see in a second, it doesn't really matter.
Through basic algebra, we can rearrange that as
$$\begin{aligned}
10^5x &\ge 62^{20}\\
10^5{x} &\ge (6.2 \cdot 10)^{20}\\
10^5x &\ge 6.2^{20} \cdot 10^{20}\\
x &\ge 6.2^{20} \cdot 10^{15}
\end{aligned}$$
Doing the math, $6.2^{20}$ is about $7 \cdot 10^{15}$, so let's call the whole thing $7 \cdot 10^{30}$ or, more succinctly, a whole heck of a lot.
This is, of course, why long passwords work really well :-) For real passwords, of course, you have to worry about strings of length less than or equal to twenty, which increases the number of possibilities even more.
Duplicates in the list
Now, let's consider the other scenario. Strings are generated at random and we want to determine how many can be generated before there's a 1:100,000 chance of any two strings matching. The classic version of this problem is called the Birthday Problem (or 'Paradox') and asks for the probability that two of $n$ people share the same birthday. The wikipedia article[1] looks decent and has some tables that you might find useful. Nevertheless, I'll try to give you the flavor of the answer here too.
Some things to keep in mind:
-The probability of a match and of not having a match must sum to 1, so $P(\textrm{match}) = 1 - P(\textrm{no match})$ and vice versa.
-For two independent events $A$ and $B$, the probability of $P(A \& B) = P(A) \cdot P(B)$.
To get the answer, we're going to start by calculating the probability of not seeing a match for a fixed number of strings $k$. Once we know how to do that, we can set that equation equal to the threshold ($1 - 1/100{,}000$, the required probability of no match) and solve for $k$.
For convenience, let's call $N$ the number of possible strings ($62^{20}$).
We're going to 'walk' down the list and calculate the probability that the $k$th string does not match any of the strings "above" it in the list. For the first string, we've got $N$ total strings and nothing in the list, so $P_{k=1}(\textrm{no match}) = \frac{N}{N} = 1$. For the second string, there are still $N$ total possibilities, but one of those has been "used up" by the first string, so the probability of no match for this string is $P_{k=2}(\textrm{no match}) = \frac{N-1}{N}$. For the third string, there are two ways for it to match and therefore $N-2$ ways not to, so $P_{k=3}(\textrm{no match}) = \frac{N-2}{N}$, and so on. In general, the probability of the $k$th string not matching the others is $$P_{k}(\textrm{no match})= \frac{N-k+1}{N}$$
However, we want the probability of no matches between any of the $k$ strings. Since all of the events are independent (per the question), we can just multiply these probabilities together, like this:
$$P(\textrm{No Matches}) = \frac{N}{N} \cdot \frac{N-1}{N} \cdot \frac{N-2}{N} \cdots \frac{N-k+1}{N}$$
That can be simplified a little bit:
$$\begin{aligned}
P(\textrm{No Matches}) &= \frac{N \cdot (N-1) \cdot (N-2) \cdots (N-k+1)}{N^k} \\
P(\textrm{No Matches}) &= \frac{N!}{N^k \cdot (N-k)!} \\
P(\textrm{No Matches}) &= \frac{k! \cdot \binom{N}{k}}{N^k} \\
\end{aligned}
$$
The first step just multiplies the fractions together, the second uses the definition of factorial ($k! = k \cdot (k-1) \cdot (k-2) \cdots 1$) to replace the product $(N-k+1) \cdots N$ with something a little more manageable, and the final step swaps in a binomial coefficient. This gives us an equation for the probability of having no matches at all after generating $k$ strings. In theory, you could set that equal to $1 - \frac{1}{100{,}000}$ and solve for $k$. In practice, it's going to be difficult to get an answer since you'll be multiplying/dividing by huge numbers--factorials grow really quickly ($100!$ is more than 150 digits long).
However, there are approximations, both for computing the factorial and for the whole problem. This paper[2] suggests $$ k = 0.5 + \sqrt{0.25 - 2N\ln(p)}$$ where p is the probability of not seeing a match. His tests max out at $N=48,000$, but it's still pretty accurate there. Plugging in your numbers, I get approximately $3.7 \cdot 10^{15}$.
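Both headline numbers above can be checked numerically in R (double precision handles $62^{20} \approx 7.04 \cdot 10^{35}$ without trouble):

```r
# Checking the two calculations numerically
N <- 62^20                              # total number of possible strings, ~7.04e35
guesses <- N / 1e5                      # part 2: ~7e30 guesses for 1-in-100,000 odds
p <- 1 - 1/1e5                          # desired probability of NO collision
k <- 0.5 + sqrt(0.25 - 2 * N * log(p))  # Mathis approximation: ~3.75e15 strings
c(guesses = guesses, k = k)
```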
References
[1] http://en.wikipedia.org/wiki/Birthday_problem
[2] Mathis, Frank H. (June 1991). "A Generalized Birthday Problem". SIAM Review (Society for Industrial and Applied Mathematics) 33 (2): 265–270. JSTOR Link | Simple combination/probability question based on string-length and possible-characters | Total number of possibilities
1) Close! You've got 62 choices for the first character, 62 for the 2nd, etc, so you end up with $62 \cdot 62 \cdot 62 \cdot \cdots 62 = 62^{20}$, which is an absurdly h | Simple combination/probability question based on string-length and possible-characters
Total number of possibilities
1) Close! You've got 62 choices for the first character, 62 for the 2nd, etc, so you end up with $62 \cdot 62 \cdot 62 \cdot \cdots 62 = 62^{20}$, which is an absurdly huge number.
Collision with a "Target" String
2) As we established above, there are $62^{20}$ potential strings. You want to know how many you'd need to guess to have better than 1 in 100,000 odds of guessing the "target" string. Essentially, you're asking for the smallest $x$ such that $$\frac{x}{62^{20}} \ge \frac{1}{10^5}.$$ To get it spot on, you'd have to round $x$ up (or add one, if they're precisely equal), but as you'll see in a second, it doesn't really matter.
Through basic algebra, we can rearrange that as
$$\begin{aligned}
10^5x &\ge 62^{20}\\
10^5{x} &\ge (6.2 \cdot 10)^{20}\\
10^5x &\ge 6.2^{20} \cdot 10^{20}\\
x &\ge 6.2^{20} \cdot 10^{15}
\end{aligned}$$
Doing the math, $6.2^{20}$ is about $7 \cdot 10^{15}$, so let's call the whole thing $7 \cdot 10^{30}$ or, more succinctly, a whole heck of a lot.
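Python's exact big integers make it easy to sanity-check this arithmetic (a minimal sketch, not part of the original answer):

```python
# Sanity check of the algebra above with exact integer arithmetic.
N = 62 ** 20        # total number of possible 20-character strings
x = N // 10 ** 5    # guesses needed for ~1-in-100,000 odds of a hit

# x should come out around 7 * 10^30, as claimed above
assert 6.9 < x / 10 ** 30 < 7.2
```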
This is, of course, why long passwords work really well :-) For real passwords, of course, you have to worry about strings of length less than or equal to twenty, which increases the number of possibilities even more.
Duplicates in the list
Now, let's consider the other scenario. Strings are generated at random and we want to determine how many can be generated before there's a 1:100,000 chance of any two strings matching. The classic version of this problem is called the Birthday Problem (or 'Paradox') and asks for the probability that two of $n$ people have the same birthday. The Wikipedia article[1] looks decent and has some tables that you might find useful. Nevertheless, I'll try to give you the flavor for the answer here too.
Some things to keep in mind:
-The probability of a match and of not having a match must sum to 1, so $P(\textrm{match}) = 1 - P(\textrm{no match})$ and vice versa.
-For two independent events $A$ and $B$, the probability of $P(A \& B) = P(A) \cdot P(B)$.
To get the answer, we're going to start by calculating the probability of not seeing a match for a fixed number of strings $k$. Once we know how to do that, we can set that equation equal to the threshold (1/100,000) and solve for $k$.
For convenience, let's call $N$ the number of possible strings ($62^{20}$).
We're going to 'walk' down the list and calculate the probability that the $k$th string matches any of the strings "above" it in the list. For the first string, we've got $N$ total strings and nothing in the list, so $P_{k=1}(\textrm{no match}) = \frac{N}{N} = 1$. For the second string, there are still $N$ total possibilities, but one of those has been "used up" by the first string, so the probability of no match for this string is $P_{k=2}(\textrm{no match}) = \frac{N-1}{N}$. For the third string, there are two ways for it to match and therefore $N-2$ ways not to, so $P_{k=3}(\textrm{no match}) = \frac{N-2}{N}$, and so on. In general, the probability of the $k$th string not matching the others is $$P_{k}(\textrm{no match})= \frac{N-k+1}{N}$$
However, we want the probability of no matches between any of the $k$ strings. Since all of the events are independent (per the question), we can just multiply these probabilities together, like this:
$$P(\textrm{No Matches}) = \frac{N}{N} \cdot \frac{N-1}{N} \cdot \frac{N-2}{N} \cdots \frac{N-k+1}{N}$$
That can be simplified a little bit:
$$\begin{aligned}
P(\textrm{No Matches}) &= \frac{N \cdot (N-1) \cdot (N-2) \cdots (N-k+1)}{N^k} \\
P(\textrm{No Matches}) &= \frac{N!}{N^k \cdot (N-k)!} \\
P(\textrm{No Matches}) &= \frac{k! \cdot \binom{N}{k}}{N^k} \\
\end{aligned}
$$
The first step just multiplies the fractions together, the second uses the definition of factorial ($k! = (k) \cdot (k-1) \cdot (k-2) \cdots 1$) to replace the product $N-k+1 \cdots N$ with something a little more manageable, and the final step swaps in a binomial coefficient. This gives us an equation for the probability of having no matches at all after generating $k$ strings. In theory, you could set that equal to $1 - \frac{1}{100,000}$ and solve for $k$. In practice, it's going to be difficult to get an answer since you'll be multiplying/dividing by huge numbers--factorials grow really quickly ($100!$ is more than 150 digits long).
However, there are approximations, both for computing the factorial and for the whole problem. This paper[2] suggests $$ k = 0.5 + \sqrt{0.25 - 2N\ln(p)}$$ where $p$ is the probability of not seeing a match. His tests max out at $N=48,000$, but it's still pretty accurate there. Plugging in your numbers, I get approximately $3.7 \cdot 10^{15}$.
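As a sketch (the function names are mine; it assumes the approximation exactly as quoted), we can compare the exact no-match product with the approximation at a small $N$, and reproduce the $3.7 \cdot 10^{15}$ figure:

```python
import math

def p_no_match_exact(N, k):
    """Exact P(no match): product of (N - i)/N for i = 0..k-1."""
    p = 1.0
    for i in range(k):
        p *= (N - i) / N
    return p

def k_mathis(N, p):
    """Mathis' approximation for the number of draws before
    P(no match) falls to p."""
    return 0.5 + math.sqrt(0.25 - 2 * N * math.log(p))

# at N = 48,000 (the largest case tested in the paper), the k that gives
# a 50/50 chance of a collision should make the exact product ~0.5
k50 = round(k_mathis(48_000, 0.5))
assert abs(p_no_match_exact(48_000, k50) - 0.5) < 0.01

# the figure quoted above: N = 62**20, p = 1 - 1/100,000 gives k ~ 3.7e15
k = k_mathis(62 ** 20, 1 - 1e-5)
assert 3.6e15 < k < 3.9e15
```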
References
[1] http://en.wikipedia.org/wiki/Birthday_problem
[2] Mathis, Frank H. (June 1991). "A Generalized Birthday Problem". SIAM Review (Society for Industrial and Applied Mathematics) 33 (2): 265–270. JSTOR Link
Simple combination/probability question based on string-length and possible-characters
I wrote a calculator for general cases of this problem. Let's say you have 6 unique characters (A,B,C,D,E,F) and each combination is 3 characters in length (e.g. "DBF", "EAC"...). If you have 12 samples, then the collision probability is 26.7%.
https://codepen.io/Walkipedia/pen/wvzeZeM
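The calculator's arithmetic can be reproduced in a few lines (a sketch of the standard birthday-collision product; the function name is mine):

```python
def collision_probability(alphabet_size, length, samples):
    """P(at least two of `samples` random strings coincide), given
    N = alphabet_size ** length equally likely strings."""
    N = alphabet_size ** length
    p_no_match = 1.0
    for i in range(samples):
        p_no_match *= (N - i) / N
    return 1.0 - p_no_match

# the example above: 6 characters, length 3, 12 samples -> ~26.7%
assert abs(collision_probability(6, 3, 12) - 0.267) < 0.001
```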
Automatic data cleansing
Dimensionality reduction via something like PCA would be helpful to get an idea of the number of dimensions that are critical to represent your data.
To check for misclassified instances, you can do a rudimentary k-means clustering of your data to get an idea of how well your raw data would fit your proposed categories. While not automatic, visualizing at this stage would be helpful, as your visual brain is a powerful classifier in and of itself.
In terms of data that are outright missing, statistics has numerous techniques to deal with that situation already, including imputation, taking data from the existing set or another set to fill in the gaps.
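For the missing-data case, mean imputation is the simplest of the techniques mentioned (a minimal sketch; real work would prefer something like multiple imputation):

```python
import statistics

def impute_mean(column):
    """Fill None entries with the mean of the observed values."""
    observed = [v for v in column if v is not None]
    fill = statistics.mean(observed)
    return [fill if v is None else v for v in column]

assert impute_mean([1.0, None, 3.0]) == [1.0, 2.0, 3.0]
```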
Automatic data cleansing
You can't really remove a knowledgeable person from the loop and expect reasonable results. That doesn't mean that the person has to look at every single item individually, but ultimately it takes some actual knowledge to know if summaries/graphs of data are reasonable. (For example: can variable A be negative, can variable B be larger than variable A, or are there 4 or 5 choices for categorical variable C?)
Once you've had a knowledgeable human look at the data, you can probably make a series of rules that you could use to test the data automatically. The problem is, other errors can arise that you haven't thought about. (For example, a programming error in the data gathering process that duplicates variable A to variable C.)
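Such a rule set might be encoded as simple predicates (hypothetical example rules; the field names A, B, C follow the examples in the text):

```python
# Each rule is a (description, predicate) pair a record must satisfy.
RULES = [
    ("A is non-negative",   lambda r: r["A"] >= 0),
    ("B does not exceed A", lambda r: r["B"] <= r["A"]),
    ("C is a known choice", lambda r: r["C"] in {"w", "x", "y", "z"}),
]

def violations(record):
    """Return the descriptions of all rules a record breaks."""
    return [name for name, ok in RULES if not ok(record)]

assert violations({"A": 5, "B": 3, "C": "x"}) == []
assert violations({"A": 5, "B": 9, "C": "x"}) == ["B does not exceed A"]
```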
Automatic data cleansing
If you know that your data is not quite good, it is always worth checking for outliers as well. Most of the time there are anomalies.
If you have a lot of features, dimensionality reduction is a must. PCA is quite efficient for that.
If you have missing data, you can use imputation or interpolation, but if your needs allow it, the winning case is to use collaborative filtering.
Mixing and dividing point processes
To answer this question we need a little background and notation. In the general terminology, let $N$ denote a point process in the plane, which means that for any Borel set, $A$, in the plane, $N(A)$ is an integer-valued (including $+\infty$) random variable, which counts the number of points in $A$. Moreover, $A \mapsto N(A)$ is a measure for each realization of the point process $N$.
Associated with the point process is the expectation measure
$$A \mapsto \mu(A) := E(N(A))$$
where the expectation is always well defined, since $N(A) \geq 0$, but may be $+\infty$. It is left as an exercise to verify that $\mu$ is again a measure. To avoid technical issues, let's assume that $\mu(\mathbf{R}^2) < \infty$, which is also reasonable if the process only really lives on a bounded set such as the box in the figure that the OP posted. It implies that $N(A) < \infty$ a.s. for all $A$.
The following definitions and observations follow.
We say that $N$ has intensity $\lambda$ if $\mu$ has density $\lambda$ w.r.t. the Lebesgue measure, that is, if
$$\mu(A) = \int_A \lambda(x) \mathrm{d}x.$$
If $N_1$ and $N_2$ are two point processes we define the superposition as the sum $N_1 + N_2$. This is equivalent to superimposing one point pattern on top of the other.
If $N_1$ and $N_2$ are two point processes (independent or not) with intensities $\lambda_1$ and $\lambda_2$ then the superposition has intensity $\lambda_1 + \lambda_2$.
If $N_1$ and $N_2$ are independent Poisson processes then the superposition is a Poisson process. To show this we first observe that $N_1(A) + N_2(A)$ is Poisson from the convolution properties of the Poisson distribution, and then that if $A_1, \ldots, A_n$ are disjoint then $N_1(A_1) + N_2(A_1), \ldots, N_1(A_n) + N_2(A_n)$ are independent because $N_1$ and $N_2$ are independent and Poisson processes themselves. These two properties characterize a Poisson process.
Summary I: We have shown that whenever a point process is a sum, or superposition, of two point processes with intensities then the superposition has as intensity the sum of the intensities. If, moreover, the processes are independent Poisson the superposition is Poisson.
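The convolution step in the Poisson argument can be checked numerically (a sketch using exact Poisson pmfs; the intensities 2 and 3 are illustrative):

```python
import math

def poisson_pmf(lam, k):
    """P(X = k) for X ~ Poisson(lam)."""
    return math.exp(-lam) * lam ** k / math.factorial(k)

def convolution_pmf(a, b, k):
    """P(X + Y = k) for independent X ~ Poisson(a), Y ~ Poisson(b)."""
    return sum(poisson_pmf(a, j) * poisson_pmf(b, k - j) for j in range(k + 1))

# the superposition count should follow Poisson(a + b)
for k in range(15):
    assert abs(convolution_pmf(2.0, 3.0, k) - poisson_pmf(5.0, k)) < 1e-12
```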
For the remaining part of the question we assume that $N(\{x\}) \leq 1$ a.s. for all singleton sets $\{x\}$. Then the point process is called simple. Poisson processes with intensities are simple. For a simple point process there is a representation of $N$ as
$$N = \sum_i \delta_{X_i},$$
that is, as a sum of Dirac measures at the random points. If $Z_i \in \{0,1\}$ are Bernoulli random variables, a random thinning is the simple point process
$$N_1 = \sum_i Z_i \delta_{X_i}.$$
It is quite clear that with
$$N_2 = \sum_i (1-Z_i) \delta_{X_i}$$
it holds that $N = N_1 + N_2$. If we do i.i.d. random thinning, meaning that the $Z_i$'s are all independent and identically distributed with success probability $p$, say, then
$$N_1(A) \mid N(A) = n \sim \text{Bin}(n, p).$$
From this,
$$E(N_1(A)) = E \big(E(N_1(A) \mid N(A))\big) = E(N(A)p) = p \mu(A).$$
If $N$ is a Poisson process, it should be clear that for disjoint $A_1, \ldots, A_n$ the counts $N_1(A_1), \ldots, N_1(A_n)$ are again independent, and
$$
\begin{array}{rcl}
P(N_1(A) = k) & = & \sum_{n=k}^{\infty} P(N_1(A) = k \mid N(A) = n)P(N(A) = n) \\
& =& e^{-\mu(A)} \sum_{n=k}^{\infty} {n \choose k} p^k(1-p)^{n-k} \frac{\mu(A)^n}{n!} \\
& = & \frac{(p\mu(A))^k}{k!}e^{-\mu(A)} \sum_{n=k}^{\infty} \frac{((1-p)\mu(A))^{n-k}}{(n-k)!} \\
& = & \frac{(p\mu(A))^k}{k!}e^{-\mu(A) + (1-p)\mu(A)} = e^{-p\mu(A)}\frac{(p\mu(A))^k}{k!}.
\end{array}
$$
This shows that $N_1$ is a Poisson process. Similarly, $N_2$ is a Poisson process (with mean measure $(1-p)\mu$). What is left is to show that $N_1$ and $N_2$ are, in fact, independent. We cut a corner here and say that it is actually sufficient to show that $N_1(A)$ and $N_2(A)$ are independent for arbitrary $A$, and this follows from
$$
\begin{array}{rcl}
P(N_1(A) = k, N_2(A) = r) & = & P(N_1(A) = k, N(A) = k + r) \\
& = & P(N_1(A) = k \mid N(A) = k + r) P(N(A) = k + r) \\
& = & e^{-\mu(A)} {k+r \choose k} p^k(1-p)^{r} \frac{\mu(A)^{k+r}}{(k+r)!} \\
& = & e^{-p\mu(A)}\frac{(p\mu(A))^k}{k!} e^{-(1-p)\mu(A)}\frac{((1-p)\mu(A))^r}{r!} \\
& = & P(N_1(A) = k)P(N_2(A) = r).
\end{array}
$$
Summary II: We conclude that i.i.d. random thinning with success probability $p$ of a simple point process, $N$, with intensity $\lambda$ results in two simple point processes, $N_1$ and $N_2$, with intensities $p\lambda$ and $(1-p)\lambda$, respectively, and $N$ is the superposition of $N_1$ and $N_2$. If, moreover, $N$ is a Poisson process then $N_1$ and $N_2$ are independent Poisson processes.
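A small simulation illustrates Summary II (a sketch with made-up parameters; it uses Knuth's method to draw Poisson counts, since Python's standard library has no Poisson sampler):

```python
import math
import random

def sample_poisson(lam, rng):
    """Knuth's method: count uniforms until their product drops below e^-lam."""
    limit, k, prod = math.exp(-lam), 0, 1.0
    while True:
        prod *= rng.random()
        if prod <= limit:
            return k
        k += 1

def poisson_process(lam, rng):
    """Homogeneous Poisson process with intensity lam on the unit square."""
    return [(rng.random(), rng.random()) for _ in range(sample_poisson(lam, rng))]

def thin(points, p, rng):
    """i.i.d. random thinning: keep each point independently with probability p."""
    kept, dropped = [], []
    for pt in points:
        (kept if rng.random() < p else dropped).append(pt)
    return kept, dropped

rng = random.Random(1)
pts = poisson_process(50.0, rng)
n1, n2 = thin(pts, 0.3, rng)
assert len(n1) + len(n2) == len(pts)   # N is the superposition of N1 and N2
```

On average the two thinned patterns have intensities $p\lambda$ and $(1-p)\lambda$, here $15$ and $35$ points per unit square.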
It is natural to ask if we could thin independently without assuming that the $Z_i$'s are identically distributed and obtain similar results. This is possible, but a little more complicated to formulate, because the distribution of $Z_i$ then has to be linked to the $X_i$ somehow. For instance, $P(Z_i = 1 \mid N) = p(x_i)$ for a given function $p$. It is then possible to show the same result as above but with the intensity $p\lambda$ meaning the function $p(x)\lambda(x)$. We skip the proof. The best general mathematical reference covering spatial point processes is Daley and Vere-Jones. A close second covering statistics and simulation algorithms, in particular, is Møller and Waagepetersen.
Allow data to dictate the priors and then run the model using these priors? (e.g., data-driven priors from same data set)
Yes, this is inappropriate because it uses the same data twice, leading to falsely overconfident results. This is known as 'double dipping'.
For references, I would start with Carlin and Louis (2000). Although 'double dipping' has been one of the primary critiques of Empirical Bayes, Ch. 3, in particular section 3.5, of this book describes ways to estimate appropriate confidence intervals using the EB approach.
Berger, J. (2006). "The Case for Objective Bayesian Analysis." Bayesian Analysis, 1(3), 385–402.
Carlin, B. P. & Louis, T. A. (2000). Bayes and Empirical Bayes Methods for Data Analysis.
Darniede, W. F. (2011). Bayesian Methods for Data-Dependent Priors. MS Thesis, Ohio State Univ.
Gelman, A., Carlin, J. B., Stern, H. S., & Rubin, D. B. (2003). Bayesian Data Analysis, Second Edition (Chapman & Hall/CRC Texts in Statistical Science). Chapman and Hall/CRC, 2nd ed.
Allow data to dictate the priors and then run the model using these priors? (e.g., data-driven priors from same data set)
It can make sense to use the data to build the prior, though.
For an example in mixture modelling, see Richardson & Green (1997):
http://citeseer.ist.psu.edu/viewdoc/summary?doi=10.1.1.27.3667
They use the mean and the range of the data points as hyperparameters for the prior and it makes perfect sense.
The problem of using the data twice occurs when an informative prior is derived from the data, in my opinion.
As long as you check that your prior distribution is "flat" where the posterior distribution is peaked, you know that your prior distribution does not have a strong impact on the results.
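In that spirit, a data-driven but weakly informative prior might be set up like this (a hypothetical sketch loosely modeled on the Richardson & Green idea, not their actual specification):

```python
def data_driven_prior(data):
    """Center a diffuse normal prior for a location parameter at the data
    midrange, with the full data range as its standard deviation."""
    lo, hi = min(data), max(data)
    return (lo + hi) / 2, hi - lo   # (prior mean, prior sd)

mu0, sd0 = data_driven_prior([4.1, 5.0, 3.8, 6.2, 5.5])
assert abs(mu0 - 5.0) < 1e-9 and abs(sd0 - 2.4) < 1e-9
```

Because the prior sd spans the whole observed range, the prior stays flat relative to any reasonably peaked posterior.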
How can I simulate census microdata for small areas using a 1% microdata sample at a large scale and aggregate statistics at the small area scale?
Dasymetric mapping is mainly focused on interpolating population estimates to smaller areas than available in currently disseminated data (see this question for a host of useful references on the topic). Frequently this was done by simply identifying areas (based on land characteristics) in which obviously no population exists, and then re-estimating population densities (omitting those areas). An example might be if there is a body of water in a city; another might be if you identify industrial land parcels which cannot have any residential population. More recent approaches to dasymetric mapping incorporate other ancillary data in a probabilistic framework to allocate population estimates (Kyriakidis, 2004; Liu et al., 2008; Lin et al., 2011; Zhang & Qiu, 2011).
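The binary version of this idea can be sketched in a few lines (hypothetical numbers; real applications derive the habitable mask from land-cover data):

```python
def dasymetric_allocate(total_pop, zones):
    """Reallocate a tract's population to sub-zones in proportion to
    habitable area only; zones is a list of (area, habitable) pairs."""
    habitable = sum(area for area, ok in zones if ok)
    return [total_pop * area / habitable if ok else 0.0 for area, ok in zones]

# a 1000-person tract with three sub-zones; the middle one is a lake
alloc = dasymetric_allocate(1000, [(2.0, True), (1.0, False), (3.0, True)])
assert alloc == [400.0, 0.0, 600.0]
```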
Now it is easy to see the relation to your question at hand. You want the population estimates of the small areas. But, it should also be clear how it may fall short of your goals. You not only want the population data, but characteristics of those populations as well. One of the terms used to describe this situation is the change of support problem (Cressie, 1996; Gotway & Young, 2002). Borrowing from the geostatistical literature in which one tries to make predictions of a certain characteristic over a wide area from point samples, recent work has attempted to interpolate areal data to different target zones. Much of the work of Pierre Goovaerts focuses on such area-to-point kriging methods, a recent article in the journal Geographical Analysis has several examples of the method applied to different subject materials (Haining et al., 2010), and one of my favorite applications of it is in this article (Young et al., 2009).
What I cite should hardly be viewed as a panacea to the problem though. Ultimately many of the same issues with ecological inference and aggregation bias apply to the goals of areal interpolation as well. It is likely that many of the relationships between the micro level data are simply lost in the aggregation process, and such interpolation techniques will not be able to recover them. Also the process through which the data is empirically interpolated (through estimating variograms from the aggregate level data) is often quite full of ad-hoc steps which should make the process questionable (Goovaerts, 2008).
Unfortunately, I post this in a separate answer as the ecological inference literature and the literature on dasymetric mapping and area-to-point kriging are non-overlapping, although the literature on ecological inference has many implications for these techniques. Not only are the interpolation techniques subject to aggregation bias, but the intelligent dasymetric techniques (which use the aggregate data to fit models to predict the smaller areas) are likely susceptible to aggregation bias. Knowledge of the situations in which aggregation bias occurs should be enlightening as to the situations in which areal interpolation and dasymetric mapping will largely fail (especially in regards to identifying correlations between different variables at the disaggregated level).
Citations
Cressie N. (1996). Change of support and the modifiable areal unit problem. Geographical Systems 3: 159-180.
Gotway C.A. & L. J. Young (2002). Combining incompatible spatial data. Journal of the American Statistical Association 97(458): 632-648. (PDF here)
Goovaerts P. (2008). Kriging and semivariogram deconvolution in the presence of irregular geographical units. Mathematical Geosciences 40(1): 101-128 (PDF here)
Haining, R.P., R. Kerry & M.A. Oliver (2010). Geography, spatial data analysis, and geostatistics: An overview. Geographical Analysis 42(1): 7-31.
Kyriakidis P.C. (2004). A geostatistical framework for area-to-point spatial interpolation. Geographical Analysis 36(3): 259-289. (PDF here)
Liu X.H., P.C. Kyriakidis & M.F. Goodchild (2008). Population-density estimation using regression and area-to-point residual kriging. International Journal of Geographical Information Science 22(4): 431-447.
Lin J., R. Cromley & C. Zhang (2011). Using geographically weighted regression to solve the areal interpolation problem. Annals of GIS 17(1): 1-14.
Young, L.J., C.A. Gotway, J. Yang, G. Kearney & C. DuClos (2009). Linking health and environmental data in geographical analysis: It's so much more than centroids. Spatial and Spatio-temporal Epidemiology 1(1): 73-84.
Zhang C. & F. Qiu (2011). A point-based intelligent approach to areal interpolation. The Professional Geographer 63(2): 262-276. (PDF here)
How can I simulate census microdata for small areas using a 1% microdata sample at a large scale and aggregate statistics at the small area scale?
The work of Gary King, in particular his book "A Solution to the Ecological Inference Problem" (the first two chapters are available here), would be of interest (as well as the accompanying software he uses for ecological inference). King shows in his book how the estimates of regression models using aggregate data can be improved by examining the potential bounds lower level groupings have based on available aggregate data. The fact that your data are mostly categorical groupings makes them amenable to this technique. (Although don't be fooled, it's not as much an omnibus solution as you might hope given the title!) More current work exists, but King's book is IMO the best place to start.
Another possibility would be just to represent the potential bounds of the data themselves (in maps or graphs). So for example you may have the sex distribution reported at the aggregate level (say 5,000 men and 5,000 women), and you know this aggregate level encompasses 2 different small area units with populations of 9,000 and 1,000 individuals. You could then represent this as a contingency table of the form:
        Men    Women
Unit1    ?      ?      9000
Unit2    ?      ?      1000
        5000   5000
Although you don't have the information in the cells for the lower level aggregations, from the marginal totals we can construct minimum and maximum potential values for each cell. So, in this example the Men X Unit1 cell can only take values between 4,000 and 5,000 (the more uneven the marginal distributions, the narrower the interval of possible values a cell can take). Apparently getting the bounds of the table is more difficult than I expected it to be (Dobra & Fienberg, 2000), but it appears a function is available in the eiPack library in R (Lau et al., 2007, p. 43).
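For the 2x2 case above, the bounds can be computed directly from the margins (eiPack handles the general R x C problem; this sketch covers only the 2x2 arithmetic):

```python
# Frechet-style bounds for a cell in a 2x2 table given only the margins:
# the count of (Men, Unit1) must satisfy
#   max(0, row_total + col_total - N) <= cell <= min(row_total, col_total).
def cell_bounds(row_total, col_total, grand_total):
    lo = max(0, row_total + col_total - grand_total)
    hi = min(row_total, col_total)
    return lo, hi

# Margins from the table above: Unit1 = 9000, Men = 5000, N = 10000.
lo, hi = cell_bounds(9000, 5000, 10000)   # -> (4000, 5000)
```

The lower bound arises because at most 1,000 of Unit2's residents can be men, so at least 4,000 men must live in Unit1.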
Multivariate analysis with aggregate level data is difficult, as aggregation bias inevitably occurs with this type of data. (In a nutshell, I would describe aggregation bias as the fact that many different individual level data generating processes could result in the same aggregate level associations.) A series of articles in the American Sociological Review in the 1970's are some of my favorite references for the topic (Firebaugh, 1978; Hammond, 1973; Hannan & Burstein, 1974), although canonical sources on the topic may be (Fotheringham & Wong, 1991; Openshaw, 1984; Robinson, 1950). I do think that representing the potential bounds that data could take could be insightful, although you are really hamstrung by the limitations of aggregate data for conducting multivariate analysis. That doesn't stop anyone from doing it in the social sciences though (for better or for worse!)
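To make the ecological-inference setting concrete, here is a sketch of Goodman's classic ecological regression (the precursor King's method builds on), with synthetic data. It recovers the individual-level rates here only because the toy data satisfy, exactly, the strong assumption that the rates are constant across districts — the assumption whose failure is aggregation bias:

```python
# Goodman's ecological regression: if district-level outcome rates follow
#   y_i = b_men * p_i + b_women * (1 - p_i),
# where p_i is the share of men in district i, then least squares on the
# aggregate data recovers the individual-level rates b_men and b_women.
p = [0.3, 0.45, 0.5, 0.6, 0.8]          # share of men per district (made up)
b_men, b_women = 0.7, 0.2               # "true" individual-level rates
y = [b_men * pi + b_women * (1 - pi) for pi in p]

# Design matrix columns: x1 = p, x2 = 1 - p. Solve the 2x2 normal
# equations (X'X) b = X'y by hand.
s11 = sum(pi * pi for pi in p)
s12 = sum(pi * (1 - pi) for pi in p)
s22 = sum((1 - pi) ** 2 for pi in p)
t1 = sum(pi * yi for pi, yi in zip(p, y))
t2 = sum((1 - pi) * yi for pi, yi in zip(p, y))
det = s11 * s22 - s12 * s12
est_men = (s22 * t1 - s12 * t2) / det    # ~0.7 in this noiseless example
est_women = (s11 * t2 - s12 * t1) / det  # ~0.2
```

With real data the estimated "rates" can even fall outside [0, 1] — one symptom of aggregation bias that motivates the bounds-based thinking above.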
Note (as Charlie said in the comments) that King's "solution" has received a fair amount of criticism (Anselin & Cho, 2002; Freedman et al., 1998). These criticisms aren't per se about the mathematics of King's method, but more about the situations in which King's method still fails to account for aggregation bias (and I agree with both Freedman and Anselin that the situations in which data for the social sciences are still suspect are far more common than those that meet King's assumptions). This is partly the reason why I suggest just examining the bounds (there's nothing wrong with that); making inferences about individual level correlations from such data takes leaps of faith that are ultimately unjustified in most situations.
Citations
Anselin, L. & W.K.T. Cho (2002). Spatial effects and ecological inference. Political Analysis 10(3): 276-297.
Dobra A. & S.E. Fienberg (2000). Bounds for cell entries in contingency tables given marginal totals and decomposable graphs. Proceedings of the National Academy of Sciences 97(22): 11885-11892
Firebaugh, G. (1978). A rule for inferring individual relationships from aggregate data. American Sociological Review 43(4): 557-572
Fotheringham, A.S. & D.W. Wong (1991). The modifiable areal unit problem in multivariate statistical analysis. Environment and Planning A 23(7): 1025-1044
Freedman, D.A., S.P. Klein, M. Ostland, & M.R. Roberts (1998). Reviewed Works: A Solution to the Ecological Inference Problem by G. King. Journal of the American Statistical Association 93(444): 1518-1522. (PDF here)
Hammond, J.L. (1973) Two sources of error in ecological correlations. American Sociological Review 38(6): 764-777
Hannan, M.T. & L. Burstein (1974). Estimation from grouped observations. American Sociological Review 39(3): 374-392
King G. (1997). A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data. Princeton: Princeton University Press.
Lau O., R.T. Moore & M. Kellerman (2007). eiPack: R X C Ecological Inference and Higher-Dimension Data Management. R News 7(2): 43-47
Openshaw, S. (1984). The Modifiable Areal Unit Problem. Norwich: Geo Books. (PDF here)
Robinson, W.S. (1950). Ecological correlations and the behavior of individuals. American Sociological Review 15(3): 351-357. (PDF here)
How can I simulate census microdata for small areas using a 1% microdata sample at a large scale and aggregate statistics at the small area scale?
I am not sure a well-defined answer exists in the literature for this, given that a Google search gives basically three usable references on multivariate small area estimation. Pfeffermann (2002) discusses discrete response variables in section 4 of the paper, but these will be univariate models. Of course, with hierarchical Bayesian methods (Rao 2003, Ch. 10), you can do any sort of wonders, but if in the end you find yourself just replicating your priors (because you have so little data), this would be a terrible outcome of your simulation exercise. Besides, Rao only treats continuous variables.
I guess the biggest challenge will be the decomposition of the covariance matrix into the between- and within-small-area components. With a 1% sample, you will only have 3 observations from your SAE, so it might be hard to get a stable estimate of the within-component.
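The between/within decomposition being referred to is, in its simplest univariate form, the ANOVA identity. A minimal sketch with made-up data for two small areas:

```python
# ANOVA-style decomposition: the total sum of squares splits exactly into
# a between-area component (areas' means around the grand mean) and a
# within-area component (observations around their area's mean).
def decompose(groups):
    """groups: list of lists of observations, one list per small area."""
    all_obs = [x for g in groups for x in g]
    grand = sum(all_obs) / len(all_obs)
    ss_total = sum((x - grand) ** 2 for x in all_obs)
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return ss_total, ss_between, ss_within

# With only ~3 sampled observations per small area (as with a 1% sample),
# the within component rests on very few degrees of freedom.
tot, between, within = decompose([[1.0, 2.0, 3.0], [4.0, 6.0, 8.0]])
# tot == between + within (up to floating point): 34 = 24 + 10
```

The instability the answer warns about shows up here directly: each area's within-component is estimated from just two residual degrees of freedom.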
If I were in your shoes, I would try a multivariate extension of Pfeffermann's model with a multivariate random effect of the small area. You may indeed end up with a hierarchical Bayesian model for this, if nothing design-based works.
UPDATE (to address Andy's comment to this answer): the bootstrap methods for small area estimation (Lahiri 2003) specifically recreate a plausible population from the study. While the focus of the bootstrap exercise is to estimate the variances of the small area estimates, the procedures should be of interest and relevance to the posted problem.
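This is not Lahiri's procedure, but the basic idea — cloning weighted microdata records into a plausible synthetic population, then resampling to gauge uncertainty — can be sketched as follows (all numbers hypothetical):

```python
import random

# With a 1% sample, each sampled record stands in for ~100 people, so
# cloning each record by its sampling weight yields one plausible
# synthetic population; random resampling then gives bootstrap
# replicates whose spread reflects the estimator's uncertainty.
random.seed(1)

sample = [23, 31, 47, 52, 64]   # e.g., ages from the 1% microdata
weight = 100                    # each record represents ~100 people

synthetic_population = [x for x in sample for _ in range(weight)]

# Bootstrap replicates of a small-area mean:
replicates = []
for _ in range(200):
    resample = [random.choice(sample) for _ in sample]
    replicates.append(sum(resample) / len(resample))
spread = max(replicates) - min(replicates)  # crude uncertainty measure
```

The spread across replicates is large precisely because the sample behind each small area is tiny — the same point made above about the within-area component.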
What are some good references/advice for learning Emacs Speaks Statistics (with R)?
First you will need to learn basic text operating with emacs. Since emacs is very sophisticated, finding out how to simply select and copy text might be a challenge. So invest some time in finding out how to do that efficiently. Reading the manual might help. For Mac OS X use Aquamacs, it supports native shortcuts.
Working with ESS does not differ from working with R. The same rules for organizing code should apply. In my case every project has its own directory, which resides in a parent directory called R, which is in my home directory. (For Windows, I recommend pointing the emacs home directory to the directory where all your source resides.) So when I use ESS for working on a project I always do M-x R and select my project directory as the starting directory.
After starting the R process, I usually divide emacs into two windows (emacs terminology). Then on the left I have the source code, which I send to the R process on the right. The relevant shortcuts (these are emacs shortcuts) are C-x 3 for splitting windows vertically, C-x 1 for making the current buffer the only window and C-x 2 for splitting windows horizontally.
When sending code to R, I keep a distinction between functions and R statements. I do this by keeping all my functions in one file, usually called 10code.R. Then I can simply load this file using the load ESS file option (shortcut C-c C-l). The advantage of this approach is that it sources all the functions and produces nothing in the R buffer. If there is an error in your code then ESS shows a message in the minibuffer, and you can investigate it by pressing C-c `.
The other code is the R statements, which I try to keep self-explanatory: load data, clean data, fit the statistical model, inspect the results, produce the final results. The source code for these statements is the current status of the project. The intention is that after the project is finished, sourcing the files with this code reproduces the project (I also use git for tracking history). When working with this file, I usually work with one R statement at a time, which I send to the R process via the eval function, paragraph, statement command, whose shortcut is C-c C-c. This command sends the paragraph, i.e. the text delimited by new lines, to the R process. This is handy, since you can group R statements into tasks and send a whole task to the R process. It also does not require selecting text, which is very convenient. The shortcut C-c C-c has the advantage that it moves the cursor to the R window, so you can immediately inspect the results of the sent R statement.
So my basic workflow is moving a lot between windows and buffers. To facilitate this I use the following shortcuts in my .emacs file:
(define-key global-map [f1] 'Control-X-prefix)
(define-key global-map [f3] 'find-file)
(define-key global-map [f2] 'save-buffer)
(define-key global-map [f8] 'kill-buffer)
(define-key global-map [f5] 'switch-to-buffer)
(define-key global-map [f6] 'other-window)
(define-key global-map [f9] 'ess-load-file)
I rarely use f1, but all the others very frequently. Other specific ESS settings I use are the following:
(setq comint-input-ring-size 1000)
(setq ess-indent-level 4)
(setq ess-arg-function-offset 4)
(setq ess-else-offset 4)
This tells ESS to make the tab 4 characters wide (the default is 2), which is my personal preference, and increases the number of issued commands ESS saves in its history.
For working with R process directly I found the following shortcuts very useful:
(add-hook 'inferior-ess-mode-hook
'(lambda nil
(define-key inferior-ess-mode-map [\C-up] 'comint-previous-matching-input-from-input)
(define-key inferior-ess-mode-map [\C-down] 'comint-next-matching-input-from-input)
(define-key inferior-ess-mode-map [\C-x \t] 'comint-dynamic-complete-filename)
)
)
This recalls the R statement from your R statement history, but it tries to match it with the one which is already on your line. So for example typing pl in the R process and pressing \C-up (that's control and the up arrow) will cycle through all the statements which start with pl, recalling for example all the plot(... commands.
The final setting I use with ESS is the following:
(setq ess-ask-about-transfile t)
This way ESS always asks where to save the text in the buffer with the R process. I usually number these files according to date, so I always have another way to track what exactly I was doing. The only caveat of this option is that for some reason ESS sets the R buffer to read-only after loading R. The shortcut for making the buffer writable is C-x C-q.
So these are the settings I use for working with ESS. I have been happy with them and haven't felt the need to add anything for a few years. When introducing ESS to first-time users I usually give this overview.
I will end with the shortcut which for me is the most used when working with Emacs, and with ESS in particular: C-g, which quits the command in the minibuffer. Through all the years I have worked with Emacs and ESS I still manage to invoke some Emacs command which I did not want; C-g is very helpful in these situations.
What are some good references/advice for learning Emacs Speaks Statistics (with R)?
One tip I found especially useful was to use cua-mode; it makes emacs share some of the most common keyboard shortcuts (like save, cut, copy, etc) with modern programs. I also found rectangle mode much easier to use in cua-mode.
28,383 | Is it appropriate to use the term "bits" to discuss a log-base-2 likelihood ratio? | I think it's perfectly well justified. (In fact, I've used this convention in papers I've published; or you can call them "nats" if you prefer to stick with logarithms of base $e$.)
The justification runs as follows: the log-likelihood of the fitted model can be viewed as a Monte Carlo estimate of the KL divergence between the "true" (unknown) data distribution and the distribution implied by the fitted model. Let $P(x)$ denote the "true" distribution of the data, and let $P_\theta(x)$ denote the distribution (i.e., the likelihood $P(x|\theta))$ provided by a model.
Maximum likelihood fitting involves maximizing
$L(\theta) = \frac{1}{N}\sum_i \log P_\theta(x_i) \approx \int P(x) \log P_\theta(x) dx$
The left hand side (the log-likelihood, scaled by the number of datapoints $N$) is a Monte Carlo estimate of the right hand side, since the datapoints $x_i$ were drawn from $P(x)$. So we can rewrite
$L(\theta) \approx \int P(x) \log P_\theta(x) dx = \int P(x) \log \frac{P_\theta(x)}{P(x)} dx + \int P(x) \log P(x)dx$
$ = -D_{KL}(P,P_\theta) - H(x)$
So the log-likelihood normalized by the number of points is an estimate of the (negative) KL-divergence between $P$ and $P_\theta$ minus the (true) entropy of $x$. The KL divergence has units of "bits" (if we use log 2), and can be understood as the number of "extra bits" you would need to encode data from $P(x)$ using a codebook based on $P_\theta(x)$. (If $P = P_\theta$, you don't need any extra bits, so KL divergence is zero).
Now: when you take the log-likelihood ratio of two different models, it should be obvious that you end up with:
$\frac{1}{N}\sum_i \log \frac{P_{\theta_1}(x_i)}{P_{\theta_2}(x_i)} \approx D_{KL}(P,P_{\theta_2}) - D_{KL}(P,P_{\theta_1})$
The entropy $H(x)$ terms cancel. So the log-likelihood ratio (normalized by $N$) is an estimate of the difference between the KL divergence from the true distribution to model 2 and the KL divergence from the true distribution to model 1. It's therefore an estimate of the number of "extra bits" you need to code your data with model 2 compared to coding it with model 1. So I think the "bits" units are perfectly well justified.
One important caveat: when using this statistic for model-comparison, you should really use LLR computed on cross-validated data. The log-likelihood of training data is generally artificially high (favoring the model with more parameters) due to overfitting. That is, the model assigns this data higher probability than it would if it were fit to an infinite set of training data and then evaluated at the points $x_i \dots x_N$ in your dataset. So the procedure many people follow is to:
train models 1 and 2 using training data;
evaluate the log-likelihood ratio on a test dataset and report the resulting number in units of bits as a measure of the improved "code" provided by model 1 compared to model 2.
The LLR evaluated on training data would generally give an unfair advantage to the model with more parameters / degrees of freedom.
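To see the units claim concretely, here is a small Monte Carlo sketch (not from the original answer; the two Gaussian models are hypothetical stand-ins). The per-datapoint log base-2 likelihood ratio of a well-specified model against a misspecified one matches the analytic KL-divergence gap expressed in bits:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=200_000)   # data drawn from the "true" P = N(0, 1)

def log2_pdf_normal(x, mu, sigma):
    # log base-2 density of N(mu, sigma^2)
    return (-0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma * np.sqrt(2 * np.pi))) / np.log(2)

# Model 1 is the truth; model 2 is shifted by 0.5.
llr_bits = np.mean(log2_pdf_normal(x, 0.0, 1.0) - log2_pdf_normal(x, 0.5, 1.0))

# Analytic: D_KL(P, P_2) - D_KL(P, P_1) = (0.5^2 / 2) nats = 0.125 / ln 2 bits
kl_gap_bits = 0.125 / np.log(2)
print(llr_bits, kl_gap_bits)   # both about 0.18 bits per datapoint
```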
28,384 | Is there an anderson-darling goodness of fit test for two datasets? | Package adk was replaced by package kSamples:
Try:
install.packages("kSamples")
library(kSamples)
ad.test(runif(50), rnorm(30))
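For completeness, SciPy now ships the same k-sample Anderson-Darling test, so the comparison can also be run outside R (a sketch; note that scipy.stats.anderson_ksamp caps the reported significance level to the [0.001, 0.25] range):

```python
import numpy as np
from scipy.stats import anderson_ksamp

rng = np.random.default_rng(0)
# Same style of comparison as the R snippet: uniform vs. normal samples
res = anderson_ksamp([rng.uniform(size=50), rng.normal(size=30)])
print(res.statistic, res.significance_level)
```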
28,385 | Is there an anderson-darling goodness of fit test for two datasets? | The adk package for R does this. http://cran.r-project.org/web/packages/adk/
install.packages("adk")
library(adk)
adk.test(runif(50), rnorm(30))
28,386 | In convergence in probability or a.s. convergence w.r.t which measure is the probability? | The probability measure is the same in both cases, but the question of interest is different between the two. In both cases we have a (countably) infinite sequence of random variables defined on a single probability space $(\Omega,\mathscr{F},P)$. We take $\Omega$, $\mathscr{F}$ and $P$ to be the infinite products in each case (care is needed here that we are talking about only probability measures, because we can run into trouble otherwise).
For the SLLN, what we care about is the probability (or measure) of the set of all $\omega = (\omega_{1},\omega_{2},\ldots)$ where the scaled partial sums DO NOT converge. This set has measure zero (w.r.t. $P$), says the SLLN.
For the WLLN, what we care about is the behavior of the sequence of projection measures $\left(P_{n}\right)_{n=1}^{\infty}$, where for each $n$, $P_{n}$ is the projection of $P$ onto the finite measurable space $\Omega_{n} = \prod_{i=1}^{n} \Omega_{i}$. The WLLN says that the (projected) probability of the cylinders (that is, events involving $X_{1},\ldots,X_{n}$), on which the scaled partial sums do not converge, goes to zero in the limit as $n$ goes to infinity.
In the WLLN we are calculating probabilities which appear removed from the infinite product space, but it never actually went away - it was there all along. All we were doing was projecting onto the subspace from 1 to $n$ and then taking the limit afterward. That such a thing is possible, that it is possible to construct a probability measure on an infinite product space such that the projections for each $n$ match what we think they should, and do what they're supposed to do, is one of the consequences of Kolmogorov's Extension Theorem.
If you'd like to read more, I've found the most detailed discussion of subtle points like these in "Probability and Measure Theory" by Ash and Doleans-Dade. There are a couple of others, but Ash/D-D is my favorite.
28,387 | How to deal with omitted dummy variables in a fixed effect model? | Fixed effect panel regression models involve subtracting group means from the regressors. This means that you can only include time-varying regressors in the model. Since firms usually belong to one industry, the dummy variable for industry does not vary with time. Hence it is excluded from your model by Stata: after subtracting the group mean from such a variable, you will get that it is equal to zero.
Note that the Hausman test is a bit tricky, so you cannot base your model selection (fixed vs. random effects) solely on it. Wooldridge explains it very nicely (in my opinion) in his book.
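A quick way to convince yourself of this: apply the within (demeaning) transformation to a toy panel and watch the time-invariant industry dummy turn into a column of zeros (illustrative data, not from the answer):

```python
import numpy as np

# Toy panel: 3 firms observed over 4 years
firm = np.repeat([0, 1, 2], 4)
industry = np.repeat([1.0, 0.0, 1.0], 4)      # constant within each firm
sales = np.arange(12, dtype=float)            # a time-varying regressor

def within(x, groups):
    """Subtract each group's mean (the fixed-effects 'within' transform)."""
    out = x.astype(float).copy()
    for g in np.unique(groups):
        out[groups == g] -= x[groups == g].mean()
    return out

print(within(industry, firm))   # all zeros: this regressor gets dropped
print(within(sales, firm))      # still varies within firms
```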
28,388 | Cycling in k-means algorithm | This paper appears to prove convergence in a finite number of steps.
28,389 | Cycling in k-means algorithm | The $k$-means objective function strictly decreases with each change of assignment, which automatically implies convergence without cycling. Moreover, the partitions produced in each step of $k$-means satisfy a "Voronoi property" in that each point is always assigned to its nearest center. This implies an upper bound on the total number of possible partitions, which yields a finite upper bound on the termination time for $k$-means.
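A minimal Lloyd's-algorithm sketch (hypothetical data) makes the monotonicity easy to check numerically: the within-cluster sum of squares never increases from one iteration to the next.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])

def lloyd_objectives(X, k, iters=15):
    centers = X[rng.choice(len(X), size=k, replace=False)]
    objs = []
    for _ in range(iters):
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        objs.append(d2[np.arange(len(X)), labels].sum())  # k-means objective
        # Update step; keep the old center if a cluster ever empties out
        centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                            else centers[j] for j in range(k)])
    return objs

objs = lloyd_objectives(X, k=2)
print(objs[0], objs[-1])   # non-increasing sequence
```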
28,390 | Cycling in k-means algorithm | In finite precision, cycling may appear.
Cycling is frequent in single precision, exceptional in double precision.
When close to a local minimum, the objective function may sometimes slightly increase due to round-off errors. This is often innocuous, as the objective function decreases again and eventually reaches a local minimum. But occasionally the algorithm steps on a previously visited assignment and starts cycling.
It is easy and safe to watch for cycles in real-world stopping criteria implementations.
28,391 | Generate random multivariate values from empirical data | (1) It's the CDF you'll need to generate your simulated time-series. To build it, first histogram your price changes/returns. Take a cumulative sum of bin population starting with your left-most populated bin. Normalize your new function by dividing by the total bin population. What you are left with is a CDF. Here is some numpy code that does the trick:
import numpy as np

# Make a histogram of price changes (np.histogram's old `normed` argument
# has been removed in modern NumPy; raw counts are all we need here)
counts, bin_edges = np.histogram(deltas, bins=numbins)
# Make a CDF of the price changes: cumulative counts, normalized to end at 1
cdf = np.cumsum(counts)
ncdf = cdf / cdf[-1]
(2) To generate correlated picks, use a copula. See this answer to my previous question on generating correlated time series.
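Once the CDF is built, drawing simulated price changes is inverse-transform sampling: draw uniforms and invert the CDF by interpolating cumulative probabilities against the bin edges. A sketch with hypothetical data (deltas and numbins stand in for the variables above):

```python
import numpy as np

rng = np.random.default_rng(0)
deltas = rng.standard_t(5, size=20_000)            # hypothetical price changes
numbins = 200

counts, bin_edges = np.histogram(deltas, bins=numbins)
ncdf = np.cumsum(counts) / counts.sum()            # empirical CDF at right bin edges

# Inverse-transform sampling: u ~ U(0,1), then invert the CDF
u = rng.uniform(size=10_000)
samples = np.interp(u, np.concatenate(([0.0], ncdf)), bin_edges)

print(np.quantile(deltas, [0.1, 0.5, 0.9]))
print(np.quantile(samples, [0.1, 0.5, 0.9]))       # should be close
```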
28,392 | Generate random multivariate values from empirical data | Regarding the first question, you might consider resampling your data. There would be a problem in case your data were correlated over time (rather than contemporaneously correlated), in which case you would need something like a block bootstrap. But for returns data, a simple bootstrap is probably fine.
I guess the answer to the second question is very much dependent on the target distribution.
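Both variants are a few lines of NumPy (illustrative data; the row-resampling version keeps the cross-asset correlation, while the block version additionally keeps short-range serial dependence):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0, 0.01, size=(1000, 3))    # T x n_assets, hypothetical

# Simple bootstrap: resample whole dates (rows), preserving
# contemporaneous correlation across columns
idx = rng.integers(0, len(returns), size=len(returns))
boot = returns[idx]

# Moving-block bootstrap: paste together random contiguous blocks
block = 20
starts = rng.integers(0, len(returns) - block + 1, size=len(returns) // block)
boot_blocks = np.concatenate([returns[s:s + block] for s in starts])

print(boot.shape, boot_blocks.shape)   # (1000, 3) (1000, 3)
```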
28,393 | Generate random multivariate values from empirical data | The answer to the first question is that you build a model. In your case this means choosing a distribution and estimating its parameters.
When you have the distribution you can sample from it using Gibbs or Metropolis algorithms.
On a side note, do you really need to sample from this distribution? Usually the interest is in some characteristic of the distribution. You can estimate it using the empirical distribution via the bootstrap, or again build a model for this characteristic.
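For the sampling step, here is a random-walk Metropolis sketch (with a standard normal standing in for whatever model you fit; the step size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    # Unnormalized log-density of the fitted model; N(0, 1) as a stand-in
    return -0.5 * x * x

def metropolis(n, step=1.0, x0=0.0):
    xs = np.empty(n)
    x = x0
    for i in range(n):
        prop = x + step * rng.normal()
        # Accept with probability min(1, target(prop) / target(x))
        if np.log(rng.uniform()) < log_target(prop) - log_target(x):
            x = prop
        xs[i] = x
    return xs

draws = metropolis(20_000)
print(draws.mean(), draws.std())   # close to 0 and 1
```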
28,394 | Generate random multivariate values from empirical data | I'm with @mpiktas in that I also think you need a model.
I think the standard method here would be to estimate a copula to capture the dependence structure between the different assets and use e.g. skew-normal- or t-distributed marginal distributions for the different assets. That gives you a very general model class (more general that assuming e.g. a multivariate t-distribution) that is pretty much the standard for your kind of task (e.g. I think Basel II requires financial institutions to use copula-methods to estimate their VaR). There's a copula package for R.
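A minimal Gaussian-copula sketch (pure NumPy, with math.erf supplying the normal CDF; the marginals and the correlation parameter are invented for illustration): transform correlated normals to uniforms, then push the uniforms through each asset's empirical quantile function.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# Hypothetical historical returns with different marginal shapes
a_hist = rng.standard_t(4, size=20_000) * 0.01     # heavy-tailed asset
b_hist = rng.normal(0.0, 0.02, size=20_000)        # normal asset

rho = 0.7                                          # assumed copula correlation
L = np.linalg.cholesky(np.array([[1.0, rho], [rho, 1.0]]))

z = L @ rng.normal(size=(2, 10_000))               # correlated N(0,1) pairs
phi = np.vectorize(lambda t: 0.5 * (1.0 + erf(t / sqrt(2.0))))
u = phi(z)                                         # uniforms with Gaussian dependence

# Push the uniforms through each empirical quantile function
sim_a = np.quantile(a_hist, u[0])
sim_b = np.quantile(b_hist, u[1])
print(np.corrcoef(sim_a, sim_b)[0, 1])             # dependence is preserved
```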
28,395 | Generate random multivariate values from empirical data | A possible answer to the first part of the question, using R's ecdf() function:
# simulate some data...
N <- 1000
fdata <- c( rnorm(N %/% 2, mean=14), rnorm(N %/% 2, mean=35))
# here's the Empirical CDF of that data...
E1 <- ecdf(fdata)
plot(E1)
# now simulate 1000 numbers from this ECDF...
ns <- 1000
ans <- as.numeric(quantile(E1, runif(ns)))
hist(ans, probability=TRUE, nclass=113, col='wheat2')
28,396 | What is the meaning of 'Marginal mean'? | Perhaps, the term originates from how the data is represented in a contingency table. See this example from the wiki.
In the above example, we would speak of marginal totals for gender and handedness when referring to the last column and the bottom row respectively. If you see the wiktionary the first definition of marginal is:
of, relating to, or located at a margin or an edge
Since the totals (and means, if means are reported) are at the edge of the table, they are referred to as marginal totals (and marginal means if the edges have means).
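In code, the marginal totals are just the row and column sums of the table (counts made up in the style of the wiki's handedness-by-gender example):

```python
import numpy as np

#                 right-handed  left-handed
table = np.array([[43,          9],    # male
                  [44,          4]])   # female

gender_margins = table.sum(axis=1)       # totals at the table's right edge
handedness_margins = table.sum(axis=0)   # totals along the bottom row
grand_total = table.sum()
print(gender_margins, handedness_margins, grand_total)
```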
28,397 | What is the meaning of 'Marginal mean'? | I'd assume it means the sample analogue of the marginal expectation $\operatorname{E}(X)$, as opposed to the sample analogue of a conditional expectation $\operatorname{E}(X \mid Y)$, where $Y$ could be anything.
28,398 | What is the meaning of 'Marginal mean'? | Can't add it as a comment, so here it comes as an answer:
As user28 already said, the marginal mean refers to the mean of a factor level, which - in a cross-table - is at the table's margins, hence the name marginal mean.
Why this term is not entirely redundant? "Mean" could mean just any mean, e.g. the mean of all right handed men in the example of user28. By saying "mean of factor A" you should mean the mean of all levels of factor A, but you could mean (or be misunderstood as meaning) the mean of one level of factor A. Using the term "marginal mean of factor A" makes it unambiguously clear what you mean. | What is the meaning of 'Marginal mean'? | Can't add it as a comment, so here it comes as an answer:
28,399 | Chi-square test for equality of distributions: how many zeroes does it tolerate? | Perfectly feasible these days to do Fisher's 'exact' test on such a table. I just got p = 0.087 using Stata (tabi 2 1 \ 2 3 \ .... , exact. Execution took 0.19 seconds).
EDIT after chl's comment below (tried adding as a comment but can't format):
It works in R 2.12.0 for me, though I had to increase the 'workspace' option over its default value of 200000:
> fisher.test(x)
Error in fisher.test(x) : FEXACT error 7.
LDSTP is too small for this problem.
Try increasing the size of the workspace.
> system.time(result<-fisher.test(x, workspace = 400000))
user system elapsed
0.11 0.00 0.11
> result$p.value
[1] 0.0866764
(The execution time is slightly quicker than in Stata, but that's of dubious relevance given the time taken to work out the meaning of the error message, which uses 'workspace' to mean something different from R's usual meaning despite the fact that fisher.test is part of R's core 'stats' package.)
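For readers without Stata or R, the mechanics of Fisher's exact test can be sketched by hand in Python. The helper below is illustrative only and handles just the 2x2 case (the question's table is larger, which is why the specialised network algorithms in Stata and R's fisher.test are needed there); the counts in the example are made up:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher exact p-value for the 2x2 table [[a, b], [c, d]]:
    sum the hypergeometric probabilities of every table with the same
    margins that is no more likely than the observed one."""
    n = a + b + c + d
    r1 = a + b          # first row total
    c1 = a + c          # first column total
    denom = comb(n, c1)

    def prob(x):
        # P(first cell = x) under fixed margins (hypergeometric).
        return comb(r1, x) * comb(n - r1, c1 - x) / denom

    p_obs = prob(a)
    lo, hi = max(0, c1 - (n - r1)), min(r1, c1)
    # Tiny tolerance guards against float round-off in the comparison.
    return sum(prob(x) for x in range(lo, hi + 1)
               if prob(x) <= p_obs * (1 + 1e-9))

# Made-up 2x2 counts, in the spirit of the question's sparse table:
print(fisher_exact_2x2(2, 1, 2, 3))
```

For the classic "lady tasting tea" table [[3, 1], [1, 3]] this gives 34/70 ≈ 0.486, matching the textbook value, which is a handy sanity check on the enumeration.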
28,400 | Chi-square test for equality of distributions: how many zeroes does it tolerate? | The usual guidelines are that the expected counts should be greater than 5, but it can be somewhat relaxed as discussed in the following article:
Campbell, I. (2007). Chi-squared and Fisher–Irwin tests of two-by-two tables with small sample recommendations. Statistics in Medicine, 26(19): 3661–3675.
See also Ian Campbell's homepage.
Note that in R, there's always the possibility to compute the $p$-value by a Monte Carlo approach (chisq.test(..., simulate.p.value = TRUE)), instead of relying on the asymptotic distribution.
In your case, it appears that about 80% of the expected counts are below 5, and 40% are below 1. Would it make sense to aggregate some of the observed phenotypes?
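The Monte Carlo idea behind chisq.test(..., simulate.p.value = TRUE) can also be sketched outside R. The Python function below is a simplified version of my own (all names and the example counts are made up): it resamples tables as a single multinomial with the grand total fixed and cell probabilities from the product of the observed margins, whereas R conditions on both margins, so the two will differ slightly:

```python
import numpy as np

def mc_chi2_pvalue(obs, n_sim=2000, seed=0):
    """Monte Carlo p-value for independence in a two-way table.

    Simplified scheme: simulated tables are drawn as one multinomial with
    the grand total fixed and cell probabilities given by the product of
    the observed margins. (R's chisq.test(..., simulate.p.value = TRUE)
    conditions on both margins instead.)
    """
    rng = np.random.default_rng(seed)
    obs = np.asarray(obs, dtype=float)
    n = obs.sum()
    # Expected counts under independence; reused for every simulated table.
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0)) / n

    def stat(table):
        # Pearson chi-squared statistic against the fixed expected counts.
        return ((table - exp) ** 2 / exp).sum()

    sims = rng.multinomial(int(n), (exp / n).ravel(), size=n_sim)
    sims = sims.reshape(n_sim, *obs.shape)
    hits = sum(stat(t) >= stat(obs) for t in sims)
    return (hits + 1) / (n_sim + 1)  # +1 keeps the estimate off exact zero

# Sparse made-up counts, similar in spirit to the question's table:
table = [[2, 1], [2, 3], [1, 4], [0, 3], [1, 0]]
print(mc_chi2_pvalue(table))
```

Because the reference distribution is built by simulation rather than the chi-squared approximation, small expected counts are not a problem here, which is exactly the point of the Monte Carlo option.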