Is it possible to have an estimator that is unbiased and bounded?
I will present conditions under which an unbiased estimator remains unbiased even after it is bounded, though I am not sure that they amount to something interesting or useful. Let $\hat \theta$ be an estimator of the unknown parameter $\theta$ of a continuous distribution, with $E(\hat \theta) =\theta$. Assume that, for some reason, under repeated sampling we want the estimator to produce estimates that range in $[\delta_l,\delta_u]$. We assume that $\theta \in [\delta_l,\delta_u]$, so when convenient we can write the interval as $[\theta-a,\theta+b]$, with $\{a,b\}$ positive but of course unknown numbers. Then the constrained estimator is $$\hat \theta_c = \begin{cases} \delta_l & \hat \theta <\delta_l\\ \hat \theta & \delta_l \leq \hat \theta \leq \delta_u \\ \delta_u & \delta_u < \hat \theta \end{cases}$$ and its expected value is $$\begin{align} E(\hat \theta_c) &= \delta_l\cdot P[\hat \theta \leq \delta_l] \\&+ E(\hat \theta \mid \delta_l \leq\hat \theta \leq\delta_u )\cdot P[\delta_l \leq\hat \theta \leq \delta_u] \\ &+\delta_u\cdot P[\hat \theta > \delta_u]\end{align}$$ Define now the indicator functions $$I_l = I(\hat \theta \leq \delta_l),\;\; I_m = I(\delta_l\leq \hat \theta \leq \delta_u),\;\; I_u = I(\hat \theta > \delta_u)$$ and note that $$I_l + I_u = 1- I_m \tag{1}$$ Using these indicator functions and integrals, we can write the expected value of the constrained estimator as ($f(\hat \theta)$ being the density function of $\hat \theta$) $$E(\hat \theta_c) = \int_{-\infty}^{\infty}\delta_lf(\hat \theta)I_ld\hat \theta + \int_{-\infty}^{\infty}\hat \theta f(\hat \theta)I_md\hat \theta + \int_{-\infty}^{\infty} \delta_uf(\hat \theta)I_ud\hat \theta$$ $$=\int_{-\infty}^{\infty}f(\hat \theta)\Big[\delta_lI_l + \hat \theta I_m + \delta_uI_u\Big]d\hat \theta$$ $$=E\Big[\delta_lI_l + \hat \theta I_m + \delta_uI_u\Big] \tag{2}$$ Decomposing the upper and lower bound, we have $$E(\hat \theta_c) = E\Big[(\theta-a)I_l + \hat \theta I_m + (\theta+b)I_u\Big]$$ $$=E\Big[\theta\cdot(I_l+I_u) + \hat \theta I_m\Big] -aE(I_l)+bE(I_u) $$ and, using $(1)$, $$ = E\Big[\theta\cdot(1-I_m) + \hat \theta I_m\Big] -aE(I_l)+bE(I_u) $$ $$\Rightarrow E(\hat \theta_c) = \theta +E\big[(\hat \theta -\theta)I_m\big]-aE(I_l)+bE(I_u) \tag {3}$$ Now, since $E(\hat \theta) = \theta$, we have $$E\big[(\hat \theta -\theta)I_m\big] = E\big(\hat \theta I_m\big) - E(\hat \theta)E(I_m)$$ Moreover, provided that conditioning on the middle interval leaves the mean of the estimator unchanged, i.e. $E\big(\hat \theta \mid I_m=1\big) = E\big(\hat \theta\big)$, we have $$E\big(\hat \theta I_m\big) = E\big(\hat \theta \mid I_m=1\big)E(I_m) = E\big(\hat \theta \big)E(I_m)$$ Hence $E\big[(\hat \theta -\theta)I_m\big] =0$, and so $$\begin{align} E(\hat \theta_c) &= \theta -aE(I_l)+bE(I_u) \\ &= \theta -aP(\hat \theta \leq \delta_l)+bP(\hat \theta > \delta_u)\end{align}\tag {4}$$ or, alternatively, $$ E(\hat \theta_c) = \theta -(\theta-\delta_l)P(\hat \theta \leq \delta_l)+(\delta_u-\theta)P(\hat \theta > \delta_u)\tag {4a}$$ Therefore, from $(4)$ we see that for the constrained estimator to also be unbiased, we must have $$aP(\hat \theta \leq \delta_l) = bP(\hat \theta > \delta_u) \tag {5}$$ What is the problem with condition $(5)$? It involves the unknown numbers $\{a,b\}$, so in practice we will not be able to determine an interval that bounds the estimator while keeping it unbiased. But suppose this is some controlled simulation experiment in which we want to investigate other properties of estimators, given unbiasedness. Then we can "neutralize" $a$ and $b$ by setting $a=b$, which essentially creates a symmetric interval around the value of $\theta$... In this case, to achieve unbiasedness we must moreover have $P(\hat \theta \leq \delta_l) = P(\hat \theta > \delta_u)$, i.e. the probability mass of the unconstrained estimator must be equal to the left and to the right of the (symmetric around $\theta$) interval...
...and so we learn that (as sufficient conditions) if the distribution of the unconstrained estimator is symmetric around the true value, then the estimator constrained to an interval symmetric around the true value will also be unbiased... but this is almost trivially evident or intuitive, isn't it? It becomes a little more interesting if we realize that the necessary and sufficient condition (given a symmetric interval) a) does not require a symmetric distribution, only equal probability mass "in the tails" (and this in turn does not imply that the distribution of the mass in each tail has to be identical), and b) permits the estimator's density inside the interval to have any non-symmetric shape consistent with maintaining unbiasedness -- it will still make the constrained estimator unbiased. APPLICATION: The OP's case. Our estimator is $\hat \theta = \theta + w,\;\; w \sim N(0,1)$, so $\hat \theta \sim N(\theta,1)$. Then, using $(4)$ while writing $a,b$ in terms of $\theta, \delta$, we have, for the bounding interval $[0,1]$, $$E[\hat \theta_c] = \theta -\theta P(\hat \theta \leq 0) +(1-\theta)P(\hat \theta > 1)$$ The distribution is symmetric around $\theta$. Transforming ($\Phi(\cdot)$ is the standard normal CDF), $$E[\hat \theta_c] = \theta -\theta P(\hat \theta-\theta \leq -\theta) +(1-\theta)P(\hat \theta -\theta > 1-\theta)$$ $$=\theta -\theta \Phi(-\theta) +(1-\theta)[1-\Phi(1-\theta)]$$ One can verify that the additional terms cancel out only if $\theta =1/2$, namely, only if the bounding interval is also symmetric around $\theta$.
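The symmetry conclusion can also be checked numerically. A minimal Monte Carlo sketch (in Python, my choice here; the original answer uses none) clips $\hat\theta = \theta + w$, $w\sim N(0,1)$, to $[0,1]$ and estimates the bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

def bias_of_clipped(theta):
    # theta_hat = theta + w, w ~ N(0,1), clipped to the bounding interval [0, 1]
    theta_hat = theta + rng.standard_normal(n)
    theta_c = np.clip(theta_hat, 0.0, 1.0)
    return theta_c.mean() - theta

# Unbiased (up to Monte Carlo error) only when the interval is symmetric around theta
print(bias_of_clipped(0.5))  # close to 0
print(bias_of_clipped(0.2))  # clearly positive: the left clipping dominates
```

For $\theta=1/2$ the estimated bias is zero up to simulation noise; for $\theta=0.2$ the clipping at $0$ pulls the mean up and a clearly positive bias appears, consistent with the conclusion above.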
Dickey-Fuller augmented tests: how to choose lags?
Use ac and pac in Stata to assess the possible lags. If instead you are using an ARMA model, it is standard to estimate the candidate models over a grid of orders, from p=0, q=1 and so on up to p=3 and q=3, and then obtain the AIC and BIC; the model with the lowest AIC or BIC is chosen. The lags chosen by these criteria may differ, and you also have to make sure that the residuals of the chosen models are white noise at those lags.
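The "fit candidate orders, pick the lowest AIC" step can be sketched outside Stata. Below is a hypothetical numpy-only version that restricts attention to pure AR(p) models (so it stays self-contained; a full ARMA grid would need an ARMA estimator such as Stata's arima or statsmodels):

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulate an AR(1) series y_t = 0.7 y_{t-1} + e_t as illustrative data
n = 500
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.7 * y[t - 1] + rng.standard_normal()

def ar_aic(y, p):
    # Fit AR(p) with intercept by least squares; return a Gaussian AIC
    n = len(y)
    Y = y[p:]
    X = np.column_stack([np.ones(n - p)] + [y[p - j : n - j] for j in range(1, p + 1)])
    beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
    rss = float(np.sum((Y - X @ beta) ** 2))
    return (n - p) * np.log(rss / (n - p)) + 2 * (p + 1)

aics = {p: ar_aic(y, p) for p in range(4)}   # candidate orders p = 0..3
best_p = min(aics, key=aics.get)
print(best_p)
```

As in the answer, the residuals of the selected model should still be checked for remaining autocorrelation before trusting the chosen lag.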
Bootstrapping power estimates for a bootstrap test
I don't think using the bootstrap to artificially increase your sample size would be a good idea. Any violation of the assumption of independence of the observations dramatically increases the odds of a spurious result (which would be the case when n1 is significantly greater than n0). I would estimate the confidence interval of the effect size (the strength of the difference / relationship you are trying to use) and assume the lower bound is the true effect. Then it would be easy to estimate the power. [Note: I assume you already have a significant result with n0 observations. Otherwise your data are compatible* with the null hypothesis and there is no way to obtain a conservative estimate of the power, unless you are using the wrong test. Power analysis assumes knowledge of the "real" effect size, so there is no way to use it to "bypass" inferential statistics.] *likely to be observed if the null hypothesis is true
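The "plug the lower confidence bound into a power calculation" step can be sketched with a normal-approximation power formula for a two-sample comparison (Python stdlib only; the effect sizes below are illustrative, not from the question):

```python
import math
from statistics import NormalDist

def approx_power(d, n_per_group, alpha=0.05):
    # Normal approximation to the power of a two-sided two-sample test
    # for standardized effect size d with n_per_group observations per arm.
    z = NormalDist()
    z_crit = z.inv_cdf(1 - alpha / 2)
    ncp = d * math.sqrt(n_per_group / 2)   # noncentrality under the alternative
    return 1 - z.cdf(z_crit - ncp)

# Conservative power: plug in the lower CI bound of the effect, not the point estimate
print(round(approx_power(0.5, 64), 3))   # roughly 0.80, the classic benchmark
print(round(approx_power(0.3, 64), 3))   # power if the true effect is only the lower bound
```

Using the lower bound rather than the observed effect guards against the winner's curse: the point estimate from a just-significant study typically overstates the true effect.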
Bootstrapping power estimates for a bootstrap test
As you bootstrap, you assume that the new bootstrapped distribution is equivalent to the original distribution. If $n_1>n_0$ then you are forced to draw repeated values from the $n_0$ observations, which leads to several problems. First of all, the bootstrapped sample will only contain de facto $n_0$ distinct values, the rest being copies, and these copies will only lead to a stronger weighting of those -- randomly drawn -- values when doing e.g. a correlation analysis. Then there is the issue of how your tests might require non-correlated data, as noted by @nic. If you want to really attain $n_1$ genuinely independent values, I think you have to analyse the $n_0$ values, understand which distribution they are likely to follow, and then draw $n_1$ values from this assumed distribution. It is certainly not as generic, but, IMO, it is more transparent and you will not suffer from the issues mentioned above.
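The suggested alternative (fit a distribution to the $n_0$ values, then draw $n_1$ fresh values from it) is a parametric bootstrap. A minimal sketch, assuming a normal family fits the data:

```python
import numpy as np

rng = np.random.default_rng(2)

x0 = rng.normal(loc=10.0, scale=2.0, size=40)   # the n0 observed values (made up here)

# Fit the assumed parametric family (here: normal) to the observed data...
mu_hat, sigma_hat = x0.mean(), x0.std(ddof=1)

# ...then draw n1 values from the fitted distribution instead of resampling copies
n1 = 200
x1 = rng.normal(loc=mu_hat, scale=sigma_hat, size=n1)
print(len(x1), round(x1.mean(), 2))
```

Unlike resampling with replacement, every draw here is a distinct value from the fitted distribution, at the cost of committing to a distributional assumption that should itself be checked against the $n_0$ data.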
Confusion matrices with percentages rather than number of instances?
A confusion matrix in percents would be appropriate if the distribution between your classes is flat (either naturally, or intentionally sampled that way). If this is not the case, such a confusion matrix can lead to major confusion. It is useful to have both: number of instances overall (to see the skews) and percents for data sampled from a flat distribution.
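Reporting both views is easy to automate; a small sketch with a hypothetical, deliberately skewed count matrix:

```python
import numpy as np

# Hypothetical confusion matrix: rows = true class, columns = predicted class.
# Class 0 has 100 instances, class 1 only 20 -- a skew the percentages hide.
counts = np.array([[90, 10],
                   [ 5, 15]])

# Row-normalized percentages (per-class recall view); each row sums to 100
row_pct = 100 * counts / counts.sum(axis=1, keepdims=True)
print(counts.sum())   # overall number of instances reveals the skew
print(row_pct)
```

Here the percentage matrix looks reasonably balanced (90/10 vs 25/75), while the raw counts show that one class dominates the data five to one, which is exactly why both should be reported.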
Applying Recency In Logistic Regression
It doesn't necessarily have to be arbitrary; you could, for example, assume exponential discounting and look for the coefficient that has the best predictive value. That is, at time $t$ you might say that the weight, $w_{t-k} \propto p^k$ or equivalently $w_{t-k} \propto \exp{(-\alpha k)}$ (where $0<p<1$ and $\alpha>0$). In the above, $p$ or $\alpha$ are free parameters, but rather than choose them arbitrarily, you can compare their predictive performance across different values of $p$ (or equivalently, across different $\alpha$), perhaps via comparing sums of squares of one step ahead prediction errors, or whatever other criterion (loss function) you regard as most valuable/interesting/useful. Alternatively you could use a similar approach but where you apply some other form of discounting, such as hyperbolic discounting.
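The tuning of $p$ can be done exactly as described: grid over $p$, score each value by one-step-ahead squared prediction error. A sketch using the exponentially discounted mean of the past as the forecast (the series and the grid are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(3)
y = np.cumsum(rng.standard_normal(300)) * 0.1 + 5.0   # some hypothetical series

def one_step_sse(y, p):
    # Forecast y[t] by the exponentially discounted mean of y[0..t-1],
    # with weight w_{t-k} proportional to p**k (k = 0 is the most recent point).
    sse = 0.0
    for t in range(1, len(y)):
        k = np.arange(t)
        w = p ** k
        forecast = np.sum(w * y[t - 1 - k]) / np.sum(w)
        sse += (y[t] - forecast) ** 2
    return sse

grid = [0.5, 0.7, 0.9, 0.99]
best_p = min(grid, key=lambda p: one_step_sse(y, p))
print(best_p)
```

The same loop works for any other loss function or any other discounting scheme (e.g. hyperbolic weights $w_{t-k} \propto 1/(1+\alpha k)$) by swapping the weight formula.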
Imputing a missing variable based on common variables with another data set
In general, I would do something close to your approach I, with some minor tweaks. Assuming that you want eventually to estimate some population parameters on the imputed data, I would use multiple imputation to obtain correct confidence intervals and P-values. Rounding is generally not recommended, unless you use "adaptive rounding". Variable a2 can better be imputed by predictive mean matching, which always provides imputed values that are observed. Variable a1 can be imputed by normal imputation of the logged data. If you are an R user, this can be done very quickly with mice.
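The "predictive mean matching always provides imputed values that are observed" property is easy to see in a bare-bones sketch (single predictor, single imputation; the mice package does far more, including the proper multiple-imputation machinery):

```python
import numpy as np

rng = np.random.default_rng(4)

def pmm_impute(x_obs, y_obs, x_mis, k=5):
    # Regress y on x for complete cases, then for each incomplete case
    # donate the observed y of one of the k cases with the closest predicted mean.
    X = np.column_stack([np.ones_like(x_obs), x_obs])
    beta, *_ = np.linalg.lstsq(X, y_obs, rcond=None)
    yhat_obs = X @ beta
    yhat_mis = beta[0] + beta[1] * x_mis
    out = []
    for yh in yhat_mis:
        donors = np.argsort(np.abs(yhat_obs - yh))[:k]
        out.append(y_obs[rng.choice(donors)])
    return np.array(out)

x_obs = rng.normal(size=50)
y_obs = 2.0 + 3.0 * x_obs + rng.normal(size=50)
x_mis = rng.normal(size=10)
imputed = pmm_impute(x_obs, y_obs, x_mis)
print(np.isin(imputed, y_obs).all())   # every imputed value was actually observed
```

Because donated values come from the observed data, no rounding is ever needed and impossible values (negative counts, out-of-range codes) cannot be produced.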
Imputing a missing variable based on common variables with another data set
This may not be the exact answer, but it does address what you are looking for. There is a StatMatch package in R that does parametric and non-parametric matching based on the common variables, which is what you may want to look into. To use the non-parametric approach you have to define the donation classes (usually categorical variables like race, gender, marital status, education) and the matching variables (continuous). These approaches are also discussed in detail in the book Statistical Matching: Theory and Practice. Alternatively, you can also use propensity scores to match the two datasets. R has the Matching package for this. If you are using Stata, there is a user-written command called psmatch2. There is also discussion here and here on this topic.
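The non-parametric idea (donation classes plus a continuous matching variable) can be sketched without StatMatch. Below is a hypothetical nearest-neighbour distance hot deck in Python: the donor file carries a variable z that the recipient file lacks, and each recipient receives the z of the nearest donor within its donation class:

```python
import numpy as np

rng = np.random.default_rng(5)

# Donor file has the variable of interest z; the recipient file does not.
donor_class = rng.integers(0, 2, size=100)      # e.g. gender as the donation class
donor_x     = rng.normal(size=100)              # continuous matching variable
donor_z     = 10 + 5 * donor_x + rng.normal(size=100)

recip_class = rng.integers(0, 2, size=30)
recip_x     = rng.normal(size=30)

imputed_z = np.empty(30)
for i in range(30):
    same = np.flatnonzero(donor_class == recip_class[i])       # restrict to the class
    j = same[np.argmin(np.abs(donor_x[same] - recip_x[i]))]    # nearest donor on x
    imputed_z[i] = donor_z[j]
print(np.isin(imputed_z, donor_z).all())   # values come from actual donor records
```

As with predictive mean matching, every donated value is a real observed record, which keeps the imputed variable's support and distribution plausible.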
How to use Box-Muller transform to generate n-dimensional normal random variables
Box-Muller generates pairs of independent normals from pairs of independent uniforms. To get more than two independent normals, generate more uniforms. If you want 17 normals, generate 18 uniforms, get 9 pairs of normals, and discard one. If your 18 uniforms are independent, your 17 normals will be too. http://en.wikipedia.org/wiki/Box%E2%80%93Muller_transform (Incidentally, Marsaglia appears to have invented this kind of approach first, I think perhaps in the polar-method form detailed at the link. But because it was right after the war, it was treated as a secret and he was not able to publish it.) You can also get correlated normals by starting from independent ones, for example via a Cholesky decomposition of the covariance matrix.
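Both steps (Box-Muller pairs, then a Cholesky factor for correlation) can be sketched directly; a minimal Python version, using numpy only as a uniform source and for linear algebra:

```python
import numpy as np

rng = np.random.default_rng(6)

def box_muller(n_pairs):
    # Each pair of independent U(0,1) draws yields a pair of independent N(0,1) draws
    u1 = 1.0 - rng.random(n_pairs)   # in (0, 1], avoids log(0)
    u2 = rng.random(n_pairs)
    r = np.sqrt(-2.0 * np.log(u1))
    return np.concatenate([r * np.cos(2 * np.pi * u2), r * np.sin(2 * np.pi * u2)])

# 17 normals: generate 9 pairs of uniforms (18 normals) and discard one
z17 = box_muller(9)[:17]

# Correlated normals from independent ones via a Cholesky factor of the covariance
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
L = np.linalg.cholesky(cov)
x = box_muller(50_000).reshape(2, -1)   # two independent N(0,1) rows
y = L @ x                               # columns now have covariance cov
print(len(z17), round(np.corrcoef(y)[0, 1], 2))
```

The empirical correlation of the transformed pairs recovers the target 0.6 up to simulation noise, illustrating the closing remark of the answer.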
How close to zero should the sum of the random effects be in GLMM (with lme4)
Since @Hemmo's code got slightly mangled in the "Bounty" box, I'm adding this reformatted version as "community wiki". If this is not an appropriate use of the wiki, I apologize in advance. Feel free to remove it.

library(mvabund)
library(lme4)

data(spider)
Y <- as.matrix(spider$abund)
X <- spider$x
X <- X[, c(1, 4, 5, 6)]
X <- rbind(X, X, X, X, X, X, X, X, X, X, X, X)
site <- rep(seq(1, 28), 12)

dataspider <- data.frame(c(Y), X, site)
names(dataspider) <- c("Y", "soil.dry", "moss", "herb.layer", "reflection", "site")

fit <- glmer(
  Y ~ soil.dry + moss + herb.layer + reflection + (1 | site),
  family = poisson(link = log),
  data = dataspider,
  control = glmerControl(optimizer = "bobyqa")
)
Distribution of eigenvalues given one is known
Here is a document about your issue: http://math.nyu.edu/faculty/avellane/LalouxPCA.pdf The idea is simple: you calculate the Marcenko-Pastur distribution with a modified variance of the elements of the matrix. The modified variance simply corresponds to the variance explained by eigenvalues other than the first one. As said by john, you have to replace $\sigma^2$ by $(\sum_{i=1}^{n}\lambda_{i}-\sum_{j=1}^{J}\lambda_{j})/n$ when removing the first $J$ eigenvalues. If you have normalized your problem and you only want to remove the first component, you have to replace $\sigma^2$ by $\frac{1-\lambda_{1}}{n}$. You will obtain: $$ \rho'(\lambda)= \frac{nQ}{2\pi(1-\lambda_{1})}\frac{\sqrt{(\lambda_{max}-\lambda)(\lambda-\lambda_{min})}}{\lambda} $$ with $$ \lambda_{min/max}= \frac{1-\lambda_{1}}{n}\left(1+\frac{1}{Q}\pm2\sqrt{\frac{1}{Q}}\right) $$ As there is probably more information in your matrix than just one big eigenvalue and noise, you will observe some differences. For example, in market correlation studies one can observe a leakage of the eigenvalues past the upper edge of the spectrum (it corresponds to financial sectors). Another approach mentioned in the document is to treat $\sigma^2$ as a single free parameter in the Marcenko-Pastur distribution; you then adjust this parameter to fit your curve. For more useful techniques and references, you can take a look at: http://arxiv.org/abs/physics/0507111
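The Marcenko-Pastur bulk is easy to check against simulated data. A sketch for the pure-noise baseline ($\sigma^2 = 1$, no removed component; the $N$, $T$ values are arbitrary), where the edges reduce to $\sigma^2(1+\frac1Q\pm2\sqrt{\frac1Q}) = (1\pm\sqrt{1/Q})^2$:

```python
import numpy as np

rng = np.random.default_rng(7)

N, T = 100, 500          # N series, T observations; Q = T / N
Q = T / N
X = rng.standard_normal((N, T))

# Eigenvalues of the empirical correlation matrix of pure noise
C = np.corrcoef(X)
eig = np.linalg.eigvalsh(C)

# Marcenko-Pastur edges for sigma^2 = 1 (unit-variance, uncorrelated series)
lam_min = (1 - np.sqrt(1 / Q)) ** 2
lam_max = (1 + np.sqrt(1 / Q)) ** 2
print(round(eig.min(), 2), round(eig.max(), 2), round(lam_min, 2), round(lam_max, 2))
```

For genuinely structured data, eigenvalues escaping above lam_max are exactly the "leakage" the answer describes: signal that the noise bulk cannot account for.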
Recommend a graduate-level regression textbook?
I believe Graham Cookson's answer to a similar question would be of assistance. Basically, he recommends Gelman and Hill's Data Analysis Using Regression and Multilevel/Hierarchical Models. According to Mr. Cookson, the book "covers basic regression, multilevel regression, and Bayesian methods in a clear and intuitive way" and "would be good for any scientist with a basic background in statistics".
37,413
Unit root tests and stationarity
The short answer is no, at least for the ADF. I suspect a similar reasoning as the one outlined below applies for the KPSS, but I have not investigated this. The reason that the ADF will not work is that it is based on the notion of integration of order $d$, $d\geq 0$, to capture nonstationarity. (The KPSS is also based on this notion, hence my suspicion that a similar problem as the one below will arise for the KPSS.) (1) recaps the definition of integration order, while (2) explains why/how this definition makes no sense as a basis for a test in nonlinear settings using the DF test. The derivations here are only schematic and aimed at making the intuition of the (A)DF test family conceivable. I hope they will answer the question satisfactorily. (1) Nonstationarity: The ADF and the KPSS are defined for linear nonstationary processes. For such processes, the 'degree' of nonstationarity is traditionally captured by the order of integration. One says that a process $X_t \sim I(1)$ (integrated of order 1) $\Longleftrightarrow$ $\Delta X_t \sim I(0)$ (integrated of order 0, i.e. stationary). Similarly, $X_t \sim I(d)$ $\Longleftrightarrow$ $\Delta X_t \sim I(d-1)$ $\Longleftrightarrow$ $\Delta^d X_t \sim I(0)$ (where $\Delta^d $ denotes that we have differenced $d$ times). For example, with $\varepsilon_i \overset{iid}{\sim} N(0, \sigma^2), \; 1\leq i \leq N$, define the random variable $S_i \equiv \sum_{j=1}^i\varepsilon_j$ and define $R_i \equiv \sum_{j=1}^i S_j$ for $1\leq i \leq N$. Then clearly, $\varepsilon_i \sim I(0)$. By construction, $\Delta S_i = S_i - S_{i-1} = \varepsilon_i \sim I(0)$, which implies that $S_i \sim I(1)$. Using similar reasoning, one can see that $R_i \sim I(2)$. (2) Example (A)DF: Conceptually, there is no major difference between the ADF and the DF. Both are based on the convergence of the autoregressive coefficient to a functional of Brownian Motion. 
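The integration-order bookkeeping in the example above is easy to verify numerically; here is a quick sketch (the variable names mirror the notation above):

```python
import numpy as np

rng = np.random.default_rng(0)
eps = rng.standard_normal(1000)  # I(0): iid noise
S = np.cumsum(eps)               # S_i = sum of eps up to i, so S ~ I(1)
R = np.cumsum(S)                 # R_i = sum of S up to i, so R ~ I(2)

# differencing once (resp. twice) recovers the stationary noise
d1 = np.diff(S)       # equals eps[1:]
d2 = np.diff(R, 2)    # equals eps[2:]
```

Differencing `S` once and `R` twice returns exactly the original stationary innovations, which is what $I(1)$ and $I(2)$ mean by construction.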
For mathematical simplicity, I will use a simple nonlinear process in the DF (rather than the ADF) framework to show why it is generally not feasible to apply the ADF test for nonlinear time series. I proceed by first demonstrating what the DF test does in a linear setting and then give an example where it fails in a nonlinear setting. (2a) Linear setting: let $\varepsilon_t \overset{iid}{\sim} N(0, \sigma^2), \; 1 \leq t \leq N$ and consider the process \begin{align} X_t &= X_{t-1} + \varepsilon_t = \sum_{i=0}^{t-1} \varepsilon_{t-i}, \end{align} where the last step follows because we can always express $X_t$ in deviations from the initial condition and so wlog impose $x_0 = 0$. Clearly, this process is $I(1)$. The DF test now works by estimating $\beta$ in the statistical model \begin{align} X_t &= \beta X_{t-1} + \varepsilon_t \end{align} by using OLS. Suppose $N>1$ and we have observations $x_t:1\leq t \leq N$. Then the OLS estimator can simply be written as \begin{align} \hat{\beta} &= \frac{\sum_{t=1}^N x_t x_{t-1}} {\sum_{t=1}^Nx_{t-1}^2} \\ &= \frac{N^{-1}\sum_{t=1}^N (x_{t-1} + \varepsilon_t) x_{t-1}} {N^{-1}\sum_{t=1}^Nx_{t-1}^2} \\ &= 1 + \frac{N^{-1}\sum_{t=1}^N \varepsilon_t x_{t-1}} {N^{-1}\sum_{t=1}^Nx_{t-1}^2} \\ \end{align} Now note that the functional central limit theorem implies two convergence results (in distribution) for suitably normalized versions of the fraction's numerator and denominator: \begin{align} N^{-1}\sum_{t=1}^N \varepsilon_t x_{t-1} = \sum_{t=1}^N \left( \frac{\varepsilon_t}{\sqrt{N}} \cdot \frac{\sum_{j=1}^{t-1}\varepsilon_{j}}{\sqrt{N}}\right) &\Longrightarrow \sigma^2\int_0^1W(r) dW(r) \\ N^{-2}\sum_{t=1}^Nx_{t-1}^2 = N^{-1}\sum_{t=1}^N \left( \frac{\sum_{j=1}^{t-1}\varepsilon_{j}}{\sqrt{N}} \right)^2 &\Longrightarrow \sigma^2\int_0^1W(r)^2 dr \end{align} where $W(r):r\in[0,1]$ denotes a standard Brownian Motion/Wiener process (i.e., $W(r) \sim N(0,r)$). 
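The OLS computation above is easy to reproduce numerically. A small sketch (my own variable names, not from any particular package) that simulates a random walk and computes $\hat\beta$ and the statistic $N(\hat\beta - 1)$:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 5000
eps = rng.standard_normal(N)
x = np.cumsum(eps)          # random walk: x_t = x_{t-1} + eps_t, x_0 = 0

# OLS estimate of beta in x_t = beta * x_{t-1} + eps_t (no intercept)
x_lag, x_cur = x[:-1], x[1:]
beta_hat = np.sum(x_cur * x_lag) / np.sum(x_lag ** 2)

# under the unit-root null, N*(beta_hat - 1) converges to the
# Dickey-Fuller distribution (a functional of Brownian motion)
df_stat = N * (beta_hat - 1)
```

With $I(1)$ data, `beta_hat` sits very close to 1 and `df_stat` is a draw from (approximately) the DF distribution.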
Putting this all together, the continuous mapping theorem implies that \begin{align} N(\hat{\beta} -1) &\Longrightarrow \frac{\int_0^1W(r) dW(r)}{\int_0^1W(r)^2 dr} \equiv DF \end{align} which is known as the Dickey-Fuller (DF) distribution. The DF test now computes $N(\hat{\beta} -1)$ in the model above. Under the alternative of stationarity, $\hat{\beta}$ will consistently estimate $\beta < 1$ and so the statistic will diverge to $-\infty$. The critical values of the DF distribution are then used to reject/accept the null hypothesis of $X_t \sim I(1)$. Clearly, this setting only works in the simplistic first order autoregression model. For more involved statistical models of the linear process model class, the ADF is used. The only further adjustment the ADF test makes is to restructure the regression equation such that the parameter $\phi = (1-\beta)$ is estimated directly, see e.g. Wikipedia (https://en.wikipedia.org/wiki/Augmented_Dickey%E2%80%93Fuller_test). Conceptually, this is the same as a normal DF test, but it converges to different functionals of Brownian motions in the limit (i.e., the appropriate critical values are different). The takeaway for either the DF or the ADF is that what the test statistic really relies on is the meaningfulness of the first difference of $X_t$. The parameter $\beta$ (or, equivalently, the parameter $\phi = (1-\beta)$) converges in distribution to a functional of Brownian Motion ONLY if $X_t \sim I(1)$. This is the null hypothesis of the test, and if $X_t$ is not integrated of order one (but order 2, 0, or not integrated of any order while still nonstationary) the test statistics are not useful anymore. Summary: The (A)DF tests against integration of order 1, NOT against general nonstationarity! 
(2b) Nonlinear Setting: Suppose the data are generated by the following statistical model with nonlinear function $f$: \begin{align} X_t = f(X_{t-1})+\varepsilon_t \end{align} Using recursive substitution, one can observe that $X_t = f(f(...f(X_{t-r}) + \varepsilon_{t-r+1})+ \varepsilon_{t-r+2}) ... \varepsilon_{t-1}) + \varepsilon_t$. Applying the DF or ADF test to data whose generating process is captured by the above equation amounts to fitting a linear regression to a nonlinear process. Consequently, the estimated parameter $\hat{\beta}$ would not have the same interpretation/meaning as in the linear world*. It would also depend on the exact functional form of $f(\cdot)$ whether, and to what, the OLS estimate $\hat{\beta}$ would converge. Generally, however, the (A)DF test would be invalid unless $f$ were linear in $X_{t-1}, ..., X_{t-p}$ for some $p\geq 0$. *In particular, it would have the interpretation of the 'best linear projection' of $X_{t-1}, ..., X_{t-p}$ on $X_t$.
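To make the last point concrete, here is a small simulation sketch; the choice $f(x) = 0.9\tanh(x)$ is my own illustrative example of a stationary nonlinear AR(1). The DF regression coefficient settles on the best linear projection, not on anything tied to a unit root:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 5000
x = np.zeros(N)
# nonlinear AR(1): f(x) = 0.9 * tanh(x) -- stationary but nonlinear
for t in range(1, N):
    x[t] = 0.9 * np.tanh(x[t - 1]) + rng.standard_normal()

# fit the linear DF regression x_t = beta * x_{t-1} + e_t anyway
x_lag, x_cur = x[:-1], x[1:]
beta_hat = np.sum(x_cur * x_lag) / np.sum(x_lag ** 2)
```

`beta_hat` here is just the best linear projection coefficient of $X_{t-1}$ on $X_t$; it is strictly inside $(0,1)$ and carries no unit-root interpretation.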
37,414
Regarding the sampling procedure in Adaboost algorithm
There are two methods for training Adaboost. Either use the weight vector directly in the training of the weak learner, or use the weight vector to sample datapoints with replacement from the original data. In the latter case the sampled dataset is the same size as the original dataset, and will contain some repeated datapoints. The weight vector is usually a distribution as it makes drawing the weighted sample easier, but any weight vector will work after normalisation. The simplest way to sample the new dataset is to make the weight vector a probability distribution, calculate its cumulative distribution function, then generate N random doubles in the range $(0,1]$ and take, for each random number, the first datapoint whose CDF value reaches it. for i = 1:N rnd = random(0.0, 1.0) for j = 1:N if rnd <= cdf(j) samplePoint(i) = dataPoint(j) break end end end There are ways which aren't $O(n^2)$, but this is easier to understand.
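As a runnable sketch of the same idea in Python (function and variable names are mine), with the inner linear scan replaced by a binary search over the CDF, which brings the cost down from $O(N^2)$ to $O(N\log N)$:

```python
import bisect
import itertools
import random

def resample(data, weights, rng=None):
    """Draw len(data) points with replacement, proportional to weights."""
    rng = rng or random.Random(0)
    total = float(sum(weights))
    # cumulative distribution function of the normalised weights
    cdf = list(itertools.accumulate(w / total for w in weights))
    # binary search the CDF for each random draw instead of a linear scan
    return [data[min(bisect.bisect_right(cdf, rng.random()), len(data) - 1)]
            for _ in data]
```

The `min(..., len(data) - 1)` guard protects against floating-point rounding making the final CDF value fall slightly below 1.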
37,415
Generalization of multivariate normal distribution and classification
The answer to the first question was given by Procrastinator in a comment: The family is called Elliptical Distributions. The standard textbook reference seems to be Fang, K., Kotz, S., Ng, K.W., 1990. Symmetric Multivariate and Related Distributions. Chapman and Hall. Regarding the second question, it appears that most literature on classification either considers multivariate normal distributions or completely nonparametric procedures. I did find one publication though that compares classification algorithms based on different estimators of $\vec \mu$ and $\Sigma$, and does so in the context of elliptical distributions: Hartikainen, A., Oja, H., 2006. On some parametric, nonparametric and semiparametric discrimination rules, in: Data Depth: Robust Multivariate Analysis, Computational Geometry, and Applications. American Mathematical Society, pp. 61–70.
37,416
What are the (best) methods for multiple comparisons correction with bootstrap for multiple glm models?
Method 1: Naive bootstrap Calculate the $\hat{\vec{\theta}}$ on each bootstrap sample. This way we will know the (hopefully) natural variability in the test statistics. The adjusted p-value is 1 minus the proportion of cases where $\hat{\theta} > \theta_0$ (or $\hat{\theta} < \theta_0$, or $\left|\frac{\hat{\theta}}{\theta_0}\right| > 1$; the form depends on the nature of the $\theta$ parameter. It should yield true when the bootstrap result is "more significant" than the reference). This method violates both guidelines stated in the article by Hall, P. and Wilson, S.R., "Two Guidelines for Bootstrap Hypothesis Testing" (1992), so it lacks power (in our case it should be too conservative). Method 2: Free step-down resampling (maxT) using Wald statistics The name and description come from the book Applied Statistical Genetics with R (2009) by Andrea Foulkes. Preparing for the bootstrap: if it is possible, compute the residuals from the regressions, replacing the original dependent variables. Keep the independent variables unchanged. Generate bootstrap samples from this new dataset. Gather the pivot statistics: on each sample compute the Wald statistics $T^*$. $\hat{T^*}=\frac{\hat{\vec{\theta}^*}}{\operatorname{SE}(\hat{\vec{\theta}^*})}$. Since all were computed under the complete null (because the dependent variable is in fact a regression residual), we can treat them as a set of potentially correlated, zero-centered random variables. The adjusted p-value for the $j$-th regression coefficient is the percentage of cases where the observed $\hat{T}$ is equally or less significant than the $j/m$ quantile of the set of $\hat{\vec{T^*}}$ values. The problem is that this method does not work when one cannot simply calculate the regression residuals, e.g. when there are many separate regression models and some of them share the same dependent variable. 
Method 3: Null unrestricted bootstrap This method is very similar to free step-down resampling, with the difference that instead of calculating the residuals, one adjusts the $T$ statistic: Generate bootstrap samples from the dataset. Gather the pivot statistics: on each sample compute the Wald statistics $T^*$. $\hat{T^*}=\frac{\hat{\vec{\theta}^*}-\hat{\vec{\theta}}}{\operatorname{SE}(\hat{\vec{\theta}^*})}$. Since the bootstrap expectation of $\hat{\vec{\theta}}^*$ is $\hat{\vec{\theta}}$, we can treat these as a set of potentially correlated, zero-centered random variables. The adjusted p-value for the $j$-th regression coefficient is the percentage of cases where the observed $\hat{T}$ is equally or less significant than the $j/m$ quantile of the set of $\hat{\vec{T^*}}$ values.
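As a sketch of method 3 in code (single-step maxT rather than the full step-down procedure, with made-up toy data and my own function names):

```python
import numpy as np

rng = np.random.default_rng(0)

def wald_stats(X, y):
    """OLS coefficients, Wald t-statistics, and classical standard errors."""
    n, p = X.shape
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (n - p)
    se = np.sqrt(sigma2 * np.diag(np.linalg.inv(X.T @ X)))
    return beta, beta / se, se

# toy data: two of four coefficients are truly nonzero
n, p = 200, 4
X = rng.standard_normal((n, p))
beta_true = np.array([1.0, 0.0, 0.5, 0.0])
y = X @ beta_true + rng.standard_normal(n)

beta_hat, t_obs, _ = wald_stats(X, y)

B = 500
max_t = np.empty(B)
for b in range(B):
    idx = rng.integers(0, n, n)                   # resample rows with replacement
    bb, _, se_b = wald_stats(X[idx], y[idx])
    # centre at beta_hat so the statistics behave as if under the null
    max_t[b] = np.max(np.abs((bb - beta_hat) / se_b))

# single-step maxT adjusted p-values
p_adj = np.array([(np.sum(max_t >= abs(t)) + 1) / (B + 1) for t in t_obs])
```

Because the adjustment compares each observed statistic to the bootstrap distribution of the maximum, the correlation structure between the coefficients is accounted for automatically.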
37,417
Calculating $R^2$ for Elastic Net
Just use the regular $R^2$, i.e. the squared correlation between the fitted and the actual values. Whether the model was fit by OLS or by penalized OLS (such as the elastic net), it will still reflect the proportion of variance explained. Be aware, however, that model diagnostics and performance measures (such as $R^2$) applied after model selection may (and will) be overly optimistic if the model is evaluated on the same data that was used for model building (e.g. variable selection). Apart from the warning above, correlated variables are not a problem for $R^2$. If you were to predict the left-out fold in $K$-fold cross validation and base the $R^2$ on prediction accuracy, then it is not a very useful measure, because $R^2$ ignores prediction bias and only accounts for prediction variance. But I am not sure I understand what you mean by cross-validated $R^2$. Split your data into training, validation and test subsets. Train your models on the training data and use validation data to pick the best-performing model. Re-estimate your selected model on training+validation data. Then assess the performance of the re-estimated model on the test data. Use mean squared error instead of $R^2$ to properly account for any prediction bias in addition to prediction variance.
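As a concrete sketch of the two measures mentioned (function names are mine):

```python
import numpy as np

def r_squared(y, y_hat):
    """Squared correlation between actual and fitted values."""
    return np.corrcoef(y, y_hat)[0, 1] ** 2

def mse(y, y_hat):
    """Mean squared error: penalises prediction bias as well as variance."""
    return np.mean((np.asarray(y) - np.asarray(y_hat)) ** 2)
```

Note that shifting every prediction by a constant leaves `r_squared` at 1 while `mse` grows, which is exactly the prediction bias that $R^2$ ignores.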
37,418
How to identify structural change using a Chow test on Eviews?
I am assuming that you are treating each country separately, and are attempting to determine if there is a break-point in the level of a series. Here are three (EDIT: four) main points that I hope will help: The Chow test assumes that there is a known break-point in the series. If this point is not known, the Chow test is not appropriate (there are alternatives, although inference will be difficult in such a small sample). The degrees of freedom in the F-test will be the same for each test of break-point. That is, it will always be F(2,47). The F-statistic calculated (7.438332 in your example) should be different at each tested point. However, given that you have a relatively small sample, such a test may suggest that there is a structural break at every point in the series. Have you considered alternatives to the full structural break? For example, including a dummy variable for 1991 that could pick up an exogenous shock (such as a policy implementation that impacted GDP growth only in that period, but the economy returned to trend after). Alternatively, you could consider a broken trend model, if you think that the trend growth in GDP has shifted but not the intercept. EDIT: Following from another user's point (mpiktas) that GDP may have a unit root. You should probably be looking at GDP as a natural logarithm (as we often see GDP moving with an exponential trend, due to the nature of population growth, etc.). Inference from a trend model on the log of GDP should be fine (log-GDP is probably trend-stationary - although you should do some testing - which implies that once accounting for the trend the residual series is stationary). From your example: $$ y_t = \beta_0 + \beta_1 t + \epsilon_t \qquad (1)$$ The basic form of the Chow test is: Construct a dummy variable $D_t$ that is $=0$ before the break and $=1$ after the break. 
Run a regression: $$ y_t = \beta_0 + \beta_1 t + \gamma_0 D_t + \gamma_1 t D_t + \nu_t \qquad (2) $$ Test the sum of squared residuals from (1) against (2) where: $$ H_0 : \gamma_0 = \gamma_1 = 0 $$ $$ H_1 \text{: At least one coefficient not equal to zero} $$ And, $ F = \dfrac{(SSR_{(1)} - SSR_{(2)})/q}{SSR_{(2)}/(N-k)} $ where $q$ is the number of restrictions (the number of equals signs in the null hypothesis $H_0$ above, so $q=2$) and $k$ is the number of parameters in the unrestricted model (2), here $k=4$. Hope this helps.
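The two regressions and the F-statistic can be sketched in a few lines (function name and setup are mine, not EViews output):

```python
import numpy as np

def chow_test(y, t, break_idx):
    """Chow test for a break in intercept and trend at a known point.

    Returns the F statistic with q = 2 restrictions and
    N - 4 denominator degrees of freedom.
    """
    N = len(y)
    d = (np.arange(N) >= break_idx).astype(float)
    X_r = np.column_stack([np.ones(N), t])            # restricted model (1)
    X_u = np.column_stack([np.ones(N), t, d, d * t])  # unrestricted model (2)

    def ssr(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return r @ r

    ssr_r, ssr_u = ssr(X_r), ssr(X_u)
    q, k = 2, X_u.shape[1]
    return ((ssr_r - ssr_u) / q) / (ssr_u / (N - k))
```

A series with a genuine level shift at the tested point produces a large F; one generated from a stable trend does not.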
37,419
How to identify structural change using a Chow test on Eviews?
The Chow test tests whether two different models have the same coefficients. It follows an $F$ distribution with $k$ and $N_1+N_2-2k$ degrees of freedom, where $k$ is the number of parameters and $N_1$ and $N_2$ are the sample sizes of the data the two models are estimated on. In your case the two models are the same regression model estimated on the data before the potential break and after, hence $N_1+N_2=N$, where $N$ is the size of the full sample. This holds for any break, so obtaining the same degrees of freedom is perfectly normal. Now, the fact that the test is always significant may indicate that the rejection occurs not because of a structural break but because of violations of the Chow test's assumptions. Since you are testing a GDP series this is possible: GDP is usually a unit-root process, and this usually changes the distribution of the usual statistics.
37,420
How to identify structural change using a Chow test on Eviews?
Here the null hypothesis is H0: no structural break. Just look at the p-value of the F-statistic: it is 0.0016, which is below 5%, so reject H0; there is a structural break in your data.
37,421
How to identify structural change using a Chow test on Eviews?
H0: the parameters are structurally stable. When your probability is less than 5%, as in your case (0.0016), reject H0 (i.e., the null hypothesis). This means there is a structural break in your data. The best thing you can do is check each variable one by one using the Chow test.
37,422
How to identify structural change using a Chow test on Eviews?
I think that H0 is retained and H1 rejected. In addition, GDP has no unit root, meaning the trend remains: there is no break point.
37,423
How to identify structural change using a Chow test on Eviews?
A stability test is used to determine the stability of the coefficients of the independent variables after OLS is performed. Stability can be tested with CUSUM and CUSUMSQ, but if their results contradict each other, the Chow breakpoint test is used as an alternative. For the stability test, H0: the coefficients are stable (not different). Here the p-value is 0.0016 < 0.1, which means H0 is rejected, and we can say that the coefficients are different, i.e. not stable.
37,424
How to identify structural change using a Chow test on Eviews?
An alternative for finding the exact year of the structural break is to perform the regression $$y_t=\beta_0+\beta_1 t+\gamma_0 D_t+\gamma_1 t D_t+\nu_t \qquad (2)$$ moving the candidate break point through the sample one observation at a time. From the point of the structural break onward, the break coefficients will be significant continuously.
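The scan suggested above can be sketched as follows; the series, break location, and trimming are hypothetical choices. Note that when the break date itself is searched over, the largest F-statistic no longer follows a standard F distribution (this is the Quandt/Andrews sup-F setting), so its critical values differ:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical annual series with a level shift at index 30.
N, true_brk = 50, 30
t = np.arange(N, dtype=float)
y = 2.0 + 0.3 * t + np.where(t >= true_brk, 4.0, 0.0) + rng.normal(0, 1, N)

def chow_F(y, t, brk):
    """F-statistic for a level-and-trend break at index `brk`."""
    N = len(y)
    D = (t >= brk).astype(float)
    X_r = np.column_stack([np.ones(N), t])
    X_u = np.column_stack([np.ones(N), t, D, t * D])
    ssr = lambda X: float(np.sum((y - X @ np.linalg.lstsq(X, y, rcond=None)[0]) ** 2))
    q, k = 2, X_u.shape[1]
    return ((ssr(X_r) - ssr(X_u)) / q) / (ssr(X_u) / (N - k))

# Scan interior candidates, trimming the ends so both regimes have
# enough observations to estimate all four parameters.
candidates = list(range(5, N - 5))
F_stats = [chow_F(y, t, b) for b in candidates]
best = candidates[int(np.argmax(F_stats))]
print(best)   # should land near the true break
```

The index maximizing the F-statistic is the estimated break date; its significance should be judged against sup-F (Quandt-Andrews) critical values, not the ordinary F table.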
37,425
Standard normal distribution on a subspace
Yes. You have that $U$ is a subspace of $\mathbb R^n$. Let $Y \sim \text{N}(0,I)$ and $P$ be the orthogonal projection matrix on $U$, so that $P$ is symmetric and idempotent. Then $PY \sim \text{N}(P0,PIP^T) = \text{N}(0,P)$. This is a singular normal distribution, which on the subspace $U$ is the standard normal on that subspace. As a singular distribution, it does not have a density with respect to volume measure in $\mathbb R^n$, but it does have a density with respect to the (lower-dim) volume measure on $U$.
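As a quick numerical check of this (the spanning matrix $A$ below is an arbitrary made-up choice), one can form $P$, verify that it is symmetric and idempotent, and confirm that the sample covariance of $PY$ approaches $P$ itself:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical subspace U of R^3 spanned by the columns of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])

# Orthogonal projection onto U: P = A (A^T A)^{-1} A^T.
P = A @ np.linalg.solve(A.T @ A, A.T)

# P is symmetric and idempotent (up to floating point).
assert np.allclose(P, P.T) and np.allclose(P @ P, P)

# Sample Y ~ N(0, I) and project; the sample covariance of PY
# should approach the (singular) covariance matrix P.
Y = rng.standard_normal((200_000, 3))
Z = Y @ P          # P is symmetric, so this is (P Y^T)^T row by row
cov = Z.T @ Z / len(Z)
print(np.round(cov, 2))   # approximately P
```

The covariance estimate is singular (rank 2 here), consistent with the distribution living on the 2-dimensional subspace $U$.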
37,426
Repeated measures: aggregate, fixed effects, or random effects?
I think you should go with subject as a fixed effect since you only have 3 subjects. What is the goal of the study? If you want to test whether all 8 treatments are the same - or test particular contrasts - you could treat subject like a fixed effect block. If you want to predict the response of an as-yet-unseen subject, you need random effects. You also need a bigger sample. RE: "dependence of observations." This doesn't matter. Whether subject is treated as random or fixed, you still have repeated measures - i.e. multiple observations nested within subject. I would go with options 2 and 3, averaging response and treating subject as a fixed blocking factor. You don't lose power in the comparison between treatments because treatments are applied at the "subject" level, not the replicate level. By taking the mean of your repeated measures, you reduce the variance of your response, so you gain that way (i.e. N is smaller but so is $\sigma^2$)
37,427
What am I measuring when I apply a graded response model to the "Hunting of the Snark" dataset?
The analyses you did are somewhat unorthodox. Using the GRM you usually want to locate each comment on a latent unidimensional trait from unfriendly to friendly. Therefore, one would use several polytomous items which are indicators of the latent trait. In your analysis, you treated the different raters as "items". Plot 1 gives you the item (i.e., rater) information curve: the higher the curve, the better the reliability in the respective region of the trait. That means, the reddish raters are good in discriminating friendly and unfriendly comments, but not so good in discriminating neutral comments. The greenish raters generally are worse in discriminating comments (their curves are lower overall), especially friendly comments. The next three plots (these are ICC plots) give you basically the same information as the first plot: the steeper the curve in the ICC plot, the more information is obtained. It is interesting that there seem to be two types of raters: lenient and demanding. The greenish raters are rather lenient: their boundaries between friendly and neutral, and between neutral and unfriendly are moved to the right (as you can see in Plots 2-4). That means, when a comment moves the latent trait from friendly to unfriendly, they switch the categories later to neutral and to unfriendly.
37,428
Bootstrapping and comparing multiple proportions
The questions, especially the second one, are meaningless as they stand right now. The problem is that the concept of "taking a random number of candies from the bag" is not defined. Even with a finite number of candies in the bag there could be multiple definitions. For example, the following two both sound reasonable, but give different results: If there are $n$ candies in the bag, the probabilities of taking exactly $0, 1, \ldots , n$ candies are all the same: $1/(n+1)$. We go through each candy, and decide whether to take it out with probability $p$. This means that the probability of exactly $x$ candies is ${n\choose x} p^x (1-p)^{n-x}$. Once you go to an infinite bag, neither of those options apply. So all we can say that there is some unknown distribution that gives the probability of $x$ candies. Using the bootstrap idea you would estimate this as the observed distribution: $P(1 \text{ candy})=2/7$, $P(4 \text{ candies})=3/7$, $P(15 \text{ candies})=1/7$, and $P(44 \text{ candies})=1/7$. Note that this implies that the question of comparing the number of candies received by different children is almost meaningless. You could potentially calculate the probability of receiving the actual number or fewer candies for each child, and some would be less lucky than others, but somebody would have to be less lucky by definition. As for the first question, you would need to bootstrap from the observed distribution after making some assumption about the adults, or the process that sends children/adults to Mom. I can think of several options, but nothing is totally satisfactory, as you would want to keep the number of children fixed at 7 and the total number of candies fixed at 324 while keeping the observed distribution of candies per handful and varying the number of adults appropriately. Perhaps letting go of some of these conditions (eg total number of candies) is reasonable.
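To make the resampling idea concrete, here is a small sketch that bootstraps from the observed handful distribution; the handful counts are taken from the question's example, and the summary chosen (the largest handful among the seven children) is just one illustrative choice:

```python
import random
from collections import Counter

random.seed(0)

# Observed handfuls for the 7 children: the empirical handful distribution.
handfuls = [1, 1, 4, 4, 4, 15, 44]

# Bootstrap: resample 7 handfuls with replacement many times and look at
# the distribution of the largest handful any child receives.
B = 10_000
max_handful = [max(random.choices(handfuls, k=7)) for _ in range(B)]

# How often does the luckiest child get 44 candies under pure resampling?
p44 = Counter(max_handful)[44] / B
print(round(p44, 2))   # close to the exact value 1 - (6/7)^7
```

This kind of resampling can rank how lucky each child was relative to the empirical distribution, but, as noted above, somebody must always be the least lucky; it cannot by itself establish unfairness.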
37,429
Bootstrapping and comparing multiple proportions
The way you've constructed your problem you can't really get the answers you want, even with bootstrapping. I'll give you responses from the bootstrap perspective. #1 All you can conclude is that she was equally generous. It's because you only have one sample of the random process, the one for kids. Extrapolating that to the adults means it's still the same distribution and any proper bootstrap of the problem from the kids data has to show the adults are the same. (if you meant generosity to be each individual adult and each individual child, which is what makes sense) #2 This isn't answerable because you don't have any other knowledge than the distribution you have. Therefore, that is your definition of random and you can't call any of the samples unfairly higher than any other. Perhaps if you had more than one sample to each child you could, but you don't.
37,430
Bootstrapping and comparing multiple proportions
The bootstrap is not needed here because the null distribution is well defined. Bootstrap should not be used just because you are comfortable with simulation. The bootstrap works in this situation and will give essentially the same results as the straightforward method. So your motivation is not a factor here. Sounds like you handled the first question properly. For question 2 do a pairwise comparison of the proportions (two sample binomial or corresponding bootstrap). Adjust the p-values for multiplicity. The children with significant p-values in the adjusted comparison can be considered to be treated differently with respect to the sweets received. This answer was to the initial question before it was changed. Random selection of a number to give a child muddies the waters (adds to the uncertainty). Statistically significant differences between children can be due to preferentially handing to some children more than others (the effect of preference which you would be interested in) or just because by chance certain children happened to get the luck of the draw and get more sweets when selected (a random event not of interest). As Aniko points out the problem is only well defined if you specify a probability distribution for the number drawn.
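The pairwise comparison with a multiplicity adjustment might look like the sketch below; the counts are hypothetical, the test is a pooled two-sample proportion z-test, and Bonferroni is used as the (conservative) adjustment:

```python
import math
from itertools import combinations

def two_prop_z(x1, n1, x2, n2):
    """Pooled two-sample proportion z-test; returns the two-sided p-value."""
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)
    se = math.sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical counts: successes out of trials for each child.
counts = {"A": (44, 100), "B": (15, 100), "C": (4, 100)}

pairs = list(combinations(counts, 2))
raw = {pair: two_prop_z(*counts[pair[0]], *counts[pair[1]]) for pair in pairs}

# Bonferroni adjustment for multiplicity.
m = len(pairs)
adjusted = {pair: min(1.0, p * m) for pair, p in raw.items()}
for pair, p in adjusted.items():
    print(pair, round(p, 4))
```

Pairs whose adjusted p-value falls below the chosen level would be declared to have been treated differently.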
37,431
How to measure the number of people in a picture of a crowd?
I am working on a similar project, and I will follow this approach: Get a lot of classified images: images with few people, images with crowded spaces. For example, 'The Zócalo' in Mexico City can hold roughly 250,000 people. Extract features from these images, perhaps with methods like HOG or SIFT. HOG, for example, is widely used to extract features in projects aimed at detecting pedestrians. Visit http://hogprocessing.altervista.org/ With the data obtained in the step above, it is possible to run a machine learning algorithm such as an SVM or a neural network. It will be necessary to train this algorithm; thereafter, when you have a new image, you can use the trained SVM or NN to get a prediction. I guess you can follow a similar path.
37,432
Autocorrelation from multiple time series samples
Yes, there is a correct way and it's simple, too. By definition, the autocorrelation of a stationary process $X_t$ at lag $dt$ is the correlation between $X_t$ and $X_{t+dt}$. Suppose you have observations of this process $x_{t_0}, x_{t_0+dt}, x_{t_0+2dt}, \ldots, x_{t_0+k_0dt}$ at lag $dt$, another set of observations in a non-overlapping time interval $x_{t_1}, x_{t_1+dt}, x_{t_1+2dt}, \ldots, x_{t_1+k_1dt}$ at lag $dt$ for $t_1 \gt t_0+k_0dt$, and in general you have contiguous observations of samples $x_{t_i}, x_{t_i+dt}, x_{t_i+2dt}, \ldots, x_{t_i+k_idt}$, $i=0, 1, \ldots$ for non-overlapping time intervals. Then the correlation coefficient of the ordered pairs $$\{(x_{t_i+jdt}, x_{t_i+(j+1)dt})\}$$ for $i=0, 1, \ldots$ and $j=0, 1, \ldots, k_i-1$ estimates the autocorrelation of $X_t$ at lag $dt$. Compute the standard errors of the correlation exactly as you would compute the standard error for the correlation of any bivariate data set $\{(x_k, y_k)\}$. The difference between this approach and the one proposed in the question is that pairs spanning two sequences, $(x_{t_j+k_jdt}, x_{t_{j+1}})$, are not included in the calculation. Intuitively they should not be, because in general the time interval between these pairs is not equal to $dt$ and therefore such pairs do not provide direct information about the correlation at lag $dt$.
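A small simulation of this scheme (AR(1) data with a known lag-1 autocorrelation standing in for the real series; the segment lengths and counts are arbitrary) shows that pooling only the within-segment pairs recovers the autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(3)
phi = 0.6   # true lag-1 autocorrelation of the simulated AR(1) process

def ar1_segment(n):
    """One contiguous segment of a stationary AR(1) process."""
    x = np.empty(n)
    x[0] = rng.normal(0, 1 / np.sqrt(1 - phi ** 2))  # start in stationarity
    for i in range(1, n):
        x[i] = phi * x[i - 1] + rng.normal()
    return x

# Ten non-overlapping observation windows (e.g. 12:00-15:00 on ten days).
segments = [ar1_segment(300) for _ in range(10)]

# Pool lag-1 pairs WITHIN segments only; never pair the last point of
# one segment with the first point of the next.
x_now  = np.concatenate([s[:-1] for s in segments])
x_next = np.concatenate([s[1:]  for s in segments])
r1 = np.corrcoef(x_now, x_next)[0, 1]
print(round(r1, 2))   # close to phi
```

Higher lags work the same way: pair $x_t$ with $x_{t+h\,dt}$ within each segment and drop the pairs that would straddle a gap.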
37,433
Autocorrelation from multiple time series samples
First of all, either way you do it, you are assuming each day is the same as any other from 12 noon to 3 PM. Also, in time series analysis it is not common to have ensembles of the process. But given these assumptions, I think you can treat it like you would independent individual observations. Each day can be viewed as providing an independent estimate of the acf for a set of lags $1$ to $k$. Then the estimates can be averaged and confidence intervals estimated. A complication would seem to be that, for each series, the estimate of $\rho(i)$ is correlated with the estimate of $\rho(j)$ for $i\neq j$.
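A sketch of this averaging scheme in Python (my own function names): each day yields an acf estimate at lags 1..k, and the day-to-day spread gives a rough normal-theory 95% half-width. Note it ignores the within-day correlation between lags mentioned above.

```python
import math
import statistics

def acf(x, max_lag):
    """Sample autocorrelation of a single series at lags 1..max_lag."""
    n = len(x)
    m = sum(x) / n
    denom = sum((v - m) ** 2 for v in x)
    return [sum((x[t] - m) * (x[t + h] - m) for t in range(n - h)) / denom
            for h in range(1, max_lag + 1)]

def averaged_acf(days, max_lag):
    """Average the per-day acf estimates across days; also return a rough
    95% confidence half-width from the between-day spread."""
    per_day = [acf(d, max_lag) for d in days]
    means, half = [], []
    for h in range(max_lag):
        vals = [p[h] for p in per_day]
        means.append(statistics.mean(vals))
        half.append(1.96 * statistics.stdev(vals) / math.sqrt(len(vals)))
    return means, half
```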
37,434
Autocorrelation from multiple time series samples
I have stumbled across the same type of problem, and I know that people have used your first suggestion in a situation where each sample is a set of residuals from one individual (in mixed effects modelling). See this paper. The autocorrelation is merely a general measure of dependence between adjacent sample points. In your situation it might be more interesting to look at the actual dependences between specific time points (e.g., the correlations between adjacent sample points may be large in the first hour but decline thereafter, and/or it may also be interesting to analyse the correlation between, say, the first and last sample). Is it? If you have more than ten experiments (you mentioned that only ten days are analysed) you could compute a correlation matrix of the association between the specific time points. I have written about this (yet again in mixed effects modelling) here. A pre-print of the latter paper is freely available at arxiv.org (Investigations of a compartmental model for leucine kinetics using nonlinear mixed effects models with ordinary and stochastic differential equations). See page 23 for the part on autocorrelation. EDIT: Oops, I just realised that my suggestion was kind of already mentioned above. Sorry for missing that; I wrote my answer offline and couldn't post it in time. In either case it should be mentioned that there is a difference between my suggestion and the one above. In my suggestion you get correlations between specific time points. This may be interesting in many situations and could also be used in the case of non-equidistant data. However, a large number of individuals/experiments/sampling days is needed to estimate the correlations and their standard deviations.
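For the correlation-matrix idea, here is a small Python sketch (names are mine): each day supplies one observation per time point, and correlations between time points are computed across days.

```python
import math

def corr_matrix(days):
    """Correlation between specific time points, estimated across days.
    days is a list of equal-length series; entry [i][j] is the Pearson
    correlation, over days, of time point i with time point j.
    Reliable estimates need many days."""
    t = len(days[0])
    cols = [[d[i] for d in days] for i in range(t)]

    def pearson(xs, ys):
        n = len(xs)
        mx, my = sum(xs) / n, sum(ys) / n
        sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
        sxx = sum((a - mx) ** 2 for a in xs)
        syy = sum((b - my) ** 2 for b in ys)
        return sxy / math.sqrt(sxx * syy)

    return [[pearson(cols[i], cols[j]) for j in range(t)] for i in range(t)]
```

Unlike the lag-based acf, this also works for non-equidistant sampling times.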
37,435
Model-selection for linear mixed models over alternative sets of parameters (nlme function in R)
I would use Akaike’s Information Criterion ($AIC$) for model selection, where: $$ AIC = -2\ln(L)+2k $$ Though a better alternative is often the second-order Akaike’s Information Criterion ($AIC_c$). $AIC_c$ is corrected for small sample size with an additional bias-correction term, because $AIC$ can perform poorly when the ratio of sample size to the number of parameters in the model is small (Burnham and Anderson 2002). $$ AIC_c = -2\ln(L)+2k+\frac{2k(k+1)}{n-k-1} $$ In fact, I would always use $AIC_c$, since the bias-correction term goes to zero as sample size increases. However, there are some types of models where it is difficult to determine sample size (e.g., hierarchical models of abundance; see links to these model types here). $AIC$ or $AIC_c$ can be rescaled to $\mathsf{\Delta}_i=AIC_i-\min AIC$, where the best model will have $\mathsf{\Delta}_i=0$. Further, these values can be used to estimate the relative strength of evidence ($w_i$) for the alternative models, where: $$ w_i = \frac{e^{-0.5\mathsf{\Delta}_i}}{\sum_{r=1}^Re^{-0.5\mathsf{\Delta}_r}} $$ This is often referred to as the "weight of evidence" for model $i$ from the model set. As $\mathsf{\Delta}_i$ increases, $w_i$ decreases, suggesting model $i$ is less plausible. Also, the weights of evidence for the models in a model set can be used in model averaging and multi-model inference.
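These formulas are easy to code; here is a direct Python transcription (function names are my own):

```python
import math

def aicc(log_lik, k, n):
    """Second-order AIC: AICc = -2 ln L + 2k + 2k(k+1)/(n - k - 1)."""
    return -2 * log_lik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def akaike_weights(aic_values):
    """Rescale a list of AIC (or AICc) values to deltas relative to the
    best model, then convert to Akaike weights w_i that sum to one."""
    best = min(aic_values)
    rel = [math.exp(-0.5 * (a - best)) for a in aic_values]
    s = sum(rel)
    return [r / s for r in rel]
```

For a fit from, e.g., nlme's `lme`, `log_lik` would be the maximised log-likelihood, `k` the count of all estimated parameters (fixed effects plus variance components), and `n` the sample size.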
37,436
Generalization of Brownian motion to $\alpha$-stable distributions
My quick answer would be yes, but I am not sure about the scale parameter. You can view a Gaussian random walk as a special case of random walks with stable distributions. All stable distributions have the property that a linear combination of two i.i.d. stable random variables is also stable. (All this is related to a generalized central limit theorem and functional analysis, but that's too much to deal with here.)
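As an illustration (not part of the original answer), a symmetric (β = 0) α-stable random walk can be simulated with the Chambers–Mallows–Stuck method; α = 2 recovers a Gaussian walk, though note the scale: the increments then have variance 2, not 1.

```python
import math
import random

def symmetric_stable(alpha, rng):
    """One draw from a standard symmetric alpha-stable law
    (Chambers-Mallows-Stuck method with beta = 0)."""
    u = rng.uniform(-math.pi / 2, math.pi / 2)
    w = rng.expovariate(1.0)
    if alpha == 1.0:
        return math.tan(u)       # standard Cauchy
    return (math.sin(alpha * u) / math.cos(u) ** (1 / alpha)
            * (math.cos(u - alpha * u) / w) ** ((1 - alpha) / alpha))

def stable_walk(alpha, n, seed=0):
    """Random walk with n iid symmetric alpha-stable increments,
    starting at 0; alpha = 2 gives (scaled) Brownian motion."""
    rng = random.Random(seed)
    path, s = [0.0], 0.0
    for _ in range(n):
        s += symmetric_stable(alpha, rng)
        path.append(s)
    return path
```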
37,437
Linear regression with shot noise
The probability model for such shot noise is $$X \sim \text{Poisson}(\mu),\quad Y|X \sim \text{Normal}(\beta_0+\beta_1 X, \sigma^2).$$ A good estimate of $\mu$ is the mean of $X$ and a good estimate of $(\beta_0, \beta_1)$ is afforded by ordinary least squares, because the values of $Y$ are assumed independent, identically distributed, and normal. The estimate of $\sigma^2$ given by OLS is inappropriate here, though, due to the randomness of $X$. The maximum likelihood estimate is $$s^2 = \frac{S_{xy}^2 - 2 S_x S_y S_{xy} + S_{xx}\left(S_y^2 - S_{yy}\right) + S_x^2 S_{yy}}{S_x^2 - S_{xx}}.$$ In this notation, $S_x$ is the mean $X$ value, $S_{xy}$ is the mean of the products of the $X$ and $Y$ values, etc. We can expect the standard errors of estimation in the two approaches (OLS, which is not quite right, and MLE as described here) to differ. There are various ways to obtain ML standard errors: consult a reference. Because the log likelihood is relatively simple (especially when the Poisson$(\mu)$ distribution is approximated by a Normal$(\mu,\mu)$ distribution for large $\mu$), these standard errors can be computed in closed form if one desires. As a worked example, I generated $12$ $X$ values from a Poisson$(100)$ distribution: 94,99,106,87,91,101,90,102,93,110,97,123 Then, setting $\beta_0=3$, $\beta_1=1/2$, and $\sigma=1$, I generated $12$ corresponding $Y$ values: 47.4662,53.5622,54.6656,45.3592,49.0347,53.8803,48.3437,54.2255,48.4506,58.6761,50.7423,63.9922 The mean $X$ value equals $99.4167$, the estimate of $\mu$. The OLS results (which are identical to the MLE of the coefficients) estimate $\beta_0$ as $1.24$ and $\beta_1$ as $0.514271$. It is no surprise the estimate of the intercept, $\beta_0$, departs from its true value of $3$, because these $X$ values stay far from the origin. The estimate of the slope, $\beta_1$, is close to the true value of $0.5$. The OLS estimate of $\sigma^2$, however, is $0.715$, less than the true value of $1$. 
The MLE of $\sigma^2$ works out to $0.999351$. (It is an accident that both estimates are low and that the MLE is greater than the OLS estimate.) The line is both the OLS fit and the maximum likelihood estimate for the joint Poisson-Normal probability model.
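The OLS part of this example is easy to reproduce; the sketch below (pure Python, my own helper) refits the line to the twelve (X, Y) pairs listed above and recovers the quoted coefficients.

```python
def ols(x, y):
    """Ordinary least squares fit of y = b0 + b1 * x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b1 = (sum((a - mx) * (b - my) for a, b in zip(x, y))
          / sum((a - mx) ** 2 for a in x))
    return my - b1 * mx, b1

# The simulated data from the worked example above
x = [94, 99, 106, 87, 91, 101, 90, 102, 93, 110, 97, 123]
y = [47.4662, 53.5622, 54.6656, 45.3592, 49.0347, 53.8803,
     48.3437, 54.2255, 48.4506, 58.6761, 50.7423, 63.9922]
b0, b1 = ols(x, y)   # intercept near 1.24, slope near 0.514
```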
37,438
Combining LASSO coefficients across imputed datasets
I am by no means an expert, but I found this while looking into this problem for my own work: https://www.biostat.wisc.edu/sites/default/files/tr_217.pdf In a nutshell, they used the group lasso (reference below) on the variables, where a "group" of variables actually refers to the same variable "grouped" across the imputed datasets. ftp://ftp.stat.math.ethz.ch/Manuscripts/buhlmann/lukas-sara-peter.pdf I have also seen it suggested somewhere that you just average the coefficients, zeros included, like any other estimate. I'll post a reference if I find one.
37,439
Bayesian inference on a sum of iid real-valued random variables
Consider the following Bayesian nonparametric analysis. Define $\mathscr{X}=[0,1]$ and let $\mathscr{B}$ be the Borel subsets of $\mathscr{X}$. Let $\alpha$ be a nonzero finite measure over $(\mathscr{X},\mathscr{B})$. Let $Q$ be a Dirichlet process with parameter $\alpha$, and suppose that $X_1,\dots,X_n$ are conditionally i.i.d., given that $Q=q$, such that $\mu_{X_1}(B)=P\{X_1\in B\} = q(B)$, for every $B\in\mathscr{B}$. From the properties of the Dirichlet process, we know that, given $X_1,\dots,X_k$, the predictive distribution of a future observation like $X_{k+1}$ is the measure $\beta$ over $(\mathscr{X},\mathscr{B})$ defined by $$ \beta(B) = \frac{1}{\alpha(\mathscr{X})+k} \left( \alpha(B) + \sum_{i=1}^k I_B(X_i)\right) \, . $$ Now, define $\mathscr{F}_k$ as the sigma-field generated by $X_1,\dots,X_k$, and use measurability and the symmetry of the $X_i$'s to get $$ E\left[ S_n \mid \mathscr{F}_k \right] = S_k + E\left[ \sum_{i=k+1}^n X_i \,\Bigg\vert\, \mathscr{F}_k \right] = S_k + (n-k) E\left[ X_{k+1} \mid \mathscr{F}_k \right] \, , $$ almost surely. To find an explicit answer, suppose that $\alpha(\cdot)/\alpha(\mathscr{X})$ is $U[0,1]$. Defining $c=\alpha(\mathscr{X})>0$, we have $$ E\left[ S_n \mid X_1=x_1,\dots,X_k=x_k \right] = s_k + \frac{n-k}{c+k}\left(\frac{c}{2}+s_k\right) \, , $$ almost surely $[\mu_{X_1,\dots,X_k}]$ (the joint distribution of $X_1,\dots,X_k$), where $s_k=x_1+\dots+x_k$. In the "noninformative" limit of $c\to 0$, the former expectation reduces to $n\cdot (s_k/k)$, which means that, in this case, your posterior guess for $S_n$ is just $n$ times the mean of the first $k$ observations, which is about as intuitive as one could hope.
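The closed-form posterior mean above is a one-liner in code (function name is mine):

```python
def dp_posterior_sum_mean(obs, n, c):
    """Posterior mean of S_n given the first k observations, under a
    Dirichlet-process prior with base measure c * Uniform[0, 1]:
    E[S_n | x_1..x_k] = s_k + (n - k)/(c + k) * (c/2 + s_k)."""
    k, s_k = len(obs), sum(obs)
    return s_k + (n - k) / (c + k) * (c / 2 + s_k)
```

As c → 0 this tends to n · (s_k / k), the noninformative limit noted in the answer.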
37,440
Bayesian inference on a sum of iid real-valued random variables
Forgive the lack of measure theory and abuses of notation in the below... Since this is Bayesian inference, there must be some prior on the unknown in the problem, which in this case is the distribution of $X_1$, an infinite-dimensional parameter taking values in the set of distributions on $[0, 1]$ (call it $\pi$). The data distribution $S_k|\pi$ converges to a normal distribution, so if $k$ is large enough (Berry-Esseen theorem) we can just slap in that normal as an approximation. Furthermore, if the approximation is accurate the only aspect of the prior $p(\pi)$ that matters in practical terms is the induced prior on $(\text{E}_\pi(X_1),\text{Var}_\pi(X_1))=(\mu,\sigma^2)$. Now we do standard Bayesian prediction and put in the approximate densities. ($S_n$ is subject to the same approximation as $S_k$.) $p(S_n|S_k) = \int p(\pi|S_k)p(S_n|\pi,S_k)d\pi$ $p(S_n|S_k) = \int \frac{p(\pi)p(S_k|\pi)}{p(S_k)}p(S_n|\pi,S_k)d\pi$ $p(S_n|S_k) \approx \frac{\int p(\mu,\sigma^2)\text{N}(S_k|k\mu,k\sigma^2)\text{N}(S_n|(n-k)\mu + S_k, (n-k)\sigma^2) d(\mu,\sigma^2)}{\int p(\mu,\sigma^2)\text{N}(S_k|k\mu,k\sigma^2) d(\mu,\sigma^2)}$ For the limits of the integral, $\mu \in [0, 1]$, obviously; I think $\sigma^2 \in [0,\frac{1}{4}]$? Added later: no, $\sigma^2 \in [0,\mu(1-\mu)].$ This is nice -- the allowed values of $\sigma^2$ depend on $\mu$, so info in the data about $\mu$ is relevant to $\sigma^2$ too.
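A crude numerical version of this prediction (the discretisation choices here are mine: a flat prior over the admissible $(\mu, \sigma^2)$ region and the normal approximation for $S_k$; only the posterior predictive mean of $S_n$ is computed):

```python
import math

def normal_pdf(x, mean, var):
    return math.exp(-(x - mean) ** 2 / (2 * var)) / math.sqrt(2 * math.pi * var)

def predictive_mean(s_k, k, n, grid=200):
    """E[S_n | S_k] on a grid: flat prior over 0 < mu < 1 and
    0 < sigma^2 < mu(1 - mu), with S_k | (mu, sigma^2) ~ N(k mu, k sigma^2)
    and E[S_n | mu, sigma^2, S_k] = (n - k) mu + S_k."""
    num = den = 0.0
    for i in range(1, grid):
        mu = i / grid
        vmax = mu * (1 - mu)
        for j in range(1, grid):
            var = vmax * j / grid
            # vmax rescales the cell area so the prior is flat on the region
            w = vmax * normal_pdf(s_k, k * mu, k * var)
            num += w * ((n - k) * mu + s_k)
            den += w
    return num / den
```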
37,441
Bayesian inference on a sum of iid real-valued random variables
Let each $X_i$ belong to distribution family $F$ and have parameters $\theta$. Given $S_k$, we have a distribution on $\theta$: \begin{align} \Pr(\theta \mid S_k) &= \frac1Z \Pr(\theta)\Pr(S_k \mid \theta) \end{align} And our distribution on $S_n$, $n \ge k$, is \begin{align} \Pr(S_n = i \mid S_k) &= \Pr(S_{n-k} = i - S_k \mid S_k) \\ &= \int \Pr(S_{n-k} = i - S_k \mid \theta)\Pr(\theta \mid S_k)d\theta \end{align} (and similarly for $n < k$) Both of these equations have nice forms when $F$ is a distribution in the exponential family that is closed under summation of iid elements, like the normal distribution, the gamma distribution, and the binomial distribution. It also works for their special cases, like the exponential distribution and the Bernoulli distribution. It might be interesting to consider the case where $F$ is the family of scaled (by $\frac1n$) binomial distributions with known number of "trials" $n$, and to take the limit as $n$ goes to infinity.
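For a concrete instance of the nice conjugate case — Bernoulli $X_i$ with a Beta(a, b) prior on $\theta$ — the posterior predictive mean of $S_n$ has a closed form (sketch, my own naming):

```python
def beta_bernoulli_predictive(s_k, k, n, a=1.0, b=1.0):
    """E[S_n | S_k = s_k] when the X_i are Bernoulli(theta) with a
    Beta(a, b) prior: the posterior is Beta(a + s_k, b + k - s_k), so
    each of the remaining n - k draws has predictive mean
    (a + s_k) / (a + b + k)."""
    return s_k + (n - k) * (a + s_k) / (a + b + k)
```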
37,442
Open-sourced pairwise learning models
The R implementation of sofia-ml, RSofia provides a couple of options, rank and query-norm-rank, that allow for pairwise learning. It's a fast SVM implementation out of Google. It's more barebones than some other R packages, but once you get it up and running I've found that it is a nice SVM option.
37,443
Is the variance of the multivariate folded normal distribution known?
There is a section entitled 'Bivariate Half-normal Distribution' in Continuous Multivariate Distributions: Models and Applications by Samuel Kotz, Norman Lloyd Johnson, and N. Balakrishnan. I would be curious to see how this can be generalized to a random vector of any dimension. In fact, the bivariate case appears to be thoroughly treated in this paper: http://www.stat-athens.aueb.gr/~jpan/papers/Panaretos-ApplStatScience2001(119-136)ft.pdf
37,444
Is the variance of the multivariate folded normal distribution known?
I don't know what you mean by folded normal distribution. The distribution of $|X|$ where $X \sim N(0,1)$? The distribution of $|X|$ when $X \sim N(\mu,\sigma^2)$? But, regardless of the interpretation, if you aver that "The mean and variance of the folded normal distribution are known" to you, then rest assured that if $x \sim N(\mu,\Sigma)$ has a multivariate normal distribution, then $x_i \sim N(\mu_i, \Sigma_{i,i})$, and so whatever formulas are known to you as the mean and variance of $|X|$ where $X \sim N(\mu,\sigma^2)$ can also be used for the mean and variance of $|x_i|$, which has a folded normal distribution since $x_i \sim N(\mu_i, \Sigma_{i,i})$. If you know only the mean and variance of $|X|$ when $X \sim N(0,1)$ but not when $X \sim N(\mu,\sigma^2)$, then please edit your question to say so clearly. If you know formulas for the mean and variance of $|X|$ where $X \sim N(\mu,\sigma^2)$, please apply the formulas to each $|x_i|$ since $x_i \sim N(\mu_i, \Sigma_{i,i})$. It would probably help the readers of this forum if you were to type in the formulas for the mean and variance of $|X|$. If you want to know the covariance of $|x_i|$ and $|x_j|$, please edit your question to say so clearly. You have been asked the same question by cardinal also.
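If it helps, the univariate folded-normal formulas applied to each marginal can be checked numerically. Below is a Python sketch (the thread's own code is in R; the $\mu$ and $\sigma$ values here are arbitrary), using the standard closed-form mean and variance of $|X|$ for $X \sim N(\mu,\sigma^2)$ and cross-checking them by direct integration:

```python
import math
from scipy import integrate, stats

def folded_moments(mu, sigma):
    """Closed-form mean and variance of |X| for X ~ N(mu, sigma^2)."""
    mean = (sigma * math.sqrt(2.0 / math.pi) * math.exp(-mu**2 / (2.0 * sigma**2))
            + mu * (1.0 - 2.0 * stats.norm.cdf(-mu / sigma)))
    var = mu**2 + sigma**2 - mean**2
    return mean, var

# Cross-check against direct numerical integration of |x| f(x) and x^2 f(x),
# where f is the N(mu, sigma^2) density.
mu, sigma = 1.5, 0.7
pdf = stats.norm(mu, sigma).pdf
m_num, _ = integrate.quad(lambda x: abs(x) * pdf(x), -20, 20)
m2_num, _ = integrate.quad(lambda x: x**2 * pdf(x), -20, 20)
m, v = folded_moments(mu, sigma)
print(m, v)                      # closed form
print(m_num, m2_num - m_num**2)  # numeric check; should agree closely
```

Applied per coordinate with $\mu_i$ and $\Sigma_{i,i}$, this gives the marginal mean and variance of each $|x_i|$; covariances between $|x_i|$ and $|x_j|$ are a separate matter, as the answer notes.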
37,445
How to analyze GEE with unevenly spaced observations?
When using the geeglm function (in the geepack package), you can specify the order of, and the spacing between, observations with the waves argument.
37,446
How to compute asymptotic confidence intervals for differences in quantiles?
It doesn't sound like you want the difference in quantiles: the parameter you describe is the probability of the interval $[1,2]$. More generally, you can specify any interval endpoints $[x_-, x_+]$ in advance. Given parameters $(\mu, \sigma)$ of the distribution with CDF $F_{(\mu, \sigma)}$, this probability would be written $$\theta(\mu, \sigma) = F_{(\mu, \sigma)}(x_+) - F_{(\mu, \sigma)}(x_-).$$ As such it's just a (nice, differentiable) function of the parameters and can be addressed--at least conceptually--as would any other such function. Specifically, let $\Lambda$ be the log likelihood (a function of $(\mu,\sigma)$) with maximum value $\Lambda_0$ and let $1-\alpha$ be the desired confidence (e.g., $\alpha=0.05$ for a 95% confidence interval). To find this confidence interval you would compute the upper $1-\alpha$ percentile $c$ of a $\chi^2(1)$ distribution (e.g., equal to 3.841 for $\alpha=0.05$) and explore the values attained by $\theta$ within the locus of all $(\mu,\sigma)$ for which $2\Lambda(\mu,\sigma) \ge 2 \Lambda_0 - c$. The range of these values forms a confidence region for $\theta$. For example, I obtained 20 iid variates from a Normal$(1, 1/2)$ distribution (which emulates the problem situation, viewing the values as logarithms). The mean of this sample was $1.016$ and its standard deviation was $0.689$. The maximum value of twice the log likelihood equals $-40.8172$. I chose $x_- = 0$ and $x_+ = \log(2) \approx 0.693$ (which correspond to the interval $[1,2]$ for $\exp(x)$ as in the problem statement). For this interval, the value of $\theta$ is $0.247$. The ML estimates are the coordinates of the dot in the middle where $2 \Lambda$ is largest, because the likelihood is maximized exactly when twice its logarithm is maximized. 
Shaded areas show contours of $2\Lambda(\mu,\sigma)$ in the $(\mu,\sigma)$ plane in intervals of $3.841/4$, so that the region of interest (by descending values of $2\Lambda$) is comprised of the red, orange, yellow, and light yellow areas, terminated by the beginning of the blue area. The thick lines and dashed lines, labeled 0.1, 0.15, 0.2, 0.25, 0.3, 0.35, and 0.4, are contours of $\theta(\mu,\sigma)$. (Notice how the $0.25$ contour passes through the point of maximum likelihood. This means that $0.25$ is the ML estimate of $\theta$. Because the true value of $\theta$ is $0.247$, the ML estimate of $\theta$ is almost exactly right. This was a matter of luck, not design.) Green dots mark where the extreme values of $\theta$ are attained: they equal $0.151$ at $\mu=1.29, \sigma=0.63$ (the right-hand dot) and they equal $0.350$ at $\mu = 0.75, \sigma=0.62$ (the left-hand dot). As the pattern of $\theta$ contours shows, all other values of $\theta$ within the region of interest lie between these two values. Therefore, a 95% CI for $\theta$ is $[0.151, 0.350]$. This figure plots the PDFs of the distributions corresponding to the two extreme values of $\theta$. (For reference, the true PDF is shown in light dashes.) The one on the right (in blue) has the larger mean. It corresponds to the right green dot in the contour plot. Its value of $\theta$ is the blue shaded area beneath it from $0$ to $\log(2)$. The one on the left (in red) has the smaller mean. It corresponds to the left green dot in the contour plot. Its value of $\theta$ is the corresponding area beneath it. Both these PDFs are (just barely) consistent with the data: their likelihoods are within $3.841/2$ of the maximum likelihood. As you move around within the region of interest in the contour, the corresponding PDFs vary. (Indeed, some of them have more extreme means than exhibited by these two and many of them have greater standard deviations.) 
However, none of them has any more or any less probability between $x_-$ and $x_+$ than the two PDFs shown here. In summary, I have described a constrained optimization problem: to find the confidence limits of $\theta$, minimize and maximize $\theta$ within the region $2 \Lambda(\mu,\sigma) \ge 2 \Lambda_0 - c$. When the log likelihood is expensive to compute, this problem can be computationally expensive to solve, but at least it is straightforward.
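The constrained optimization described above can be sketched with a crude grid search. Below is an illustrative Python version (the answer's own computations were done elsewhere); the sample is freshly simulated, and the seed, grid ranges, and resolution are my own choices, so the resulting interval will differ from the one quoted above:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(1.0, 0.5, size=20)    # stand-in for the logged data
xm, xp = 0.0, np.log(2.0)            # interval endpoints on the log scale
c = stats.chi2.ppf(0.95, df=1)       # upper 95th percentile of chi^2(1): 3.841...

# Grid of (mu, sigma) candidates around the ML estimates.
mus = np.linspace(x.mean() - 1.0, x.mean() + 1.0, 400)
sigs = np.linspace(0.05, x.std() + 1.0, 400)
M, S = np.meshgrid(mus, sigs)

# Twice the Normal log likelihood, 2*Lambda(mu, sigma), over the grid:
# 2*Lambda = -n*log(2*pi*sigma^2) - sum((x_i - mu)^2) / sigma^2
LL = (-x.size * np.log(2.0 * np.pi * S**2)
      - np.sum((x[None, None, :] - M[..., None])**2, axis=-1) / S**2)

# theta(mu, sigma) = F(x+) - F(x-)
TH = stats.norm.cdf(xp, M, S) - stats.norm.cdf(xm, M, S)

# Confidence region: all (mu, sigma) with 2*Lambda >= 2*Lambda_0 - c.
mask = LL >= LL.max() - c
lo, hi = TH[mask].min(), TH[mask].max()
print(lo, hi)   # approximate 95% CI for theta
```

The extremes of `TH` over the masked region play the role of the two green dots in the contour plot; a finer grid or a proper constrained optimizer would sharpen the bounds.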
37,447
Longitudinal models in R and WINBUGS or JAGS
Longitudinal and mixed models in BUGS are discussed in Ch. 10 of Bayesian Ideas and Data Analysis. Below is a link to the book's website, which has some example code. http://www.ics.uci.edu/~wjohnson/BIDA/BIDABook.html
37,448
Longitudinal models in R and WINBUGS or JAGS
I'm not sure what you mean by R not having "factor analytic models for covariance matrices" - can you clarify what you'd like to reproduce from SAS? To my knowledge this is feasible with a lot of different packages in R. Regarding antedependence models, there is a book on this very topic that has associated R code and examples, at the first author's website. I'm not sure if WinBUGS will bring you any luck, but I'd start with the aforementioned textbook - it seems to be authoritative on antedependence models. :)
37,449
Longitudinal models in R and WINBUGS or JAGS
I believe, with a slight learning curve, you could use one of the SEM packages in R: lavaan, OpenMX, or sem. I am just learning about SEM and these packages, but it does look to me that lavaan has a formula syntax that's much like other modeling (lm, lmer) in R, and SEM lets you do a lot of things with your covariance structure.
37,450
A name for this distributional condition?
This is almost the condition for the cumulative distribution function to be log-concave, which is a very useful property with many applications. But almost. A function $F(x)$ is log-concave if $$\frac {\partial^2 \ln F(x)}{\partial x^2} \le 0 \Rightarrow F''(x)F(x) - \left[F'(x)\right]^2 \le 0$$ Write $\phi(x)$ in terms of $F(x)$ $$\phi(x) \equiv \frac{F'(x)}{F(x)+xF'(x)}$$ and we want $$\frac {\partial \phi(x)}{\partial x} \le 0 \Rightarrow F''(x)\Big(F(x)+xF'(x)\Big)-F'(x)\Big(F'(x)+F'(x) +xF''(x)\Big) \le 0$$ $$\Rightarrow F''(x)F(x)-2\left[F'(x)\right]^2 \le 0 $$ ...which is not enough for log-concavity, due to the existence of the factor $2$. Assume that the condition is satisfied. If we divide by $[F(x)]^2$ and rearrange we obtain $$\frac {\partial \phi(x)}{\partial x} \le 0 \Rightarrow \frac {\partial^2 \ln F(x)}{\partial x^2} \le \left( \frac{F'(x)}{F(x)}\right)^2 = \left(\frac {\partial \ln F(x)}{\partial x}\right)^2$$
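As a concrete check: the standard normal CDF $\Phi$ is log-concave, so it must also satisfy the weaker condition with the factor $2$. A small Python verification (the grid range is arbitrary), using $F' = \varphi$ and $F''(x) = -x\,\varphi(x)$:

```python
import numpy as np
from scipy import stats

x = np.linspace(-10.0, 10.0, 2001)
F = stats.norm.cdf(x)
F1 = stats.norm.pdf(x)           # F'(x) = phi(x)
F2 = -x * stats.norm.pdf(x)      # F''(x) = phi'(x) = -x * phi(x)

logconcave = F2 * F - F1**2          # <= 0 everywhere: Phi is log-concave
condition  = F2 * F - 2.0 * F1**2    # <= 0 everywhere: the weaker condition

print(logconcave.max(), condition.max())   # both maxima should be <= 0
```

The log-concavity inequality here is the classical bound $x\Phi(x) + \varphi(x) > 0$ in disguise, so the weaker factor-2 condition follows immediately.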
37,451
Sequential Monte Carlo (particle filter) with Metropolis-Hastings weighting
The answer is in the title: this is called sequential Monte Carlo or particle filtering or population Monte Carlo. It is validated in wider generality as an iterated importance sampling scheme where each importance sample is used to generate the following sample. This is for instance covered in Chapter 14 of our book Monte Carlo Statistical Methods. The specific issue of using the whole sequence of simulation is found in the literature, but not in the direct way you propose: using all samples at once with the same weights does not behave nicely when some of the weights are huge (as when one starts with a poor guess). This is covered in the fantastic multiple mixture paper by Owen and Zhou (2000, JASA) and in our more recent adaptive version (when $T$ depends on the iteration $t$ and on the past simulations) called AMIS.
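The weight-degeneracy issue mentioned above can be seen with plain self-normalized importance sampling: under a poor initial proposal a handful of huge weights dominate, which is why naively pooling all iterations with equal treatment of their raw weights behaves badly. A Python toy comparison (the target and the two proposals are invented for illustration and are not from the cited papers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
target = stats.norm(0.0, 1.0)

def snis(proposal, n=100_000):
    """Self-normalized importance sampling estimate of E[X] under the target,
    plus the effective sample size (ESS) of the normalized weights."""
    xs = proposal.rvs(size=n, random_state=rng)
    w = np.exp(target.logpdf(xs) - proposal.logpdf(xs))
    w /= w.sum()
    ess = 1.0 / np.sum(w**2)
    return np.sum(w * xs), ess

est_good, ess_good = snis(stats.norm(0.0, 2.0))   # well-matched proposal
est_bad, ess_bad = snis(stats.norm(6.0, 0.5))     # poor initial guess
print(est_good, ess_good)   # near 0, ESS a sizeable fraction of n
print(est_bad, ess_bad)     # badly biased, ESS collapses
```

Schemes like the Owen-Zhou multiple mixture estimator and AMIS re-weight past samples against a mixture of all proposals precisely to tame this collapse.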
37,452
Why do these 2 approaches to applying mixed models yield different results?
It's not surprising to see such a difference with lmer or lme. A simple model with a random intercept (e.g., (1|id) in your case) sometimes may fail to fully capture the random effects. To see why this happens, let me use a much simpler dataset than yours to demonstrate the subtle difference. With the data 'dat' from the thread, which I copy here:

dat <- structure(list(sex = structure(c(1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L, 1L, 2L), .Label = c("f", "m"), class = "factor"),
  prevalence = c(0, 0.375, 0.133333333333333, 0.176470588235294, 0.1875, 0, 0, 1, 1, 0.5, 0.6, 0.333333333333333, 0.5, 0, 0.333333333333333, 0, 0.5, 0, 0.625, 0.333333333333333, 0.5, 0, 0.333333333333333, 0.153846153846154, 0.222222222222222, 0.5, 1, 0.5, 0, 0.277777777777778, 0.125, 0, 0, 0.428571428571429, 0.451612903225806, 0.362068965517241),
  tripsite = structure(c(1L, 1L, 4L, 4L, 14L, 14L, 5L, 5L, 8L, 8L, 15L, 15L, 6L, 6L, 9L, 9L, 11L, 11L, 16L, 16L, 2L, 2L, 7L, 7L, 10L, 10L, 13L, 13L, 17L, 17L, 3L, 3L, 12L, 12L, 18L, 18L), .Label = c("1.2", "4.2", "5.2", "1.3", "2.3", "3.3", "4.3", "2.4", "3.4", "4.4", "3.5", "5.5", "4.6", "1.9", "2.9", "3.9", "4.9", "5.9"), class = "factor")),
  .Names = c("sex", "prevalence", "tripsite"),
  row.names = c(1L, 2L, 3L, 4L, 9L, 10L, 11L, 12L, 13L, 14L, 17L, 18L, 19L, 20L, 21L, 22L, 23L, 24L, 27L, 28L, 29L, 30L, 31L, 32L, 33L, 34L, 35L, 36L, 38L, 39L, 40L, 41L, 42L, 43L, 45L, 46L), class = "data.frame")

a paired t-test (or a special case of one-way within-subject/repeated-measures ANOVA) would be like your Method 2:

t0 <- with(dat, t.test(prevalence[sex=="f"], prevalence[sex=="m"], paired=TRUE, var.equal=TRUE))
(fstat0 <- t0$statistic^2) # 0.789627

Its lme version corresponding to your Method 1 would be:

a1 <- anova(lme(prevalence~sex, random=~1|tripsite, data=dat, method="REML"))
(fstat1 <- a1[["F-value"]][2]) # 0.8056624

Same thing for the lmer counterpart:

a2 <- anova(lmer(prevalence~sex+(1|tripsite), data=dat))
(fstat2 <- a2[["F value"]][2]) # 0.8056624

Although the difference in this simple example is tiny, it shows that the paired t-test makes a much stronger assumption about the two levels ("f" and "m") of the factor ("sex"), namely that the two levels are correlated, and such an assumption is absent in the above lme/lmer model. The same assumption difference also exists between the two methods in your case. To reconcile the difference, we can continue modeling 'dat' with a random slope (or a symmetric matrix, or even compound symmetry) in lme/lmer:

a3 <- anova(lme(prevalence~sex, random=~sex-1|tripsite, data=dat, method="REML"))
(fstat3 <- a3[["F-value"]][2]) # 0.789627

a31 <- anova(lme(prevalence~sex, random=list(tripsite=pdCompSymm(~sex-1)), data=dat, method="REML"))
(fstat31 <- a31[["F-value"]][2]) # 0.789627

a4 <- anova(lmer(prevalence~sex+(sex-1|tripsite), data=dat))
(fstat4 <- a4[["F value"]][2]) # 0.789627

However, with multiple factors in your case, multiple random slopes (or other random-effects structure specifications) may become unwieldy with lme/lmer, if not impossible.
37,453
Minimum number of observations per variable for linear regression or MARS model
I've often heard of 10 cases per variable as a rule of thumb. It is not clear whether this means that you start at 10 cases with 1 covariate, or at 20 cases (since you also lose a degree of freedom to the intercept). I scanned the indexes of a few of my old stats books and didn't find any reference to a place where this was discussed (although it could be in there somewhere, just not indexed in a way that I could find). I also don't know of any references in the statistical literature, or any statistical justification, for such a rule of thumb. Moreover, I don't see how there could be one, and I think such rules of thumb are worthless. The minimum number of cases is contingent on many things, e.g., the cost of collecting data and your goal (minimum for a test of significance? minimum to achieve a specified level of precision in your parameter estimates? minimum for the prediction of future cases with some level of accuracy? etc.). Since no single number (such as 10 per covariate) could be optimal for all goals, at all costs of gathering more data, and with all levels of resources available for doing so, I argue that there cannot be a statistical justification. I don't know of any rules of thumb regarding splines, but I believe the same arguments imply that any such rule would be just as worthless.
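One way to see why a universal cases-per-variable rule cannot exist: the sample size needed to estimate a regression slope to a given precision depends on the error variance and the spread of the predictor, not on the number of predictors alone. A back-of-envelope Python sketch, assuming the simple-regression approximation se(β̂) ≈ σ / (sd(x)·√n) with the numbers chosen purely for illustration:

```python
import math

def n_for_slope_se(sigma, sd_x, target_se):
    """Approximate n so that se(beta_hat) = sigma / (sd_x * sqrt(n))
    falls to target_se in simple linear regression."""
    return math.ceil((sigma / (sd_x * target_se)) ** 2)

# Same one-covariate model, same precision goal, different noise levels:
print(n_for_slope_se(sigma=1.0, sd_x=1.0, target_se=0.1))   # 100
print(n_for_slope_se(sigma=3.0, sd_x=1.0, target_se=0.1))   # 900
```

Tripling the residual noise multiplies the required n by nine with the covariate count unchanged, so no fixed per-variable count can serve every goal.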
37,454
How to test whether variance explained by first factor of PCA differs across repeated measures conditions?
Just one (maybe silly) idea. Save 1st principal component scores variable for condition A (PC1A) and 1st principal component scores variable for condition B (PC1B). The scores should be "raw", that is, their variances or sum-of-squares equal to their eigenvalues. Then use Pitman's test to compare the variances.
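To make the suggestion concrete: Pitman's test for equal variances of two paired samples exploits the fact that var(x) = var(y) exactly when corr(x + y, x − y) = 0. Below is a small self-contained Python sketch (the simulated scores and the variable names are stand-ins for the saved PC1A/PC1B component scores):

```python
import math
import random

random.seed(1)

def pitman_morgan_t(x, y):
    """Pitman's test for equal variances of two PAIRED samples:
    var(x) == var(y) iff corr(x + y, x - y) == 0, which is tested with a
    t statistic on n - 2 degrees of freedom."""
    n = len(x)
    s = [a + b for a, b in zip(x, y)]   # sums
    d = [a - b for a, b in zip(x, y)]   # differences
    ms, md = sum(s) / n, sum(d) / n
    cov = sum((a - ms) * (b - md) for a, b in zip(s, d))
    r = cov / math.sqrt(sum((a - ms) ** 2 for a in s) *
                        sum((b - md) ** 2 for b in d))
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

# Illustrative paired scores; the second set is deliberately more variable.
pc1a = [random.gauss(0, 1) for _ in range(200)]
pc1b = [0.5 * a + random.gauss(0, 3) for a in pc1a]
t = pitman_morgan_t(pc1a, pc1b)
print(t)  # a large |t| (vs. t with n - 2 df) indicates unequal variances
```

Since raw component scores have variance equal to the eigenvalue, a significant result here corresponds to a difference in the first eigenvalues of the two conditions.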
37,455
How to test whether variance explained by first factor of PCA differs across repeated measures conditions?
Did I understand your question right? You want to test whether there is a statistically significant difference between the two conditions? Perhaps vegan::adonis() is something for you. I don't know if that's exactly what you're looking for. It works on the distance matrix and tests whether distances between conditions are bigger than distances within a condition. For example, in an NMDS you would see a clear separation of the two conditions. Here is some example code:

df <- data.frame(cond = rep(c("A", "B"), each = 100),
                 v1 = jitter(rep(c(20, 100), each = 100)),
                 v2 = jitter(rep(c(0, 80), each = 100)),
                 v3 = jitter(rep(c(40, 5), each = 100)),
                 v4 = jitter(rep(c(42, 47), each = 100)),
                 v5 = jitter(rep(c(78, 100), each = 100)),
                 v6 = jitter(rep(c(10, 100), each = 100)))
# note '=' rather than '<-' inside data.frame(), so the columns get the intended names

# PCA
require(vegan)
pca <- rda(df[ ,-1], scale = TRUE)
ssc <- scores(pca, display = "sites")
ordiplot(pca, type = "n")
points(ssc[df$cond == "A", ], col = "red", pch = 16)
points(ssc[df$cond == "B", ], col = "blue", pch = 16)

# NMDS
nmds <- metaMDS(df[ ,-1], distance = "euclidean")
nmsc <- scores(nmds, display = "sites")
ordiplot(nmds, type = "n")
points(nmsc[df$cond == "A", ], col = "red", pch = 16)
points(nmsc[df$cond == "B", ], col = "blue", pch = 16)

# use adonis to test if there is a difference between the conditions
adonis(df[ ,-1] ~ df[ ,1], method = "euclidean")
## There is a statistically significant difference between the two conditions
37,456
How to test whether variance explained by first factor of PCA differs across repeated measures conditions?
Permutation test

To test the null hypothesis directly, use a permutation test. Let the first PC in condition $A$ explain $a<100\%$ of variance, and the first PC in condition $B$ explain $b<100\%$ of variance. Your hypothesis is that $b>a$, so we can define $c=b-a$ as the statistic of interest, and the hypothesis is that $c>0$. The null hypothesis to reject is that $c=0$. To perform the permutation test, take your $N=200+200$ samples from both conditions, and randomly split them into conditions $A$ and $B$. As the splitting is random, there should be no difference in explained variance after that. For each permutation, you can compute $c$, repeat this process many (say, $10000$) times, and obtain the distribution of $c$ under the null hypothesis of $c_\mathrm{true}=0$. Comparing your empirical value of $c$ with this distribution will yield a $p$-value.

Bootstrapping

To obtain the confidence interval on $c$, use bootstrapping. In the bootstrapping approach, you would randomly select $N=200$ samples with replacement from the existing samples in $A$ and another $N=200$ from $B$. Compute $c$, and repeat it many (again, say, $10000$) times. You are going to obtain a bootstrapped distribution of the $c$ values, and its percentile intervals are going to correspond to the confidence intervals of the empirical value $c$. So you can estimate the $p$-value by looking at what part of this distribution lies below $0$.

The permutation test is a more direct way to test the null hypothesis (and probably relies less on assumptions), but the bootstrap has the added benefit of yielding a confidence interval on $c$.
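The permutation test above can be sketched in a few lines. For self-containment the example below uses 2-D data (so the first-PC variance fraction follows from the closed-form eigenvalues of a 2×2 covariance matrix); the simulated conditions are purely illustrative:

```python
import math
import random

random.seed(2)

def top_pc_fraction(data):
    """Fraction of total variance explained by the first PC of 2-D data,
    using the closed-form eigenvalues of the 2x2 covariance matrix."""
    n = len(data)
    mx = sum(p[0] for p in data) / n
    my = sum(p[1] for p in data) / n
    a = sum((p[0] - mx) ** 2 for p in data) / (n - 1)
    b = sum((p[1] - my) ** 2 for p in data) / (n - 1)
    c = sum((p[0] - mx) * (p[1] - my) for p in data) / (n - 1)
    lam1 = (a + b + math.sqrt((a - b) ** 2 + 4 * c * c)) / 2
    return lam1 / (a + b)   # total variance = trace = a + b

# Condition B is nearly one-dimensional; condition A is isotropic.
cond_a = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
cond_b = []
for _ in range(200):
    x = random.gauss(0, 1)
    cond_b.append((x, 0.9 * x + random.gauss(0, 0.3)))

c_obs = top_pc_fraction(cond_b) - top_pc_fraction(cond_a)

# Null distribution of c: pool the samples and reshuffle the condition labels.
pooled = cond_a + cond_b
null = []
for _ in range(2000):
    random.shuffle(pooled)
    null.append(top_pc_fraction(pooled[:200]) - top_pc_fraction(pooled[200:]))
p_value = sum(v >= c_obs for v in null) / len(null)
print(c_obs, p_value)
```

With six variables you would compute the top eigenvalue of the 6×6 covariance matrix numerically instead, but the permutation logic is identical.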
37,457
How to test whether variance explained by first factor of PCA differs across repeated measures conditions?
This is only an outline of an idea. The proportion of variance is defined as $$\frac{\lambda_1}{\lambda_1+...+\lambda_6},$$ where the $\lambda_i$ are the eigenvalues of the covariance matrix. If we instead use the eigenvalues of the correlation matrix, then $\lambda_1+...+\lambda_6=6$, since the sum of the eigenvalues of a matrix equals its trace, and for correlation matrices the trace is a sum of ones. So if we use the correlation matrices, we need to test hypotheses about the difference of the two maximal eigenvalues of the sample correlation matrices. It is certainly possible to find in the literature the asymptotic distribution of the maximal eigenvalue of a correlation matrix. The problem then reduces to some sort of paired or unpaired t-test.
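To make the fixed-denominator point concrete: for a $p \times p$ correlation matrix the eigenvalues always sum to $p$, and in the two-variable case they even have the closed form $1 \pm |r|$, so the first PC's share of variance is $(1+|r|)/2$. A minimal check (illustrative Python, not part of the original outline):

```python
def first_pc_share_2d(r):
    """Share of variance of the first PC for a 2x2 correlation matrix
    [[1, r], [r, 1]], whose eigenvalues are 1 + |r| and 1 - |r|."""
    lam1, lam2 = 1 + abs(r), 1 - abs(r)
    assert abs((lam1 + lam2) - 2) < 1e-12  # eigenvalues sum to the trace
    return lam1 / (lam1 + lam2)

print(first_pc_share_2d(0.8))  # -> 0.9
print(first_pc_share_2d(0.0))  # -> 0.5 (no dominant direction)
```

So with correlation matrices, comparing proportions of variance really is just comparing the largest eigenvalues.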
37,458
Identifying sequential patterns
You can map the data into a feature space that captures sequence information, using statistics calculated over sliding windows along with cumulative statistics, and use that in a decision tree. A decision tree can then handle both sequential and non-sequential data. This may substantially reduce your data complexity.
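For instance, a raw sequence can be flattened into a fixed-length feature vector of sliding-window and cumulative statistics that any decision-tree learner can consume (the feature names and window size below are arbitrary illustrative choices):

```python
from statistics import mean

def window_features(seq, w=3):
    """Map a raw sequence into fixed features: sliding-window means plus
    cumulative summary statistics.  Ordering information survives in the
    window-based features."""
    windows = [seq[i:i + w] for i in range(len(seq) - w + 1)]
    return {
        "max_window_mean": max(mean(win) for win in windows),
        "min_window_mean": min(mean(win) for win in windows),
        "cum_sum": sum(seq),
        "cum_max": max(seq),
        "last_minus_first": seq[-1] - seq[0],  # crude trend feature
    }

feats = window_features([1, 2, 4, 8, 7, 3])
print(feats)  # one such row per sequence would then be fed to a decision tree
```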
37,459
Identifying sequential patterns
You may try other sequential pattern mining algorithms. For example, the open-source SPMF Java data mining library offers SPADE, but also PrefixSpan, SPAM, CM-SPAM, CM-SPADE, GSP, etc. (by the way, I'm the project founder). To my knowledge, CM-SPADE is usually faster than SPADE. In terms of memory, perhaps SPAM uses less; you could try it.
37,460
What is a vector autoregressive model?
From a purely managerial perspective, VAR is practically the same as linear regression. The main difference is that in VAR you have several dependent variables instead of one. This means that instead of one linear regression you have several. Your interpretation of linear regression remains valid, since each VAR equation is usually estimated using OLS. As in linear regression, so in VAR there are various things you can or cannot do, or should beware of. But these would be best explained if you provided a more precise question.
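The "several regressions" point can be made concrete: each equation of a VAR regresses one variable on the lags of all variables, and the equations can be estimated one at a time by OLS. A self-contained Python sketch (simulated data; a bivariate VAR(1) without intercept, for brevity):

```python
import random

random.seed(0)

# Simulate a bivariate VAR(1): y_t = A y_{t-1} + noise (no intercept).
A = [[0.5, 0.1],
     [0.2, 0.3]]
T = 2000
y = [[0.0, 0.0]]
for _ in range(T):
    y1, y2 = y[-1]
    y.append([A[0][0] * y1 + A[0][1] * y2 + random.gauss(0, 1),
              A[1][0] * y1 + A[1][1] * y2 + random.gauss(0, 1)])

def ols_2reg(x1, x2, target):
    """OLS for target ~ a*x1 + b*x2 (no intercept), via the 2x2 normal equations."""
    s11 = sum(v * v for v in x1)
    s22 = sum(v * v for v in x2)
    s12 = sum(u * v for u, v in zip(x1, x2))
    s1y = sum(u * v for u, v in zip(x1, target))
    s2y = sum(u * v for u, v in zip(x2, target))
    det = s11 * s22 - s12 * s12
    return ((s1y * s22 - s2y * s12) / det, (s11 * s2y - s12 * s1y) / det)

# Each VAR equation is estimated separately, as an ordinary regression
# of one variable on ALL the lagged variables.
lag1 = [row[0] for row in y[:-1]]
lag2 = [row[1] for row in y[:-1]]
A_hat = [ols_2reg(lag1, lag2, [row[0] for row in y[1:]]),
         ols_2reg(lag1, lag2, [row[1] for row in y[1:]])]
print(A_hat)  # each row should be close to the corresponding row of A
```

In practice one would also include an intercept and possibly more lags, but the equation-by-equation OLS structure is the same.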
37,461
What is a vector autoregressive model?
In VAR models, instead of using several dependent variables, we use several independent variables and their effect on one dependent variable.
37,462
How to visualize a GraphML multitree?
Have you tried GraphViz? There is a tutorial available, as well as material on using GraphML with it.
37,463
Calibrated boosted decision trees in R or MATLAB
About R, I would vote for the gbm package; there's a vignette that provides a good overview: Generalized Boosted Models: A guide to the gbm package. If you are looking for a unified interface to ML algorithms, I recommend the caret package, which has built-in facilities for data preprocessing, resampling, and comparative assessment of model performance. Other packages for boosted trees are reported in Table 1 of one of its accompanying vignettes, Model tuning, prediction and performance functions. There is also an example of parameter tuning for boosted trees in the JSS paper, pp. 10-11. Note: I didn't check, but you can also look into Weka (there's an R interface, RWeka).
37,464
How can I estimate the time at which 50% of a binomial variable will have transitioned?
As became evident in comments to the question, the data consist of only four observations of time to bud burst. (It would be a mistake to analyze them as if they were 16 independent values.) They consist of intervals of times rather than exact times: [1,8], [8,16], [16,24], [24,32]. There are several approaches one might take. An appealing, highly general one is to take these intervals at their word: the true time of bud burst could be anything within each interval. We are thus led to represent "uncertainty" in two separate forms: sampling uncertainty (we have a presumably representative sample of the species this year) and observational uncertainty (reflected by the intervals). Sampling uncertainty is handled with familiar statistical techniques: we are asked to estimate the median and we can do so in any number of ways, depending on statistical assumptions, and we can provide confidence intervals for the estimate. For simplicity, let's suppose time to bud burst has a symmetrical distribution. Because it is (presumably) non-negative, this implies it has a variance and also suggests the mean of even just four observations may be approximately normally distributed. Moreover, symmetry implies we can use the mean as a surrogate for the median (which is sought in the original question). This gives us access to standard, simple estimates and confidence interval methods. Observation uncertainty can be handled with principles of interval arithmetic (often called "probability bounds analysis"): perform all calculations using all possible configurations of data consistent with the observations. Let's see how this works in a simple case: estimating the mean. It is intuitively clear that the mean can be no smaller than $(1+8+16+24)/4$ = $12.25$, achieved by using the smallest value in each interval, and also that the mean can be no greater than $(8+16+24+32)/4$ = $20$. 
We conclude: $$\text{Mean} = [12.25, 20].$$ This represents an entire interval of estimates: an appropriate result of a computation with interval inputs! A $1-\alpha$ upper (one-sided) confidence limit of the mean of four values $\mathbf{x} = (x_1, x_2, x_3, x_4)$ is computed from their mean $m$ and sample standard deviation $s$ with the Student t-distribution as $$\text{ucl}(\mathbf{x}, \alpha) = m + t_{n-1}(\alpha) s / \sqrt{n}.$$ Unlike the calculation of the mean, it is no longer generally the case that the interval of ucl's is bounded by the ucl's of the limiting values. Indeed, note that the ucl of the lower interval limits, $\text{ucl}((1,8,16,24), .025)$, equals $28.0758$, whereas $\text{ucl}((8, 11.676, 16, 24), .025) = 25.8674$ is smaller yet. By maximizing and minimizing the ucl among all possible combinations of values consistent with the observations, we find (for example) that $$\text{ucl}(\text{data},.025) = [25.8, 39.3]$$ (that's an interval of numbers representing an interval-valued ucl, not a confidence interval!) and, for the lower confidence limit, $$\text{lcl}(\text{data},.025) = [0, 6.2].$$ (These values have been rounded outwards. The $0$ is a negative value that was truncated to $0$ on the premise that the median bud time cannot be negative.) In words, we might say that "These observations are consistent with values that, had they been precisely measured, could result in an upper 2.5% confidence limit of the median as high as 39.3 days, but no higher. They are consistent with values (which might differ from the first) that would result in a lower 2.5% confidence limit as low as 0." What one is to make of this is a matter for individual contemplation and depends on the application. If one wants to be reasonably sure that bud burst occurs before 40 days, then this result gives some satisfaction (conditional on the assumptions about bud burst distribution and independence of the observations). 
If one wants to estimate bud burst to the nearest day, then clearly more data are needed. In other circumstances, this statistical conclusion in terms of interval-valued confidence limits may be frustrating. E.g., how confident can we be that bud burst occurs in 50% of specimens before 30 days? It's hard to say, because the answers will be intervals. There are other ways to handle this problem. I especially favor using maximum likelihood methods. (To apply them here, we would need to know more about how the interval cutpoints were established. It matters whether they were determined independently of the data or not.) The present question appears to be a good opportunity to introduce interval-based methods because they do not seem to be well known, even though in certain disciplines (risk assessment and analysis of algorithms) they have been warmly advocated by some people.
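The interval computations can be reproduced with a brute-force sketch (illustrative Python; the $t_3(0.975)$ quantile is hardcoded, and the grid resolution is an arbitrary choice, so the ucl extremes are approximations):

```python
import math
from itertools import product
from statistics import mean, stdev

T975_DF3 = 3.182446305  # Student t quantile t_{3}(0.975)

def ucl(xs):
    """Upper 97.5% confidence limit of the mean of the four values xs."""
    return mean(xs) + T975_DF3 * stdev(xs) / math.sqrt(len(xs))

intervals = [(1, 8), (8, 16), (16, 24), (24, 32)]

# Interval-valued mean: its extremes occur at the interval endpoints.
mean_lo = mean(lo for lo, hi in intervals)
mean_hi = mean(hi for lo, hi in intervals)

# The ucl is not monotone in each observation (an interior point can minimize
# it), so search a grid over each interval, endpoints included, rather than
# only the 2^4 corners.
grids = [[lo + (hi - lo) * k / 16 for k in range(17)] for lo, hi in intervals]
ucls = [ucl(xs) for xs in product(*grids)]
print(mean_lo, mean_hi)       # the interval-valued mean
print(min(ucls), max(ucls))   # approximate extremes of the interval-valued ucl
```

Because the ucl is a convex function of the four observations, its maximum is attained at a corner of the box (which the grid contains exactly), while its minimum can sit in the interior, which is why a grid rather than a corner search is used.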
37,465
How can I estimate the time at which 50% of a binomial variable will have transitioned?
Here is a simple approach that does not use logistic regression, but does attempt to use the suggestions above. Calculation of the summary stats assumes, perhaps naively, that the date is normally distributed. Please pardon the inelegant code.

Write a function to estimate the day of bud break for each individual: use the day of year halfway between the last observation of 0 and the first observation of 1 for each individual.

budburst.day <- function(i){
  data.subset <- subset(testdata, subset = id == i, na.rm = TRUE)
  y1 <- data.subset$day[max(which(data.subset$obs == 0))]
  y2 <- data.subset$day[min(which(data.subset$obs == 1))]
  y <- mean(c(y1, y2), na.rm = TRUE)
  if(is.na(y) | y < 0 | y > 180) y <- NA
  return(y)
}

Calculate summary statistics:

# calculate mean
mean(unlist(lapply(1:4, budburst.day)))
[1] 16.125
# calculate SE = sd/sqrt(n)
sd(unlist(lapply(1:4, budburst.day)))/2
[1] 5.06777
37,466
How can I estimate the time at which 50% of a binomial variable will have transitioned?
We know that the $t_1$ transition time (from state 0 to state 1) of subject id=1 was between two boundaries: $24<t_1<32$. An approximation is to assume that $t_1$ may have taken values within this range with uniform probability. Resampling the $t_i$ values we can get an approximate distribution of $\text{median}(t_i)$:

t = replicate(10000, median(sample(c(runif(1, 24, 32),  # id=1
                                     runif(1, 1, 8),    # id=2
                                     runif(1, 8, 16),   # id=3
                                     runif(1, 16, 24)), # id=4
                                   replace=TRUE)))
c(quantile(t, c(.025, .25, .5, .75, .975)), mean=mean(t), sd=sd(t))

Result (repeated):

     2.5%       25%       50%       75%     97.5%      mean        sd
 4.602999 11.428310 16.005289 20.549056 28.378774 16.085808  6.243129
 4.517058 11.717245 16.084075 20.898324 28.031452 16.201022  6.219094

Thus an approximation with 95% confidence interval of this median is 16 (5 – 28). EDIT: See whuber's comment on the limitation of this method when the number of observations is small (including n=4 itself).
37,467
How can I estimate the time at which 50% of a binomial variable will have transitioned?
You could use a discrete-time hazard model fit with logistic regression (using a person-period data set). See Applied Longitudinal Data Analysis (software and book chapters 10-12). Allison also discusses this approach. Your data set is tiny, though.
37,468
How can I estimate the time at which 50% of a binomial variable will have transitioned?
Assuming that you will have more data of the same structure you will be able to use the actuarial (life table) method to estimate median survival.
37,469
Introductory statistics video courses for social scientists
http://www.statisticslectures.com/ - I think, this is a really high quality resource. Explanations are clear, concise and very informative.
37,470
Introductory statistics video courses for social scientists
The Udacity course *Introduction to Statistics (st101): Making Decisions Based on Data* was really good. I learned a lot and it's focused on real-world examples. They have an introductory video here: http://www.udacity.com/overview/Course/st101/CourseRev/1
37,471
Multiple regression with small data sets
As you want to select a few predictors from your data set, I would suggest a simple linear regression with an $L_1$ penalty, i.e. the LASSO (penalized linear regression). Your case is well suited for regression with a LASSO penalty given your sample size, $n = 50$, and the number of predictors, $p=30$. Changing the tuning parameter will control the number of predictors you select. If you can give details about the distribution of your variables, I can be more specific. I don't use SPSS, but this can be done easily in R using the glmnet function in the package of the same name. If you look in the manual, it contains a generic example (the very first one, for the gaussian case) which will solve your problem. I am sure a similar solution exists in SPSS.
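The glmnet recipe above can also be sketched in Python; here is a hedged analogue using scikit-learn's Lasso on synthetic data (the data, the true coefficients, and the alpha grid are all invented for illustration):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n, p = 50, 30                       # sample size and predictor count from the question
X = rng.normal(size=(n, p))
beta = np.zeros(p)
beta[:3] = [2.0, -1.5, 1.0]         # only 3 of the 30 predictors truly matter
y = X @ beta + rng.normal(size=n)

# A larger alpha (the L1 penalty weight) keeps fewer predictors
for alpha in [0.05, 0.3, 1.0]:
    fit = Lasso(alpha=alpha).fit(X, y)
    print(alpha, np.flatnonzero(fit.coef_))
```

Sweeping the penalty like this is what glmnet's cross-validation helper automates when it picks the tuning parameter for you.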
37,472
Statistical test for trend (continuous variable) in Stata or R
It seems that your problem can be stated as a change-point problem. R packages dealing with this type of problem are segmented and strucchange. Since you want to look into changes in the time trend (and time trends always need special treatment in linear regression), I suggest differencing your hemoglobin level data and then testing whether there is a change in mean. Look also into the answers to this question.
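As a rough illustration of the "difference, then test for a change in mean" idea, here is a hand-rolled Python sketch (not the segmented/strucchange machinery; the data and the breakpoint scan are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic hemoglobin-like series: gentle trend, then a steeper decline after t = 60
t = np.arange(100)
level = 14 - 0.01 * t - 0.2 * np.clip(t - 60, 0, None) + rng.normal(0, 0.1, 100)

d = np.diff(level)                 # differencing turns a trend change into a mean change

# Scan candidate breakpoints for the largest two-sample t statistic in the diffs
best_k, best_t = None, 0.0
for k in range(10, len(d) - 10):   # keep at least 10 points on each side
    a, b = d[:k], d[k:]
    se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    tstat = abs(a.mean() - b.mean()) / se
    if tstat > best_t:
        best_k, best_t = k, tstat
print("estimated change point near t =", best_k)
```

The dedicated packages add proper inference (confidence intervals, multiple breaks), which this scan does not attempt.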
37,473
Two-sample permutation Kolmogorov-Smirnov tests
The answer depends on the nature of the data generation process and on the alternative hypothesis you have in mind. Your test is a kind of unweighted chi-square. Because of this lack of weighting, changes that principally affect the less-populated categories will be difficult to detect. For example, your test is going to be much less powerful than the chi-square test for a uniform shift in location, which is detected primarily by noticing that almost all the probability in one tail gets shifted into the other tail. To illustrate, suppose your categories are integer ranges $[i, i+1)$ indexed by $i$ and you are observing normal variates of unit variance but unknown mean. 100 observations of a standard normal variate, say, will mainly occupy categories $-2$ through $1$, although you can expect a few to occupy categories $-3$ and $2$. Even for a whopping big shift of $5$ standard errors (i.e., a change in mean of $5/\sqrt{100} = 0.5$), the power of your K-S-like test is only about 50% (when $\alpha = 0.05$). It is difficult to conceive of a setting where this test will be more powerful than the chi-square test. If you think you are in such a situation, perform some simulations to find out what the power is and how it compares to the standard alternative tests.
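Following the closing suggestion, here is a small Python simulation (an invented setup) estimating the power of the standard, unbinned two-sample Kolmogorov-Smirnov test against the same half-SD shift with n = 100 per group; the binned, unweighted variant discussed above should do worse:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
n, shift, reps = 100, 0.5, 500

# Estimate power: fraction of simulated datasets where KS rejects at alpha = 0.05
rejections = 0
for _ in range(reps):
    x = rng.normal(0.0, 1.0, n)
    y = rng.normal(shift, 1.0, n)
    if ks_2samp(x, y).pvalue < 0.05:
        rejections += 1
power = rejections / reps
print("estimated power:", power)
```

The same loop, with your own test statistic substituted for ks_2samp, is how you would compare the binned permutation version against the chi-square alternative.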
37,474
Determination of effect size for a repeated measures ANOVA power analysis
Assuming you are going to average the first 12 months to form a baseline measure and the second 12 months to form a follow-up measure, your problem reduces to a repeated-measures t-test.

G*Power
You might want to check out the following menu in G*Power 3: Tests - Means - Two Dependent Groups (matched pairs). Use A priori, $\alpha=.05$, Power = 0.90. Use the Determine button to determine effect size. This requires that you can estimate time 1 and 2 means, SDs, and the correlation between time points. If you know nothing about the domain, based on my experience in psychology, I'd start with something like M1 = 0, SD1 = 1, SD2 = 1, correlation = .60. This means that M2 is basically a between-subjects Cohen's d. You could then examine a few different values of M2 such as 0.2, 0.3, ..., 0.5, ..., 0.8, etc. Cohen's rules of thumb suggest 0.2 is small, 0.5 is medium, and 0.8 is large.

R
UCLA has a tutorial on doing a power analysis on a repeated-measures t-test using R.

Side point
As a side point, you might want to consider having a control group.
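The G*Power recipe above can be mirrored in code. Here is a hedged Python sketch using statsmodels: the conversion from the two time-point summaries to a paired effect size $d_z$ is the standard formula, but the input values are just the guesses suggested above:

```python
import numpy as np
from statsmodels.stats.power import TTestPower

# Paired effect size: d_z = (M2 - M1) / sqrt(SD1^2 + SD2^2 - 2*r*SD1*SD2)
m1, sd1, sd2, r = 0.0, 1.0, 1.0, 0.60

for m2 in [0.2, 0.5, 0.8]:         # small / medium / large between-subjects d
    dz = (m2 - m1) / np.sqrt(sd1**2 + sd2**2 - 2 * r * sd1 * sd2)
    n = TTestPower().solve_power(effect_size=dz, alpha=0.05, power=0.90,
                                 alternative='two-sided')
    print(f"M2 = {m2}: d_z = {dz:.3f}, n = {np.ceil(n):.0f} subjects")
```

With correlation .60, each between-subjects d is inflated by a factor of $1/\sqrt{2(1-r)} \approx 1.12$, which is why the required n is smaller than for an independent-groups design.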
37,475
Estimating parameters of sum-stable RV via L-estimators
The Lévy alpha-stable distribution has four parameters. Each of them has a quantile-based sample equivalent:
$\mu$, the location parameter, can be estimated by the median. This is a high-efficiency alternative (ARE $\approx 0.85$).
$\gamma$, the scale parameter, can be estimated by the median absolute deviation (or more efficiently yet by the Qn estimator (1), with ARE similar to that of the median).
$\beta$, the skew parameter, can be estimated by the $S_k$ estimator, with $S_k=(Q_x(\frac{3}{4})-2Q_x(\frac{1}{2})+Q_x(\frac{1}{4}))(Q_x(\frac{3}{4})-Q_x(\frac{1}{4}))^{-1}$, where $Q_x(\tau)$ is the $\tau$-th quantile of $x$.
$\alpha$, the tail parameter, can be estimated by Moors's quantile-based kurtosis estimator (2).
List of references:
(1) P.J. Rousseeuw, C. Croux (1993), "Alternatives to the Median Absolute Deviation", JASA, 88, 1273-1283.
(2) J.J.A. Moors (1988), "A Quantile Alternative for Kurtosis", Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 37, No. 1, pp. 25-32.
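For concreteness, here is a Python sketch of the four quantile-based statistics, evaluated on a standard Cauchy sample (the $\alpha=1$, $\beta=0$ stable case); the mapping from these raw statistics back to the stable parameters (e.g. a McCulloch-style lookup) is not shown:

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.standard_cauchy(100_000)   # heavy-tailed sample; moments do not exist

q = lambda p: np.quantile(x, p)    # sample quantile helper

loc   = q(0.50)                                    # location: median
scale = np.median(np.abs(x - loc))                 # scale: MAD
skew  = (q(0.75) - 2 * q(0.50) + q(0.25)) / (q(0.75) - q(0.25))   # S_k
# Moors kurtosis, built from the octiles
kurt  = ((q(7/8) - q(5/8)) + (q(3/8) - q(1/8))) / (q(6/8) - q(2/8))
print(loc, scale, skew, kurt)
```

For this sample the statistics should land near 0, 1, 0 and 2 respectively; a normal sample would give a Moors kurtosis near 1.23, so the gap is what signals a small tail index $\alpha$.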
37,476
If the curse of dimensionality exists, how does embedding search work?
The origin of the vector space model is as follows:

"The idea that the meaning of a word might be modeled as a point in a multidimensional semantic space came from psychologists like Charles E. Osgood, who had been studying how people responded to the meaning of words by assigning values along scales like happy/sad or hard/soft. Osgood et al. (1957) proposed that the meaning of a word in general could be modeled as a point in a multidimensional Euclidean space, and that the similarity of meaning between two words could be modeled as the distance between these points in the space."

For the question of how embedding search works, there are two methods: 1) you have embedding A and you compute the cosine distances between A and all the embeddings in a corpus, then rank the embeddings by those distances to find the nearest ones; or 2) you use approximate nearest neighbor search with FAISS or ScaNN. Why cosine? Because it is the normalized dot product; the plain dot product favors long vectors. An embedding is the result of one of the two vector semantic models: sparse vector models and dense vector models. Embeddings are obtained from dense vector models, while the sparse vector models include the word-context and term-term matrix. We can also use distances between sparse vectors to measure semantic similarities/associations. Reference: Speech and Language Processing: An Introduction to Natural Language Processing.
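Method 1 (exact cosine ranking) is a few lines of numpy; everything here is synthetic toy data standing in for real embeddings:

```python
import numpy as np

rng = np.random.default_rng(4)
corpus = rng.normal(size=(1000, 64))               # 1000 embeddings, 64-dim (toy stand-in)
query  = corpus[42] + 0.01 * rng.normal(size=64)   # near-duplicate of row 42

# Cosine similarity = dot product of L2-normalized vectors
norm = lambda v: v / np.linalg.norm(v, axis=-1, keepdims=True)
sims = norm(corpus) @ norm(query)

top5 = np.argsort(-sims)[:5]       # rank by similarity, best first
print(top5)
```

Libraries like FAISS or ScaNN replace the exhaustive matrix product with an approximate index, which is what makes method 2 scale to millions of vectors.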
37,477
Can anyone point me towards tutorials describing how to use the Kalman filter for forecasting?
Unfortunately, the Kalman filter methodology is a fairly advanced topic in econometrics, hence it is quite difficult to find simple examples, both because it is a complex topic that requires a deeper knowledge of the maths behind it, and because it is easier to make tutorials of simple things. However, here you can find a quite simple example by Maitra, "State Space Model and Kalman Filter for Prediction" in R on Kalman filter for DLM, that is a generalization of ARIMA with external regressors. Nevertheless, since you are interested in time series forecasting and estimation, I suggest you to look at the forecast package in R by Professor Rob J. Hyndman (there should also be a Python version), that allows to estimate various time series models in state-space forms, and maybe the book "Forecasting: Principles and Practice" by Rob J. Hyndman and George Athanasopoulos.
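For a taste of what those references build up to, here is a minimal Python sketch of the simplest forecasting state-space model, the local level (random walk plus noise) model, with the noise variances assumed known rather than estimated by maximum likelihood as the DLM/forecast tooling would do:

```python
import numpy as np

rng = np.random.default_rng(7)
T = 200
level = np.cumsum(rng.normal(0, 0.5, T))      # latent random walk
y = level + rng.normal(0, 1.0, T)             # noisy observations

q, r = 0.25, 1.0        # state and observation noise variances (assumed known)
m, p = 0.0, 10.0        # initial state mean and variance (roughly diffuse)
filtered = np.empty(T)
for t in range(T):
    p = p + q                         # predict: variance grows by the state noise
    k = p / (p + r)                   # Kalman gain
    m = m + k * (y[t] - m)            # update toward the new observation
    p = (1 - k) * p
    filtered[t] = m

# For this model the one-step-ahead forecast is just the last filtered level
print("forecast:", filtered[-1])
```

ARIMA models with regressors fit into the same predict/update recursion, only with a larger state vector.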
37,478
Can anyone point me towards tutorials describing how to use the Kalman filter for forecasting?
The best I found till now is this open source interactive book: Kalman and Bayesian Filters in Python Hope this helps !
37,479
Intuitive understanding of instrumental variables for natural experiments
I interpret the background to be: you are interested in evaluating the effect of some (endogenous) "treatment" on some "outcome," and we assume that there is a valid "instrumental variable" for "treatment."

so should I think intuitively as if it is to some degree a random experiment on a subset of units?

Basically, yes. A valid instrument should be very much like a random experiment as follows: With a valid experiment, treatment is randomly assigned, therefore treatment is independent of potential outcomes. With a valid instrument, the instrument is randomly assigned, therefore the instrument is independent of potential outcomes (and potential treatments). Generally when you use instrumental variables, you are no longer estimating the effect of "treatment" on the "outcome." Instead, you are estimating the effect of "treatment" on the "outcome" for those units whose "treatment" can be changed by the "instrument." The italicized part is the subset.

Is the logic as follows: by using an instrument, you are now comparing the outcomes of those who received higher levels of treatment because they had higher exposure to the instrument to those who received lower levels of treatment because they had lower exposure to the instrument, but these latter units would have received higher treatment had they been more exposed to the instrument?

Regarding the logic, for simplicity assume a binary instrument. You are comparing the outcomes of those exposed to the instrument to the outcomes of those not exposed to the instrument. You then divide this comparison by the difference between the treatments of those exposed to the instrument and the treatments of those not exposed to the instrument. This ratio estimates the effect of "treatment" on the "outcome" for those units whose "treatment" can be changed by the "instrument."
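That ratio is the Wald estimator. A small Python simulation (with an invented data-generating process and, for simplicity, a constant treatment effect) shows it recovering the effect where the naive comparison fails:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 100_000

z = rng.binomial(1, 0.5, n)              # binary instrument, randomly assigned
u = rng.normal(size=n)                   # unobserved confounder
# Treatment: more likely when z = 1, but also driven by the confounder u
d = (0.8 * z + u + rng.normal(size=n) > 0.5).astype(int)
y = 2.0 * d + u + rng.normal(size=n)     # true treatment effect = 2

# Wald estimator: outcome difference divided by treatment difference across z
itt  = y[z == 1].mean() - y[z == 0].mean()    # effect of instrument on outcome
take = d[z == 1].mean() - d[z == 0].mean()    # effect of instrument on treatment
wald = itt / take
print("naive:", y[d == 1].mean() - y[d == 0].mean(), "wald:", wald)
```

The naive treated-vs-untreated contrast is biased upward by the confounder, while the Wald ratio lands near 2; with heterogeneous effects it would instead recover the effect for the instrument-movable (complier) subset discussed above.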
37,480
Error Bars for Histogram with Uncertain Data
I thought about it some more, and I have a couple of ideas.

(1) About measurement uncertainty: from what you said, it's big enough to take into account. I agree with the formula for qi -- this is just the mass of the distribution for x[i] which falls into B[k]. From that, it looks to me that the mean of the proportion of x which falls into B[k] (let's call that q(B[k])) is the average of those bits over all the data, i.e., q(B[k]) = (1/N)*sum(qi, i, 1, N). Then the height of histogram bar k is q(B[k]), and its variance is q(B[k])*(1 - q(B[k]))/N. So I disagree about the variance -- I think the summation over i should be inside q in the q*(1 - q) term, not outside. It occurs to me that you'll want to ensure that the q(B[k]) sum to 1 -- with the averaging above that is guaranteed by construction, as long as the bins cover essentially all of the mass. In any event you'll want to verify that. EDIT: Also, as the measurement error becomes smaller and smaller, you should find that q(B[k]) converges to the simple n[k]/sum(n[k]) estimate.

(2) About prior information about nonempty bins: I recall that adding a fixed number to the numerator and denominator in n[k]/n, i.e., (n[k] + m[k])/(n + sum(m[k])), is equivalent to assuming a prior over the bin proportion, with the prior mean being m[k]/sum(m[k]). As you can see, the larger m[k], the stronger the influence of the prior. (This business about the prior count is equivalent to assuming a conjugate prior for the bin proportion -- "conjugate prior beta binomial" is a topic you can look up.) Since q(B[k]) is not just a proportion of counts, it's not immediately clear to me how to incorporate the prior count. Maybe you need (q(B[k]) + m[k])/Z, where Z is whatever makes the adjusted proportions sum to 1. However, I don't know how hard you should try to fix up the bin proportions. You were saying you don't have enough prior information to pick a parametric distribution -- if so, maybe you also don't have enough to make assumptions about bin proportions. That's a kind of higher-level question you can consider. Good luck and have fun, it seems like an interesting problem.
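Here is a Python sketch of the q(B[k]) computation, assuming each observation's measurement error is Gaussian with known SD and averaging over observations so the soft proportions sum to (almost) 1; the data and bin layout are simulated stand-ins for the real measurements:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(6)
x = rng.normal(0, 1, 200)          # observed values
s = np.full(200, 0.3)              # known measurement SD for each observation
edges = np.arange(-4, 4.5, 1.0)    # bin edges: B[k] = [edges[k], edges[k+1])

# q_i for bin k = mass of N(x_i, s_i^2) inside the bin; average over i
lo = norm.cdf((edges[:-1][None, :] - x[:, None]) / s[:, None])
hi = norm.cdf((edges[1:][None, :]  - x[:, None]) / s[:, None])
qB = (hi - lo).mean(axis=0)        # soft bin proportions, one per bin

print(qB, qB.sum())                # sum is 1 minus the mass beyond the outer edges
```

As the measurement SDs shrink toward zero each row of (hi - lo) collapses to a one-hot indicator, and qB reduces to the ordinary n[k]/N histogram, matching the EDIT remark above.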
Error Bars for Histogram with Uncertain Data
I thought about it some more, and I have a couple of ideas. (1) About measurement uncertainty: from what you said, it's big enough to take into account. I agree with the formula for qi -- this is just
Error Bars for Histogram with Uncertain Data

I thought about it some more, and I have a couple of ideas.

(1) About measurement uncertainty: from what you said, it's big enough to take into account. I agree with the formula for qi -- this is just the mass of the distribution for x[i] which falls into B[k]. From that, it looks to me that the mean of the proportion of x which falls into B[k] (let's call that q(B[k])) is the sum of those bits over all the data, i.e., q(B[k]) = sum(qi, i, 1, N). Then the height of histogram bar k is q(B[k]), and its variance is q(B[k])*(1 - q(B[k])). So I disagree about the variance -- I think the summation over i should be inside q in variance = q*(1 - q), not outside. It occurs to me that you'll want to ensure that the q(B[k]) sum to 1 -- maybe that's guaranteed by construction. In any event you'll want to verify that.

EDIT: Also, as the measurement error becomes smaller and smaller, you should find that q(B[k]) converges to the simple n[k]/sum(n[k]) estimate.

(2) About prior information about nonempty bins: I recall that adding a fixed number to the numerator and denominator in n[k]/n, i.e., (n[k] + m[k])/(n + sum(m[k])), is equivalent to assuming a prior over the bin proportion, with the prior mean being m[k]/sum(m[k]). As you can see, the larger m[k], the stronger the influence of the prior. (This business about the prior count is equivalent to assuming a conjugate prior for the bin proportion -- "conjugate prior beta binomial" is a topic you can look up.) Since q(B[k]) is not just a proportion of counts, it's not immediately clear to me how to incorporate the prior count. Maybe you need (q(B[k]) + m[k])/Z where Z is whatever makes the adjusted proportions sum to 1. However, I don't know how hard you should try to fix up the bin proportions. You were saying you don't have enough prior information to pick a parametric distribution -- if so, maybe you also don't have enough to make assumptions about bin proportions.

That's a kind of higher-level question you can consider. Good luck and have fun, it seems like an interesting problem.
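A minimal numeric sketch of point (1), assuming gaussian measurement errors (the function name and its arguments are illustrative, not from the thread):

```python
import numpy as np
from scipy.stats import norm

def uncertain_histogram(x, sigma, edges):
    """Histogram of measurements x[i] with gaussian uncertainty sigma[i].

    q[i, k] is the mass of the distribution for x[i] that falls into
    bin B[k]; bin heights are the summed masses, normalised so the
    proportions q(B[k]) sum to 1 (assuming the edges cover the data).
    """
    x = np.asarray(x)[:, None]                       # shape (N, 1)
    s = np.asarray(sigma)[:, None]
    cdf = norm.cdf(edges[None, :], loc=x, scale=s)   # shape (N, K+1)
    q = np.diff(cdf, axis=1)                         # q[i, k], shape (N, K)
    heights = q.sum(axis=0)                          # summation over i is inside q
    return heights / heights.sum()

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=300)
sigma = np.full_like(x, 0.2)
edges = np.linspace(-4, 4, 17)
p = uncertain_histogram(x, sigma, edges)
var = p * (1 - p)   # the variance formula proposed in the answer
```

As the `sigma` values shrink toward zero, `p` converges to the plain `n[k]/sum(n[k])` histogram, as noted in the EDIT above.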
37,481
Error Bars for Histogram with Uncertain Data
I have a similar problem and I have a solution in mind, though it's more complicated than I'd like, which is how I stumbled upon this question: seeking an easier answer. With that preamble out of the way, I'll share the solution I had in mind. And I am sorry that this is coming over a year after your original post; it is almost certainly no longer relevant.

To begin with, you do not have a model for the underlying probability distribution. There are several ways you could infer one; the one we'll discuss is a technique called kernel density estimation, which essentially means you put a gaussian (or other kernel function) at each observation point, then sum all of the gaussians up. The variance of each gaussian is related to a parameter called the bandwidth, and there are some algorithms you may use to come up with a "good" bandwidth, but it's a hyperparameter, and it's mostly guesswork. There's also a method of using one of these guesses as input to another round of kernel density estimation, this time using the frequency of the points in the neighborhood to automatically adjust the bandwidth of each input point. This is called "variable bandwidth" kernel density estimation.

However, your observations also have a significant associated uncertainty. You'll need to convolve each kernel with the gaussian from the measurement uncertainty, and then convolve again during the variable bandwidth stage. (Because convolving a normalized gaussian with another normalized gaussian is as simple as adding variances, I recommend using a gaussian kernel, though the specific kernel you use is irrelevant with enough data points.) (That being said, I am not certain, and I haven't fully thought through the consequences of convolving the measurement uncertainty at both stages vs. only one or the other. However, it seems to me that both stages is the correct call; n.b. if we consider each observation to be several hundred observations (without associated uncertainty) distributed according to the original observation's uncertainty, then this is roughly equivalent to the convolution (and exactly equivalent in the limit as the number of samples goes to infinity), and in this concrete case is equivalent to convolving at both stages.)

This should give you a PDF of the resulting distribution. You may then rebin this density, using the total weight of the PDF contained in each bin along with the total count of all observations, to give error bars for each bin. (In this case, you will also have zero error on any bin with zero value; however, because of the use of gaussians, no bin will be exactly zero, and any bin with near-zero value after using this technique is almost certain to actually have near-zero weight.)

Variable Bandwidth Kernel Density Estimation

You also mentioned that you only have a few hundred points; I, on the other hand, have a few tens of thousands, so I want a fast method for summing gaussians. This is given in the paper "Improved fast Gauss transform with variable source scales".
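A rough sketch of the fixed-bandwidth first stage described above (before any variable-bandwidth refinement), assuming gaussian measurement errors; the function name and arguments are illustrative:

```python
import numpy as np

def kde_with_uncertainty(x, sigma, h, grid):
    """Gaussian KDE where each kernel is convolved with that
    observation's gaussian measurement uncertainty: convolving two
    gaussians just adds their variances, so the effective per-point
    bandwidth is sqrt(h**2 + sigma[i]**2)."""
    x = np.asarray(x)[:, None]                       # shape (N, 1)
    s = np.sqrt(h**2 + np.asarray(sigma)[:, None]**2)
    k = np.exp(-0.5 * ((grid[None, :] - x) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    return k.mean(axis=0)                            # density on the grid

rng = np.random.default_rng(1)
x = rng.normal(0.0, 1.0, size=500)
sigma = np.full_like(x, 0.3)
grid = np.linspace(-5, 5, 201)
f = kde_with_uncertainty(x, sigma, h=0.4, grid=grid)
mass = f.sum() * (grid[1] - grid[0])   # should be close to 1
```

Rebinning `f` over the histogram edges then gives the per-bin weights mentioned in the answer.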
37,482
Error Bars for Histogram with Uncertain Data
Check this link about error bars for cases similar to the one you mention: https://www.science20.com/quantum_diaries_survivor/those_deceiving_error_bars-85735

We are measuring the number counts within bins (similar to building a histogram). What is usually done? R- The errors in each bin are considered symmetric, with value = sqrt(N), where N is the number of events (or counts) in that particular bin. So the data point N gets positive and negative errors stretching from (N + sqrt(N)) down to (N - sqrt(N)). "In other words, the default is to use the fact that the event counts, being a random variable drawn from a Poisson distribution, have a variance equal to the mean." When we see this kind of plot, it is said to show "Poisson errors"... And that is the "typical" story. But there is much more to it!

From the link above, I quote here (for ease of access and posterity):

"Any statistics textbook explains that the Poisson is a discrete distribution describing the probability to observe N counts when an average of m is expected. Its formula is P(N|m) = [exp(-m) * m^N] / N! (where ! is the symbol for the factorial, such that N! = N*(N-1)*(N-2)*...*1, and P(N|m) should be read as "the probability that I observe N given an expectation value of m").

So what's the problem with Poisson error bars? The problem is that those error bars are not representing exactly what we would want them to. A "plus-or-minus-one-sigma" error bar is one which should "cover", on average, 68% of the time the true value of the unknown quantity we have measured: 68% is the fraction of area of a Gaussian distribution contained within -1 and +1 sigma. For Gaussian-distributed random variables a 1-sigma error bar is always a sound idea, but for Poisson-distributed data it is not typically so.
What's worse, we do not know what the true variance is, because we only have an estimate of it (N), while the variance is equal to the true value (m). In some cases this makes a huge difference. Take a bin where you observe 9 event counts and the true value was 16: the standard deviation is sqrt(16) = 4, so you should assign an error bar of +-4 to your data point at 9. But you do not know the true value, so you correctly estimate it as N = 9, whose square root is 3. You thus proceed to plot a point at 9 and draw an error bar of +-3. Upon visually comparing your data point (9 +- 3) with the expectation from a true model, drawn as a continuous histogram and having value 16 in that bin, you are led to believe you are observing a significant negative departure of event counts from the model, since 9 +- 3 is over two "sigma" away from 16; 9 is instead less than two sigma away from 16 +- 4. So the practice of plotting +-sqrt(N) error bars deceives the eye of the user.

Worse still is the fact that the Poisson distribution, for small (m <= 50 or so) expected counts, is not really symmetric. This causes the +-sqrt(N) bar to misrepresent the situation very badly for small N. Let us see this with an example. Imagine you observe N = 1 event count in a bin, and you want to draw two models on top of that observation: one model predicts m = 0.01 events there, the other predicts m = 1.99. Now, regardless of whether m = 0.01 or m = 1.99 is the expectation of the event counts, if you see 1 event you are going to draw an error bar extending from 0 (i.e., 1 - sqrt(1)) to 2 (1 + sqrt(1)), thus apparently "covering" the expectation value in both cases; but while for m = 1.99 the probability to observe 1 event is very high (and thus the error bar around your N = 1 data point should indeed cover 1.99), for m = 0.01 the probability to observe 1 event is very small: P(1|0.01) = exp(-0.01) * 0.01^1 / 1! = 0.01 * exp(-0.01) = 0.0099.
N = 1 should definitely not belong to a 1-sigma interval if the expectation is 0.01, since almost all the probability is concentrated at N = 0 in that case (P(0|0.01) = 0.99)!

The solution, of course, is to try and draw error bars that correspond more precisely to the 68% coverage they should be taken to mean. But how to do that? We simply cannot: as I explained above, we observe N, but we do not know m, so we do not know the variance. Rather, we should realize that the problem is ill-posed. If we observe N, that measurement has NO uncertainty: that is what we saw, with 100% probability. Instead, we should apply a paradigm shift, and insist that the uncertainty should be drawn around the model curve we want to compare our data points to, and not around the data points!

If our model predicts m = 16, should we then draw an uncertainty bar, or some kind of shading, around that histogram value, extending from 16 - sqrt(16) to 16 + sqrt(16), i.e. from 12 to 20? That would be almost okay, were it not for the asymmetric nature of the Poisson. Instead, we need to work out some prescription to count the probability of different event counts for any given m (where m, the expectation value of the event counts, is not an integer!), finding an interval around m which contains 68% of it. Sound prescriptions do exist. One is the "central interval": we start from the largest integer N smaller than m, and proceed to move right and left, summing the probabilities of N+1 and N-1 given m, taking in turn the largest of these. We continue to sum until we exceed 68%: this gives us a continuous range of integer values which includes m and "covers" precisely as it should, given the Poisson nature of the distribution. Another prescription is that of finding the "smallest interval" which contains 68% or more of the total area of the Poisson distribution for a given m. But I do not need to go into that kind of detail.
It suffices here to point the interested reader to a preprint recently produced by R. Aggarwal and A. Caldwell, titled "Error Bars for Distributions of Numbers of Events". The paper also deals with the more complicated issue of how to include in the display of model uncertainty the systematics on the model prediction, finding a Bayesian solution to the problem which I consider overkill for the problem at hand. I would be very happy, however, if particle physics experiments turned away from the sqrt(N) error bars and adopted the method of plotting box uncertainties with different colours, as advocated in the cited paper. You would get something like what is shown in the figure on the right. Note how the data points can now be classified immediately, and more soundly, according to how much they depart from the model, which is now no longer a line but a band giving the extra dimensionality of the problem (the model's probability density function, colour-coded green for 68% coverage, yellow for 95% coverage, and red for 99% coverage). I would be willing to do away with the red shading -- 68% and 95% coverages would suffice, and are more in line with current styles of showing expected values adopted in many recent search results (the so-called "Brazil bands").

Despite the soundness of the approach advocated in the paper, though, I am willing to bet that it will be very hard to impossible to convince the HEP community to stop plotting Poisson error bars as sqrt(N) and start using central intervals around model expectations. An attempt to do so was made in BaBar, but resulted in a failure -- everybody continued to use the standard sqrt(N) prescription. There is a definite amount of serendipity in the behaviour of the average HEP experimentalist, I gather!"

=====

From the paper "Error Bars for Distributions of Numbers of Events" (https://arxiv.org/pdf/1112.2593.pdf): "First of all, there is no uncertainty on the number of observed events.
We certainly do not mean that there is a high probability that we had 2.3 rather than 2 events in the 7th bin in the plot. Actually, the error bar is intended to represent the uncertainty on a different quantity: the uncertainty on the mean of an assumed underlying Poisson distribution."

=========

So, the best solution seems to be:

i) Consider an underlying model.
ii) From the model, get the number of expected events N_expected for a given bin, and add errors to it as +- sqrt(N_expected).
iii) Check whether N_observed is compatible with [N_expected +- sqrt(N_expected)].
iv) Now you could argue that the observation is/isn't compatible with expectations, within x sigma (where the 1-sigma interval ~ sqrt(N_expected)).
v) To improve: you could try to incorporate the asymmetry of the positive/negative error bars (as discussed in arXiv:1112.2593).
vi) Take away: we should get used to adding error bars to the N_expected from modeling, not to N_observed. There should be no Poisson errors associated with N_observed, since those could be misleading when trying to decide which model best explains the observations (as discussed in arXiv:1112.2593 and other resources).
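The "central interval" prescription quoted above is easy to sketch in code (a minimal illustration, not from the linked post; the function names are mine):

```python
from math import exp, factorial

def pois_pmf(n, m):
    """Poisson probability P(N|m) = exp(-m) * m**n / n!"""
    return exp(-m) * m**n / factorial(n)

def central_interval(m, coverage=0.68):
    """Start from the integer just below m and grow the interval one
    count at a time, always adding whichever neighbour (lo-1 or hi+1)
    has the larger Poisson probability, until the requested coverage
    is reached."""
    lo = hi = int(m)                  # nearest integer <= m, for m > 0
    total = pois_pmf(lo, m)
    while total < coverage:
        p_lo = pois_pmf(lo - 1, m) if lo > 0 else -1.0
        p_hi = pois_pmf(hi + 1, m)
        if p_hi >= p_lo:
            hi += 1
            total += p_hi
        else:
            lo -= 1
            total += p_lo
    return lo, hi

# For m = 0.01, almost all mass sits at N = 0, so the interval is {0}
# and an observation of N = 1 is (correctly) outside it.
print(central_interval(0.01))   # -> (0, 0)
print(central_interval(16.0))   # an asymmetric interval around 16
```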
37,483
expected value of a fishing strategy
Following whuber's suggestion, let's look at a simple example. Say we need to catch 2 fish and we are given 4 days. To begin, denote the two fish we have on day two $x_1$ and $x_2$, and their minimum $x$. On the third day, I release the smallest fish and re-catch if the following holds \begin{align} (1-x^2)\frac{1+x}{2}+x^2\frac{x}{2}-x>0, \end{align} where $1-x^2$ is the probability that in the remaining days I can catch a bigger fish than my current minimum, and $x^2$ is the probability that in the remaining days I will not catch a bigger fish than my current minimum. $(1+x)/2$ and $x/2$ are the conditional expectations in the two cases. The $-x$ is the fish I have to give up in order to re-catch. Solving this inequality, on day three we will release and re-catch if the minimum of the two fish we have on the second day is less than $0.618$.

Now let's turn to the fourth day. Similarly we have \begin{align} (1-x)\frac{1+x}{2}+x\frac{x}{2}-x>0, \end{align} solving which, we will release and re-catch if the minimum of the two fish we have on the third day is less than $0.5$. This makes sense, since this is our last chance to catch a fish. Under this strategy we notice that if the release-and-re-catch condition is not met on day $k$, it will not be met on day $k+1$. This formulation does not depend on the total number of fish required, but only on the number of days left, and on $x$, the current minimum, of course. In general we should release and re-catch if \begin{align} (1-x^k)\frac{1+x}{2}+x^k\frac{x}{2}-x>0, \end{align} where $k$ is the number of days left. So if there are 10 days, we should release and re-catch on the third day if the minimum from the first two days is less than $0.81$.

Following this strategy, what's the expected sum of our fish on the last day? I haven't quite figured it out. Let's come back to the simple example (2 fish, 4 days). On the second day, the expected minimum is $1/3$, and we will need $1.5$ catches on average to catch a fish bigger than $1/3$. So if I round it up to $2$ catches, then on the final day the expected sum would be $4/3$... These are very rough ideas.
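The general condition above simplifies nicely: expanding, $(1-x^k)\frac{1+x}{2}+x^k\frac{x}{2}-x = \frac{1-x-x^k}{2}$, so re-catching pays exactly when $x^k + x < 1$. A quick numerical check of the thresholds (my own sketch, not from the answer):

```python
from scipy.optimize import brentq

def recatch_threshold(k):
    # The re-catch condition (1 - x**k)*(1 + x)/2 + x**k * x/2 - x > 0
    # simplifies to x**k + x < 1, so the threshold is the root of
    # x**k + x - 1 on (0, 1).
    return brentq(lambda x: x**k + x - 1, 0.0, 1.0)

print(round(recatch_threshold(1), 3))   # last day: 0.5
print(round(recatch_threshold(2), 3))   # two days left: 0.618
print(round(recatch_threshold(8), 3))   # the 10-day example: ~0.81
```

For $k=2$ the threshold is the golden-ratio conjugate $(\sqrt{5}-1)/2 \approx 0.618$, and it increases toward 1 as more days remain.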
37,484
Why BERT use learned positional embedding?
Here is my current understanding of my own question. It is probably related to BERT's transfer-learning background. The learned lookup table does increase the learning effort in the pretraining stage, but the extra effort is almost negligible compared to the number of trainable parameters in the transformer encoder, and it is acceptable given that pretraining is a one-time effort that is expected to be time-consuming. In the fine-tuning and prediction stages, a lookup is also faster, because a sinusoidal positional encoding would need to be computed at every position.
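For comparison, the fixed sinusoidal table from "Attention Is All You Need", which BERT replaces with a trainable parameter of the same shape, can be sketched as (the function name is mine):

```python
import numpy as np

def sinusoidal_encoding(max_len, d_model):
    """Fixed positional encoding from the Transformer paper:
    PE[pos, 2i]   = sin(pos / 10000**(2i / d_model))
    PE[pos, 2i+1] = cos(pos / 10000**(2i / d_model))
    BERT instead makes this (max_len, d_model) table a trainable
    parameter, learned like any other embedding."""
    pos = np.arange(max_len)[:, None]
    i = np.arange(0, d_model, 2)[None, :]
    angles = pos / 10000 ** (i / d_model)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

pe = sinusoidal_encoding(512, 768)   # BERT-base dimensions
```

Either way the table can be precomputed once per sequence length, so in practice the runtime difference is small; the more substantive difference is that the learned table is free to adapt during pretraining.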
37,485
Why BERT use learned positional embedding?
Fixed length: BERT, same as the Transformer, uses attention as a key feature, and the attention as used in those models has a fixed span as well.

Cannot reflect relative distance: we assume neural networks to be universal function approximators. If that is the case, why wouldn't the network be able to learn to build the Fourier terms by itself?

Why did they use it? Because it was more flexible than the approach used in the Transformer. It is learned, so possibly it can figure out by itself something better; that's the general assumption behind deep learning as a whole. It also simply proved to work better.
37,486
Why are there large discrepancies between Wald and bootstrapped confidence intervals for parameters of an lmer model in R?
Basically, the Wald statistic is not good and you shouldn't trust it for mixed models. It uses a much cruder approximation to the actual likelihood than you get with the profile and boot.ci methods. If R (and SAS and JMP and...) had been written today, they would not have bothered implementing Wald stats. That's why the summary.merMod method intentionally omits $p$-values from the fixed-effect coefficient output. The computational intensity of profile/bootstrap is at most on the scale of minutes by today's standards, but in the olden days it would take weeks. So the analyst was expected to do massive amounts of testing and variable transformation so that the Wald stat might have good-ish properties. EDIT: below is a snippet of a conversation between me, David Dahl, and Douglas Bates back in 2010 when I tried to suggest using the Wald $p$-values for xtable. [Dahl:] A user of your lme4 package would like to use xtable on mer objects from lme4. That means defining a function "xtable.mer". He suggests the implementation below. I regrettably am not very familiar with lme4. Do you have any suggestions? [Bates:] I appreciate Adam's suggestion and his providing an implementation. Regrettably, I think that the implementation would be controversial, to say the least, and I would prefer not to be the recipient of the fallout. There is a long-standing issue with lme4 regarding p-values on tests of the fixed-effects parameters. For linear mixed models there is a widespread belief that you can calculate a t-statistic (what is labelled here as a "z value") and convert it to a p-value by the simple expedient of determining an approximate number of degrees of freedom. In fact, SAS PROC MIXED offers several (6, I believe) different, and incompatible, ways of determining such degrees of freedom and the corresponding p-values. The fact that these give different answers doesn't deter people from regarding such approximations as "absolute truth".
In reality the distribution of such a statistic is not a Student's T. It is much more complicated than that and I advocate other ways of calculating confidence intervals or testing hypotheses. In the case of a generalized linear mixed model I do calculate a p-value from the standard normal distribution, not because the approximation is better for GLMMs than for LMMs but because it is worse. I am writing a book for Springer on lme4 (chapter drafts are available at http://lme4.R-forge.R-project.org/book/) where I describe using likelihood ratio tests for hypothesis tests and techniques based on profiling the LRT statistic to produce confidence intervals on parameters. The examples in that book are based on the development version of the package which uses a different representation of the model. The implementation is not complete, which is why I haven't released it as lme4, but right now I need to concentrate on the writing because the book is going to be used in a seminar which starts next week.
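The qualitative point — that a Wald interval is symmetric by construction while a resampling-based interval can follow the actual shape of the sampling distribution — can be shown outside the lme4 setting. This is a hedged, generic sketch (skewed data, parameter of interest is the mean), not an lme4 example:

```python
import math
import random

random.seed(1)
# Skewed data: an exponential sample; the parameter of interest is the mean.
data = [random.expovariate(1.0) for _ in range(40)]
n = len(data)
mean = sum(data) / n
se = math.sqrt(sum((x - mean) ** 2 for x in data) / (n - 1) / n)

# Wald interval: estimate +/- 1.96 * SE, symmetric by construction.
wald = (mean - 1.96 * se, mean + 1.96 * se)

# Percentile bootstrap: resample, recompute, take empirical quantiles.
boots = []
for _ in range(2000):
    resample = [random.choice(data) for _ in range(n)]
    boots.append(sum(resample) / n)
boots.sort()
boot = (boots[int(0.025 * len(boots))], boots[int(0.975 * len(boots))])

print("Wald:", wald)
print("Bootstrap:", boot)
```

The bootstrap interval is generally asymmetric around the estimate here; the disagreement between the two grows with the skewness of the sampling distribution, which is the same mechanism driving the discrepancies in the mixed-model case.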
37,487
Are min$(X_1,\ldots,X_n)$ and min$(X_1Y_1,\ldots,X_nY_n)$ independent for $n$ to infinity?
Just a quick thought of mine: the condition that the product $X_iY_i$ is smaller than $x_2$ removes all possibilities of $X_i$ being larger than $(x_2/b_n)+a_n$. EDIT: Yes, you understood me correctly! Imagine the conditional probability of both (the product and $X_i$ alone) being small enough: $P(X_i \mid X_iY_i)$. Given that the product $X_iY_i$ is small enough, since $Y_i$ must be larger than 1, there is no possibility of $X_i$ being larger than $(x_2/b_n)+a_n$. This means the conditional probability $P(X_i \mid X_iY_i)$ is always 1 if $x_1$ is big enough, meaning your combined probability equals $G(x_2)$, although $F(x_1)$ can still be smaller than 1.
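The claimed behaviour of the two minima can at least be probed numerically. The following is a hedged Monte Carlo sketch, not part of the original argument: the Pareto distributions for $X_i$ and $Y_i$ are hypothetical stand-ins (chosen only so that every $Y_i \ge 1$, as in the question's setup), and the normalizing sequences $a_n$, $b_n$ are ignored.

```python
import random

random.seed(0)

def sample_minima(n):
    # Hypothetical stand-ins: X_i ~ Pareto(2), Y_i ~ Pareto(3); both >= 1.
    xs = [random.paretovariate(2) for _ in range(n)]
    ys = [random.paretovariate(3) for _ in range(n)]
    return min(xs), min(x * y for x, y in zip(xs, ys))

def corr(pairs):
    m = len(pairs)
    mx = sum(a for a, _ in pairs) / m
    my = sum(b for _, b in pairs) / m
    cov = sum((a - mx) * (b - my) for a, b in pairs) / m
    vx = sum((a - mx) ** 2 for a, _ in pairs) / m
    vy = sum((b - my) ** 2 for _, b in pairs) / m
    return cov / (vx * vy) ** 0.5

# Estimate corr(min X_i, min X_i*Y_i) over repeated samples of size n.
results = {n: corr([sample_minima(n) for _ in range(500)]) for n in (10, 100, 500)}
print(results)
```

Tracking this correlation as $n$ grows gives a rough empirical handle on whether the two minima decouple asymptotically; a simulation of course cannot replace a proof.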
37,488
Difference between eligibility traces and momentum?
For simplicity, let us consider the case of no discount ($\gamma = 1$). This setting is sufficient to understand the difference between an eligibility trace and momentum. Then $\mathbf{z}_t$, which appears in the eligibility trace, is the sum of the history of the gradients $\nabla \hat{v}_t$: $$ \mathbf{z}_t = \sum_{k=0}^t \nabla \hat{v}_{t-k} . $$ This quantity is the (short-term) memory that enables us to update $\boldsymbol{\theta}$ for the previous states, because the update rule is $$ \boldsymbol{\theta}\ \leftarrow\ \boldsymbol{\theta}+\alpha\,\delta_t\,\mathbf{z}_t = \boldsymbol{\theta}+\alpha\,\delta_t \sum_{k=0}^t \nabla \hat{v}_{t-k} = \boldsymbol{\theta}+\alpha\,\delta_t \left( \nabla \hat{v}_{t} + \nabla\hat{v}_{t-1} + \nabla\hat{v}_{t-2} + \cdots \right) $$ Please note that $\hat{v}_{t-k}$ represents the value function of $S_{t-k}$. By this rule of the eligibility trace, the current reward embedded in $\delta_t$ is transferred to the states in the past (e.g. information about the result of a game of Go has to be transferred to the states that appeared in that game). On the other hand, $\mathbf{u}$ in momentum is the historical direction of $\boldsymbol{\theta}$'s movement; that's why this method is called momentum: $$ \mathbf{u} = \sum_{k=1}^t \delta_k\nabla \hat{v}_k \quad (\alpha = \eta = 1\ \text{for simplicity}) $$ The momentum rule can be used to determine the direction of the next $\boldsymbol{\theta}$ while taking the history of $\boldsymbol{\theta}$'s movement into account, avoiding sudden changes of direction. A sudden change of update direction is risky because the information obtained from the current state is probabilistic and might be wrong. Update: Although the equations are similar to each other, the objective of momentum is totally different from that of the eligibility trace. Momentum is used to smooth out the noise of mini-batch SGD and the ravine effect; see this post. On the other hand, the eligibility trace is used to add the reward $R_t$ to the values of the previous states $\{v_{t-k}\}_{k=1,2,\cdots}$.
In order to transfer the reward backward, you cannot use momentum. For TD($\lambda$), say, it is necessary to send the reward to the previous states. If you are not familiar with TD($\lambda$), I recommend you read Section 12.2 of Sutton's textbook; the PDF version is free to download.
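The structural difference can be sketched in a few lines of code. This is a hedged toy illustration (random features and TD errors, made-up hyperparameters), showing that a trace multiplies the *current* TD error into past gradients, while momentum carries past errors attached to their own gradients:

```python
import numpy as np

rng = np.random.default_rng(0)
d, steps = 4, 50
alpha, lam, gamma = 0.1, 0.9, 1.0   # gamma = 1 as in the text
beta = 0.9                          # momentum coefficient (assumption)

theta_et = np.zeros(d)   # weights updated via an eligibility trace
theta_mo = np.zeros(d)   # weights updated via momentum
z = np.zeros(d)          # trace: decayed sum of past gradients
u = np.zeros(d)          # momentum: decayed sum of past *updates*

for t in range(steps):
    phi = rng.normal(size=d)   # grad of v_hat(s_t) = theta . phi is phi
    delta = rng.normal()       # toy TD error

    # Eligibility trace: the trace remembers past states, and the
    # CURRENT TD error delta_t is applied to all of them at once.
    z = gamma * lam * z + phi
    theta_et += alpha * delta * z

    # Momentum: remembers past update directions delta_k * grad_k;
    # old TD errors stay attached to their own gradients.
    u = beta * u + delta * phi
    theta_mo += alpha * u
```

The two parameter trajectories differ precisely because in the trace update $\delta_t$ multiplies the whole decayed history of gradients, which is what sends the current reward information backward in time.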
37,489
Maximum Likelihood Estimator of $P(Y_1=1)$ where $Y_i=1$ if $X_i>0$ and $0$ otherwise, given $X_1,\dots,X_n\sim N(\theta,1)$
First, denoting $\Phi$ to be the standard normal c.d.f., we have \begin{align*}\psi &= P(Y_1=1) \\ &= P(X_1>0) \\ &= 1-P(X_1 \le 0) \\ &= 1-P(X_1-\theta\le -\theta)\\ &=1-\Phi(-\theta)\\ &=\Phi(\theta),\end{align*} where in the third line, we use the fact that $X_1-\theta\sim N(0,1)$. Consequently, by equivariance of the MLE, as $\widehat{\theta}=\overline{X}_n$ is the MLE of $\theta$, $\Phi(\overline{X}_n)$ is the MLE of $\psi$. Credit to StubbornAtom for the hint to this solution.
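The result is easy to check numerically. Below is a small hedged sketch (simulated data, not part of the derivation) comparing $\Phi(\overline{X}_n)$ against both the true $\psi=\Phi(\theta)$ and the raw frequency of $Y_i=1$, using the identity $\Phi(x)=\tfrac12(1+\operatorname{erf}(x/\sqrt{2}))$:

```python
import math
import random

def phi(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

random.seed(42)
theta = 0.7            # hypothetical true parameter for the simulation
n = 100_000
xs = [random.gauss(theta, 1.0) for _ in range(n)]

psi_true = phi(theta)                 # psi = P(X_1 > 0) = Phi(theta)
psi_mle = phi(sum(xs) / n)            # MLE: Phi(sample mean)
psi_freq = sum(x > 0 for x in xs) / n # raw frequency of Y_i = 1

print(psi_true, psi_mle, psi_freq)
```

Both estimates land close to $\Phi(\theta)$; the MLE $\Phi(\overline{X}_n)$ uses the full data, not just the signs, which is why it is preferred over the raw frequency.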
37,490
Time complexity of simple linear regression in $L_1$ norm
There is an $O(n^2)$ running-time algorithm. It is fairly easy to derive: there exists an optimal line that contains one of the given points (in fact, at least 2 points), and there exists an $O(n)$-time algorithm to find the best line that goes through a given point; it is basically a weighted-median computation. Together, these imply an $O(n^2)$ running-time algorithm. The first paper I know that uses this idea is the following. Bloomfield, Peter; Steiger, William, Least absolute deviations curve-fitting, SIAM J. Sci. Stat. Comput. 1, 290-301 (1980). ZBL0471.65007. Edit 1: Using advanced tools from computational geometry, one can improve the running time to $\tilde{O}(n^{4/3})$. Edit 2: There is a paper that gives an $O(n\log^2 n)$-time algorithm. Megiddo, Nimrod; Tamir, Arie, Finding least-distance lines, SIAM J. Algebraic Discrete Methods 4, 207-211 (1983). ZBL0517.05007. There is another paper, which I still have to verify, that shows $O(n)$ time is possible. In fact, it seems that $L_1$ linear regression in $d$ dimensions can be solved in $O(n)$ time if $d$ is a constant. Zemel, Eitan, An O(n) algorithm for the linear multiple choice knapsack problem and related problems, Inf. Process. Lett. 18, 123-128 (1984). ZBL0555.90069.
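The $O(n^2)$ scheme above can be sketched directly: for a fixed pivot point, minimizing $\sum_j |y_j - y_i - s(x_j - x_i)|$ over the slope $s$ is a weighted median of the pairwise slopes with weights $|x_j - x_i|$, and trying every point as the pivot gives the quadratic algorithm. This is an illustrative sketch of that idea (unoptimized; the loss check at the end makes it worse than $O(n^2)$ but keeps the code short):

```python
def weighted_median(values, weights):
    # Smallest value whose cumulative weight reaches half the total.
    order = sorted(range(len(values)), key=lambda k: values[k])
    total, acc = sum(weights), 0.0
    for k in order:
        acc += weights[k]
        if acc >= total / 2:
            return values[k]

def best_line_through(i, xs, ys):
    # O(n) step (after sorting): best L1 line constrained through point i.
    slopes, weights = [], []
    for j in range(len(xs)):
        if xs[j] != xs[i]:
            slopes.append((ys[j] - ys[i]) / (xs[j] - xs[i]))
            weights.append(abs(xs[j] - xs[i]))
    s = weighted_median(slopes, weights)
    return s, ys[i] - s * xs[i]

def lad_line(xs, ys):
    # Try every point as the pivot; keep the line with smallest L1 loss.
    def loss(s, b):
        return sum(abs(y - s * x - b) for x, y in zip(xs, ys))
    return min((best_line_through(i, xs, ys) for i in range(len(xs))),
               key=lambda sb: loss(*sb))

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [1.0, 3.0, 5.0, 7.0, 100.0]   # y = 2x + 1 plus one gross outlier
slope, intercept = lad_line(xs, ys)
print(slope, intercept)            # the L1 fit ignores the outlier
```

On this toy data the least-absolute-deviations line recovers slope 2 and intercept 1, illustrating the robustness that least squares would lose here.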
37,491
Alternatives to Journal of Statistical Software?
You can try JOSS, the Journal of Open Source Software, which is an academic journal (ISSN 2475-9066) with a formal peer-review process designed to improve the quality of the submitted software. They create a DOI after publication, follow the Contributor Covenant code of conduct, and are an affiliate of the Open Source Initiative.
37,492
How to decide moving window size for time series prediction?
I would say that your first approach seems like a good start; it seems better to me than your second one. Your assessment of the possible risks is correct: this could be interpreted as tweaking hyperparameters on the test set, which comes with the risk that the performance estimate is too optimistic. It could be an idea to adjust your first approach to include a validation set on which you can tune the window sizes, and then only use the test set to obtain a performance estimate. In case you are unfamiliar with this, I quite like what is discussed in this thread: What is the difference between test set and validation set? Hope this helps!
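The suggested train/validation/test scheme can be sketched as follows. This is a hypothetical illustration: the "model" is a naive moving-average forecaster, the series is synthetic, and the split fractions are arbitrary; the point is only that the window size is chosen on the validation block while the test block is touched exactly once.

```python
def forecast(series, w):
    # Naive stand-in model: predict each point as the mean of the
    # previous w observations (only past values are used).
    return [sum(series[t - w:t]) / w for t in range(w, len(series))]

def mae(series, w, start, end):
    preds = forecast(series, w)
    errs = [abs(series[t] - preds[t - w]) for t in range(max(w, start), end)]
    return sum(errs) / len(errs)

series = [(t % 7) + 0.1 * t for t in range(200)]   # toy season + trend
n = len(series)
train_end, val_end = int(0.6 * n), int(0.8 * n)    # time-ordered split

# Pick the window size on the validation block only...
best_w = min(range(2, 15), key=lambda w: mae(series, w, train_end, val_end))
# ...then report error once on the untouched test block.
test_mae = mae(series, best_w, val_end, n)
print(best_w, test_mae)
```

Because `best_w` never sees the test block, `test_mae` remains an honest performance estimate, which is exactly what tuning on the test set would forfeit.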
37,493
Sparse linear regression 0-norm and 1-norm
If features are correlated, you should use the elastic net and not the lasso. Roughly, if two features are correlated, the lasso would choose feature $i$ over $j$ if it has the better reward on the loss function, meaning a smaller absolute value $|\beta_i|$ of the regression coefficient together with a good decrease in the prediction error $||y-X\beta||_2$. On the other hand, the $\ell_0$-norm-based penalty would choose feature $i$ over $j$ if it leads to a good decrease in the prediction error only, since the size of the coefficient doesn't matter, just whether it is different from zero (remember, $||\beta||_0=\#\lbrace k : \beta_k\neq0\rbrace$). Now, my intuition would be that the $\ell_1$- and $\ell_0$-norm penalties are equally bad at recovering the correct regression coefficients when features are correlated. The proof of Theorem 2 in this paper should illustrate why this is indeed the case. This would be in contradiction to the statement and example of the paper you cited, though.
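The different selection behaviour of the two penalties shows up already in their single-coordinate proximal operators, which is a standard way to compare them (a hedged sketch, not from the cited paper): the $\ell_1$ prox shrinks every coefficient by the penalty level, so magnitude matters, while the $\ell_0$ prox keeps a coefficient untouched exactly when the error reduction $b^2/2$ outweighs the fixed penalty.

```python
def prox_l1(b, lam):
    # Soft-thresholding: shrink toward zero; kill the coefficient
    # entirely if |b| <= lam.
    if b > lam:
        return b - lam
    if b < -lam:
        return b + lam
    return 0.0

def prox_l0(b, lam):
    # Hard-thresholding: keep b unchanged iff b^2/2 > lam
    # (minimizer of 0.5*(b-x)^2 + lam*1[x != 0]); size is otherwise ignored.
    return b if b * b / 2.0 > lam else 0.0

# Two least-squares coefficients of different size, same penalty level:
for b in (0.6, 3.0):
    print(b, prox_l1(b, 0.5), prox_l0(b, 0.5))
```

Note the small coefficient: the $\ell_1$ prox keeps it (shrunk to 0.1) while the $\ell_0$ prox zeroes it, and the large coefficient is shrunk by $\ell_1$ but left exactly as-is by $\ell_0$; with correlated features this is what drives the two penalties to split credit between the features differently.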
37,494
Coding resources: Accessible introductions to Bayesian Structural Time series?
I would say Stan (disclosure: I am one of the Stan developers). For most of the models you mention, there is not going to be a Gibbs sampler with known full-conditional distributions for all of the parameters, and even for the exceptions, the chains might not mix well under Gibbs sampling. That said, here are some links:
- Relevant Stan code for an unfinished book project on state-space models
- A short course on Stan in econometrics with a chapter on time series
- Prophet, which is a somewhat structural time series thing built on Stan
37,495
Multiple factor interactions with GAM in R?
I've had a similar problem, and I'm afraid I don't have a solution either, only a cheap workaround. Since the by-variable in the smooth term doesn't seem to make the model assess the interaction (as in giving a coefficient and standard error for the interaction term, like a linear model would), what you get from something like gam(y ~ s(x, by=fac) + fac) is a curve for each level of 'fac' (e.g. A and B). When you try to extend this to another factor, what you're looking for is probably the effect/curve for each combination of factors. This you can get by conflating the two factors into a single one that comprises the combinations (so, in your example, AX, AY, BX, BY) and specifying that as the by-variable. As I said, it's not really a solution, but it might give you an idea of how the factors interact. Hope there will be a 'real' answer to your question.
37,496
Illustration of functional gradient descent
Functional Gradient Descent: the posts Functional Gradient Descent - Part 1 and Part 2 will give a brief introduction and theoretical illustration. Functional gradient descent was introduced in the NIPS publication Boosting Algorithms as Gradient Descent by Llew Mason, Jonathan Baxter, Peter Bartlett, and Marcus Frean in the year 2000. We are all familiar with gradient descent for linear functions $f(x) = w^Tx$. Once we define a loss $L$, gradient descent does the following update steps ($\eta$ is a parameter called the learning rate): $w \rightarrow w - \eta \nabla L(w)$ where we move around in the space of weights. An example of a loss $L$ is: $L(w) = \sum_{i=1}^n(y_i - w^Tx_i)^2 + \lambda\lVert w \rVert ^2$ where the first term (the 'L2' term) measures how close $f(x)$ is to $y$, while the second term (the 'regularization' term) accounts for the 'complexity' of the learned function $f$. Suppose we wanted to extend $L$ beyond linear functions $f$. We want to minimize something like: $L(f) = \sum_{i=1}^n(y_i - f(x_i))^2 + \lambda\lVert f \rVert ^2$ where $\lVert f \rVert ^2$ again serves as a regularization term, and we have updates of the form: $f \rightarrow f - \eta \nabla L(f)$ where we move around in the space of functions, not weights! It turns out this is completely possible, and it goes by the name of 'functional' gradient descent, or gradient descent in function space. In general, you can parametrize any function in a number of ways, and each parametrization gives rise to different steps (and different functions at each step) in gradient descent. The advantage is that some loss functions that are non-convex when parametrized can be convex in function space: this means functional gradient descent can actually converge to global minima where 'ordinary' gradient descent could get stuck at local minima or saddle points.
Illustration and example code: for more on why functional gradient descent can be useful, what it means to do functional gradient descent, and how we can do functional gradient descent, with an example, visit the Part 1 and Part 2 posts linked above.
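A minimal sketch of the update $f \rightarrow f - \eta \nabla L(f)$ (my own toy illustration, not the linked posts' code): if $f$ is represented by its values at the training points, the functional gradient of $L(f)=\sum_i(y_i-f(x_i))^2+\lambda\lVert f\rVert^2$ at each $x_i$ is $-2(y_i-f(x_i))+2\lambda f(x_i)$, and the function-space descent becomes ordinary gradient steps on that vector of values.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = np.sin(2 * np.pi * x) + 0.1 * rng.normal(size=x.size)  # toy data

lam, eta = 0.1, 0.05          # hypothetical penalty and learning rate
f = np.zeros_like(y)          # f represented by its values at the x_i

def loss(f):
    return np.sum((y - f) ** 2) + lam * np.sum(f ** 2)

history = [loss(f)]
for _ in range(200):
    grad = -2 * (y - f) + 2 * lam * f   # functional gradient at each x_i
    f = f - eta * grad                  # f <- f - eta * grad L(f)
    history.append(loss(f))
```

Here the loss is convex in $f$ itself, so the iterates converge to the global minimizer $f(x_i)=y_i/(1+\lambda)$ at the training points; methods like gradient boosting replace the pointwise step with a weak learner fitted to the negative functional gradient so that $f$ generalizes off the training points.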
Illustration of functional gradient descent
Is feature transformation (power, log, Box-Cox) necessary in deep learning?
The rule of thumb is: the more data you have available, the less you have to care about feature engineering (which is basically a way of inputting prior knowledge into the model, based on domain expertise). Theoretically, with a large enough number of samples, you could solve ImageNet without using any convolutions, with only a deep feedforward network. But by knowing that pixels are spatially correlated (which is what makes convolutions a much better way to tackle this problem), you can design an algorithm that is much more data-efficient.
Is feature transformation (power, log, Box-Cox) necessary in deep learning?
The way I view feature engineering à la Box-Cox is that we have a model that requires normality, we don't have normal data, so we apply a transform to get to normal data. So on the one hand, it's true that neural networks do not require normalized data, so why feature-engineer? On the other hand, while a neural net might eventually get there, feature engineering done by humans can sometimes hugely help the initial convergence rate. For example, in the case of multichannel signal data, doing the Fourier decomposition and computing the cross-correlations beforehand greatly increases the speed at which the neural net can get to classification (to give a really specific example). Or, to give a simpler example, if you know your data has many outliers and these are not important, removing outliers is a form of feature engineering. The network could eventually learn to ignore them, but it might take forever. So when you are fairly sure that the transformation is going to highlight something important about your data, then transform it; if not, then maybe not.
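As a small illustration of the Box-Cox case mentioned above (a sketch of my own, assuming SciPy is available; the data are invented for the example): scipy.stats.boxcox picks the power $\lambda$ by maximum likelihood, and on a heavily right-skewed feature it brings the sample skewness close to zero:

```python
# Box-Cox as feature engineering: estimate the power transform by maximum
# likelihood and compare skewness before and after.
import numpy as np
from scipy.stats import boxcox, skew

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0.0, sigma=1.0, size=5000)  # strongly right-skewed feature

# boxcox requires strictly positive input; it returns the transformed data
# and the estimated lambda (close to 0 here, i.e. roughly a log transform).
x_bc, lam = boxcox(x)

skew_before = skew(x)
skew_after = skew(x_bc)
```

Whether this actually helps a network is exactly the trade-off discussed above: with enough data the network can learn around the skew on its own, but the transform can speed up convergence.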
Elastic net arbitrary alpha selection
The answer to a similar question here advises following the glmnet vignette (assuming you're using R):

foldid = sample(1:10, size = length(y), replace = TRUE)
cv1  = cv.glmnet(x, y, foldid = foldid, alpha = 1)
cv.5 = cv.glmnet(x, y, foldid = foldid, alpha = .5)
cv0  = cv.glmnet(x, y, foldid = foldid, alpha = 0)

Keep the foldid fixed and assess a grid of $\alpha$ values using cross-validation over $\lambda$.
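For readers using Python instead of R, scikit-learn's ElasticNetCV supports the same "fixed folds, grid of mixing parameters" idea (note the naming clash: sklearn's l1_ratio is glmnet's $\alpha$, and sklearn's alpha is glmnet's $\lambda$). This is a sketch with made-up data, not part of the original answer:

```python
# Elastic net mixing-parameter selection with fixed cross-validation folds,
# mirroring the glmnet foldid approach from the answer.
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import ElasticNetCV
from sklearn.model_selection import KFold

X, y = make_regression(n_samples=200, n_features=50, n_informative=10,
                       noise=5.0, random_state=0)

# A fixed splitter plays the role of glmnet's foldid: every candidate
# mixing value is scored on identical cross-validation folds.
cv = KFold(n_splits=10, shuffle=True, random_state=0)

# l1_ratio here corresponds to glmnet's alpha; the penalty strength
# (glmnet's lambda) is searched internally for each mixing value.
model = ElasticNetCV(l1_ratio=[0.1, 0.5, 1.0], cv=cv).fit(X, y)
best_mix, best_penalty = model.l1_ratio_, model.alpha_
```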
GAM (mgcv): AIC vs Deviance Explained
The respective formulas for these two quantities are: $$\text{deviance} = 2\log\mathcal{L}(\text{saturated model}\, |\, \text{data}) - 2\log\mathcal{L}(\text{model}\, |\, \text{data})$$ $$\text{AIC} = 2k - 2\log\mathcal{L}(\text{model}\, |\, \text{data})$$ where $\mathcal{L}$ is the likelihood and $k$ is the number of model parameters. For a fixed dataset and model family, the saturated model is fixed, and therefore for our purposes the equation for deviance is: $$\text{deviance} = \text{constant} - 2\log\mathcal{L}(\text{model}\, |\, \text{data})$$ Plotting AIC against deviance the way that you've done, we expect the data to fall along a straight line if there exist constants $c_1$ and $c_2$ such that: $$c_1 \cdot \text{AIC} + c_2 \approx \text{Deviance}$$ This can only be the case if $k \propto \log\mathcal{L}$. Although this is not a relationship that I have previously come across, it seems plausible. However, it could also be that a different formula for deviance is being used altogether, as intimated here.
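These formulas are easy to check numerically. The sketch below (my own example, using a Poisson model rather than anything from the mgcv thread) computes both quantities from the same log-likelihood; for a fixed number of parameters $k$, deviance and AIC differ only by the constant $2\log\mathcal{L}(\text{saturated model}) - 2k$:

```python
# Deviance and AIC computed from the same log-likelihood for a Poisson sample.
import numpy as np
from scipy.special import gammaln

rng = np.random.default_rng(2)
y = rng.poisson(lam=4.0, size=100)

def pois_loglik(y, mu):
    # Poisson log-likelihood; the np.where guards implement the usual
    # 0*log(0) = 0 convention, needed for the saturated model's zero means.
    mu = np.asarray(mu, dtype=float)
    term = np.where(y > 0, y * np.log(np.where(mu > 0, mu, 1.0)), 0.0)
    return float(np.sum(term - mu - gammaln(y + 1)))

k = 1                                      # intercept-only model
ll_model = pois_loglik(y, np.full(y.shape, y.mean()))
ll_sat = pois_loglik(y, y)                 # saturated model: one mean per observation

deviance = 2 * ll_sat - 2 * ll_model
aic = 2 * k - 2 * ll_model                 # so deviance - aic == 2*ll_sat - 2*k
```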