Relation of slopes of predictors when they are correlated in linear regression
It can be even worse. Suppose $X_{1}$ and $X_{2}$ are not strongly linearly related but are still causally related, so that $X_{1}$ exerts its impact on the target $Y$ via $X_{1} \to X_{2} \to Y$. The measured correlation between $X_{1}$ and $X_{2}$ doesn't have to be large, and the regression won't suffer from traditional multicollinearity. But as the sample size grows, you should expect the effect size for $X_{1}$ when controlling for $X_{2}$ to approach zero with high statistical significance. This means that even if $X_{1}$ has a meaningful total effect on $Y$ (so that $\beta_{10}$ is statistically significant and large), the third regression can appear to say that only $\beta_{21}$ has a meaningful total effect. This is an example from "Let's Put Garbage-Can Regressions and Garbage-Can Probits Where They Belong".
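A quick simulation illustrates the point. Here $X_1$ affects $Y$ only through $X_2$, and the noise scales are chosen so the $X_1$–$X_2$ correlation stays modest (about 0.3): the total effect of $X_1$ is sizeable, yet the partial coefficient on $X_1$ collapses to zero once $X_2$ is controlled for. This is a pure-Python sketch; the 0.5 and 1.5 noise scales are illustrative choices, not taken from the cited paper.

```python
import random
import statistics

random.seed(0)
n = 20000
x1 = [random.gauss(0, 1) for _ in range(n)]
# X1 -> X2 with plenty of independent noise, so corr(X1, X2) is modest
x2 = [0.5 * a + random.gauss(0, 1.5) for a in x1]
# X2 -> Y: X1 affects Y only through X2
y = [b + random.gauss(0, 0.1) for b in x2]

def cov(u, v):
    mu, mv = statistics.fmean(u), statistics.fmean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# Total effect: simple regression of Y on X1
total = cov(x1, y) / cov(x1, x1)

# Partial effects: solve the 2x2 normal equations for Y ~ X1 + X2
s11, s22, s12 = cov(x1, x1), cov(x2, x2), cov(x1, x2)
s1y, s2y = cov(x1, y), cov(x2, y)
det = s11 * s22 - s12 ** 2
b1 = (s22 * s1y - s12 * s2y) / det  # coefficient on X1, controlling for X2
b2 = (s11 * s2y - s12 * s1y) / det  # coefficient on X2

print(round(total, 2), round(b1, 2), round(b2, 2))
```

With this setup `total` comes out near 0.5 while `b1` is essentially zero and `b2` near one, exactly the pattern described above.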
Is it valid to log-transform percentages?
First, the data do not need to be normal; the residuals of the model do (at least for ordinary least squares regression). Second, it is certainly possible to change a percentage to a log, as long as there are no values of 0%. But is that what you want? Say exports were 200 in 2010 and 205 in 2011. Then growth as a % is (205/200 - 1) * 100 = 2.5%. log10(205/200) = .0107, which is the difference between log10(205) and log10(200). Third, it's a bit hard to tell what you are doing, exactly, but if you have longitudinal data, you will need to account for that.
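A quick check of the arithmetic above; note that the 0.0107 is a base-10 logarithm, while the natural log of the same ratio (about 0.0247) is the continuously compounded growth rate:

```python
import math

# Growth from 200 to 205, expressed three ways
pct_growth = (205 / 200 - 1) * 100            # ordinary percentage growth
log10_diff = math.log10(205) - math.log10(200)
log10_ratio = math.log10(205 / 200)           # identical by the log rules
ln_growth = math.log(205 / 200)               # natural-log growth rate

print(round(pct_growth, 4))    # 2.5
print(round(log10_ratio, 4))   # 0.0107 (base-10)
print(round(ln_growth, 4))     # 0.0247 (continuously compounded)
```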
How should sampling ratios to estimate quantiles change with population size?
For the order of the sample size, there is a direct reference here (with big-Theta notation): in order to estimate the quantiles with precision $\varepsilon n$, with probability at least $1 - \delta$, a sample of size $\Theta ( \frac{1}{\varepsilon^2} \log \frac{1}{\delta} )$ is required, where $0 < \delta < 1$. But I think this might be an easier problem than it looked like, at least with an asymptotic approximation. For any true/population/full-sample/N p-th quantile $q = F^{-1}(p)$ the limiting distribution is $$ \sqrt{n}(\hat{q}-q) = \sqrt{n}\Delta q \sim N \left(0,\frac{p(1-p)}{f(q)^2} \right) $$ but if we care about (say) 1 percentage point deviations ($\varepsilon = 0.01$) in the form $F(q + \Delta q) \in (p-0.01,p+0.01)$, we can approximate the mass in the $\Delta q$ neighborhood with $f(q) \Delta q$ and try to bound that. Saying that $|f(q) \Delta q | < 0.01$ with 99% probability ($1-\delta$ above) then becomes the problem of which normal distribution has its 0.995 quantile at 0.01, because its variance then is the bounding $\frac{p(1-p)}{n}$. Solving for the worst-case scenario of $p=0.5$, this gives the critical sample size to be $n = 16{,}556$ as long as the approximations hold.
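The final step of that calculation can be reproduced with the standard library; requiring the 0.995 normal quantile at 0.01 with variance $p(1-p)/n$ gives $n = p(1-p)\,(z_{0.995}/\varepsilon)^2$. This sketch lands at about 16,588, in the same ballpark as the figure quoted above (the small gap comes down to how the normal quantile is rounded):

```python
from math import ceil
from statistics import NormalDist

eps, delta = 0.01, 0.01                 # 1-point precision, 99% probability
z = NormalDist().inv_cdf(1 - delta / 2) # 0.995 quantile, about 2.5758
p = 0.5                                 # worst case: p*(1-p) is maximal

# Which n makes the sd of the quantile's mass deviation hit eps at the
# 0.995 quantile?  Solve z * sqrt(p*(1-p)/n) = eps for n.
n = ceil(p * (1 - p) * (z / eps) ** 2)
print(n)
```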
Maximum Entropy and Multinomial Logistic Function
MaxEnt is a method for designing models, whereas SoftMax is a model in itself. MaxEnt is a method that describes an observer's state of knowledge about some system and its variables. For instance, if I'm interested in studying some situation depending only on one real, positive parameter $x$, and I know (from experimental data or from my theoretical model) that the only relevant characteristic of the data distribution of this parameter is its mean, I can write: $$ \mathbb{E}_{p(x)}[x] = \int_{0}^\infty \mathrm{d}x\ x\, p(x) \equiv \lambda $$ where $\lambda$ is determined experimentally. Then, by the MaxEnt approach, the "most reasonable" probability distribution (the one that assumes the fewest conditions on $p(x)$) is the exponential distribution: $$ p(x|\lambda) = \frac{1}{\lambda} e^{-x/\lambda}$$ This method is extremely useful and has many applications in statistical physics, information theory, statistics, machine learning, et cetera. More information can be found on Wikipedia and in many other sources. More generally, one can use discrete MaxEnt with the constraints $\mathbb{E}_p[f_i(y_j)] = \sum_{j=1}^C f_i(y_j)p(y_j) \equiv F_i$ for $i = 1, \dots, K$ to obtain the probability distribution: $$ p_j = p(y_j) = \frac1Z \exp\left( \sum_{i=1}^K \lambda_i f_i(y_j) \right) $$ which can be developed into a softmax function (I haven't done it myself, but I suspect it must be something along the lines of this paper). tl;dr: MaxEnt is a method for developing probabilistic models, so it can provide classification models other than SoftMax. It all depends on the (informational) assumptions of your model.
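One way to see the connection concretely: the discrete MaxEnt form $p_j \propto \exp(\sum_i \lambda_i f_i(y_j))$ reduces to the familiar softmax of the $\lambda$'s when the features are class indicators, $f_i(y_j) = \mathbb{1}[i = j]$. A small sketch (the feature choice and the $\lambda$ values are illustrative):

```python
import math

def maxent_dist(lams, feats):
    """p_j proportional to exp(sum_i lambda_i * f_i(y_j)) for class j."""
    k = len(lams)
    scores = [sum(l * f(j) for l, f in zip(lams, feats)) for j in range(k)]
    z = sum(math.exp(s) for s in scores)
    return [math.exp(s) / z for s in scores]

def softmax(xs):
    z = sum(math.exp(x) for x in xs)
    return [math.exp(x) / z for x in xs]

# Indicator features f_i(y_j) = 1 if i == j else 0
lams = [2.0, 1.0, 0.1]
feats = [lambda j, i=i: 1.0 if j == i else 0.0 for i in range(3)]

p = maxent_dist(lams, feats)
print([round(x, 3) for x in p])  # identical to softmax(lams)
```

With non-indicator features the same machinery produces other exponential-family classifiers, which is the sense in which MaxEnt is more general than SoftMax.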
Maximum Entropy and Multinomial Logistic Function
You should compare maximum entropy with maximum likelihood, not Multinomial Logistic Regression. The duality of maximum entropy and maximum likelihood is an example of the more general phenomenon of duality in constrained optimization. Berger, A. L., Pietra, V. J. D., & Pietra, S. A. D. (1996). A maximum entropy approach to natural language processing. Computational Linguistics, 22(1), 39–71.
Why so many large p-values when I repeat an experiment?
If the null hypothesis is true, you expect a uniform distribution of P values. If the null hypothesis is not true, you'd expect more small P values. But you have more high P values, which is strange. Two ideas: Are you computing one-tailed P values? If so, and the actual effect is in the opposite direction to the hypothesized effect (and you compute the one-tailed P values correctly), then you'd expect more high P values. How subjective are the data collection and wrangling? Any chance that the people doing the experiment expect no difference, and so are biased? Perhaps they throw out "outliers" only when they see an unexpectedly large difference? Perhaps they repeat the measurement when the difference (effect) is unexpectedly large, but accept it when the effect is small?
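The uniformity claim is easy to verify by simulation. The sketch below uses a simple z-test with known sigma for convenience, and draws data where the null is exactly true; about 5% of P values land below 0.05 and about 5% above 0.95:

```python
import random
from statistics import NormalDist, fmean

random.seed(1)
norm = NormalDist()

def two_sided_p(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for the sample mean, with known sigma."""
    z = (fmean(sample) - mu0) / (sigma / len(sample) ** 0.5)
    return 2 * (1 - norm.cdf(abs(z)))

# 2000 experiments in which the null hypothesis holds exactly
pvals = [two_sided_p([random.gauss(0, 1) for _ in range(30)])
         for _ in range(2000)]

frac_small = sum(p < 0.05 for p in pvals) / len(pvals)
frac_large = sum(p > 0.95 for p in pvals) / len(pvals)
print(round(frac_small, 3), round(frac_large, 3))  # both near 0.05
```

A persistent excess above 0.95 in real data, as described in the question, is what this simulation rules out as chance.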
Hausman test: the larger the sample the more significant the Hausman test statistic?
First, for your question about the variance-covariance and s.e. relationship: the variance-covariance matrix is a symmetric matrix which contains on the off-diagonal elements the covariances between all your betas in the model. The main diagonal elements contain the variance of each beta. If you take the square root of the main diagonal entries, you get the standard error of your betas. Now to Hausman. Since random effects is a matrix-weighted average of the within and between variation in your data, it is more efficient (i.e. has lower variance) than the fixed effects estimator, which only exploits the within variation. If you want to test the difference between both models, you can write the test statistic as $$H = (\beta_{FE}-\beta_{RE})'[Var(\beta_{FE})-Var(\beta_{RE})]^{-1}(\beta_{FE}-\beta_{RE})$$ Given that RE is more efficient, the difference in the variances is positive definite - or at least it should be. If you use different variance estimators in the two regressions then $H$ may well be negative. Often this is a sign of model misspecification, but this is a tricky discussion as there can be other instances for which the test statistic may be negative. Let's not consider those for the moment, for simplicity. If you now increase the sample size, you correctly said that your estimators become more efficient. Consequently the difference $Var(\beta_{FE})-Var(\beta_{RE})$ becomes smaller. Note that this difference is the denominator of a fraction, so as the denominator becomes smaller the fraction becomes bigger. Maybe this is more intuitive if we consider the case when you are interested in a single variable (call it $k$) only. In this case the test statistic can be written as $$H =\frac{(\beta_{FE,k}-\beta_{RE,k})}{\sqrt{[se(\beta_{FE,k})^{2}-se(\beta_{RE,k})^{2}]}}$$ To give a numerical example let's start first with the small sample.
Let's say the difference in coefficients is 100 and their standard errors in FE and RE are 10 and 5, respectively: $$H_{small} =\frac{(100)}{\sqrt{[10^{2}-5^{2}]}} = 11.547$$ Then you increase the sample size and suppose the standard errors reduce by one half: $$H_{large} =\frac{(100)}{\sqrt{[5^{2}-2.5^{2}]}} = 23.094$$ Now you see how the test statistic becomes larger for a larger sample (as the denominator decreases in size thanks to the smaller standard errors). The intuition for the test statistic in matrix notation is the same.
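The arithmetic above is easy to check; since halving both standard errors halves the denominator, the statistic exactly doubles:

```python
import math

def hausman_scalar(diff, se_fe, se_re):
    """Square-root form of the Hausman statistic for a single coefficient."""
    return diff / math.sqrt(se_fe ** 2 - se_re ** 2)

h_small = hausman_scalar(100, 10, 5)    # small sample
h_large = hausman_scalar(100, 5, 2.5)   # standard errors halved
print(round(h_small, 3), round(h_large, 3))  # 11.547 23.094
```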
What does it mean to use a normalizing factor to "sum to unity"?
Unity just means 1, so they have presumably normalized their values so that they all sum to 1 instead of whatever their "natural" total is. I could imagine a few specialized normalization schemes, but this is typically done by dividing, and that's what I would assume in the absence of a more detailed description. If they had normalized so that the values summed to 100 instead, they'd be expressing it as a percentage. Suppose there is a substance made of three chemicals: 5L of Chemical A, 2L of Chemical B, and 3L of Chemical C. You could do a similar normalization and say that each litre of substance contains 0.5L of A, 0.2L of B, and 0.3L of C (each value has been divided by 10, the total, so all the values together sum to one). If you normalized to 100 instead of unity, then you could also say that the substance is 50% A, 20% B, and 30% C. One of the most common uses of this technique is to turn event counts into probabilities. By definition, probabilities lie in [0,1] (i.e., greater than or equal to zero and less than or equal to one). Suppose you have an urn with 10 balls in it, seven of which are red and three of which are blue. You could normalize these counts so that they sum to unity and restate this as the probability that a randomly chosen ball is red, $P(ball=\textrm{red}) = \frac{7}{7+3} = \frac{7}{10} = 0.7$ and $P(ball=\textrm{blue}) = \frac{3}{3+7} = \frac{3}{10}=0.3$.
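Both examples above come down to a single operation, dividing by the total. A hypothetical helper makes that explicit:

```python
def normalize(counts):
    """Divide each value by the total so the values sum to unity."""
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

chemicals = normalize({"A": 5, "B": 2, "C": 3})  # litres -> proportions
balls = normalize({"red": 7, "blue": 3})         # counts -> probabilities
print(chemicals)  # {'A': 0.5, 'B': 0.2, 'C': 0.3}
print(balls)      # {'red': 0.7, 'blue': 0.3}
```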
What does it mean to use a normalizing factor to "sum to unity"?
A natural application is conditional probabilities. If I roll a die, the unconditional probability of each outcome is ${1 \over 6}.$ But suppose I roll it and tell you that the outcome is at least 4. You can find the new conditional probabilities for rolls of 4, 5, or 6 by dividing ${1 \over 6}$ by ${1 \over 2}$ for each of the outcomes of 4, 5, or 6. This process of division ensures the conditional probabilities sum to unity as they must.
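The die example in code, assuming the divide-and-renormalize step described above: zero out the outcomes excluded by the evidence, then divide by the remaining mass.

```python
# Unconditional die probabilities
p = {face: 1 / 6 for face in range(1, 7)}

# Condition on "outcome is at least 4": zero out the rest, then renormalize
evidence = {f: (v if f >= 4 else 0.0) for f, v in p.items()}
z = sum(evidence.values())  # P(outcome >= 4) = 1/2, the normalizing factor
posterior = {f: v / z for f, v in evidence.items()}

print(posterior[4])             # (1/6) / (1/2) = 1/3
print(sum(posterior.values()))  # sums to unity, as it must
```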
Should I treat these ordinal IVs as covariates or factors, in a regression?
The distinction between a “factor” and a “covariate” is related to the nature of the predictor/independent variable. A factor is a nominal variable that can take a number of values or levels and each level is associated with a different mean response on the dependent variable. Even if the factor is coded using numbers, these numbers have no particular meaning. For example, it's perfectly possible for group ‘2’ to have a lower mean value on the dependent variable than group ‘1’ and ‘3’. Behind the scenes, in a regular ANOVA/linear model, the groups can be represented by a set of “dummy variables” with a different coefficient for each group. Ideally, a covariate should be a continuous and interval-level measure but in any case the values have to be meaningful because the relationship between covariates and outcome/dependent variable is quantitative. A simple linear model will have a single coefficient to capture this relationship. Other models (models with interactions, polynomial regression, splines, etc.) add some complications but it should be meaningful to think about the magnitude of the covariate. The notion that “factors” are essential and “covariates” can be left out stems from common study designs in psychology and some other fields. Typically, the main variable of interest will be manipulated by experimentally setting it to a handful of levels whereas demographic variables (age, personality, etc.) are simply measured on a more-or-less continuous scale. Consequently, the “factor” must definitely figure in the analysis but the “covariates” could possibly be ignored. The experimental design can also ensure that different factors are not correlated and the groups are balanced, which is not necessarily the case if you are merely observing/measuring variables. 
Mathematically, however, it does not make any difference whether you look at it as an ANCOVA, in which the continuous variables are called “covariates”, or a multiple linear regression, in which continuous variables are simply predictors (see When should one use multiple regression with dummy coding vs. ANCOVA?). You can also very much design a study where the main manipulation is quantitative (imagine something like manipulating the temperature of a room) but ancillary measures are binary (say gender). You would probably not call the temperature a “covariate” but it certainly should not be used as a “factor” in an ANOVA or left out of the model. Whether a variable is “essential” or was experimentally manipulated will change the interpretation but not necessarily the way it figures in the model. In your case, whether it is reasonable to treat multi-item Likert scales as interval measures could be debated and will also depend on the specifics of the data, but it is certainly pretty standard. They are definitely not nominal.
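To make the factor-vs-covariate distinction concrete, here is a hypothetical sketch of how the same ordinal codes enter the design matrix in each case: dummy coding (one 0/1 column per level beyond a reference level) versus a single quantitative column. The helper names are illustrative, not from any particular package.

```python
def as_factor(values):
    """Dummy-code a variable: one 0/1 column per level beyond the first."""
    levels = sorted(set(values))
    return [[1.0 if v == lvl else 0.0 for lvl in levels[1:]] for v in values]

def as_covariate(values):
    """Treat the same codes as a single quantitative column."""
    return [[float(v)] for v in values]

likert = [1, 2, 3, 3, 2]
print(as_factor(likert))     # 5 rows x 2 dummy columns (reference level 1)
print(as_covariate(likert))  # 5 rows x 1 numeric column
```

The factor version spends one coefficient per level and ignores the ordering; the covariate version spends one coefficient and takes the numeric spacing literally, which is exactly the trade-off discussed above.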
Classification of Huge number of classes
More than 100 classes shouldn't be a problem for most classification algorithms. However, if that number keeps increasing, you should start thinking about models designed for large-scale (in this case, in the number of classes) classification. You can probably find some hints in this (somewhat old) workshop on large-scale (hierarchical) text classification. As for the number of elements within classes, 1 or 2 examples is far too few. In my experience you need at least 10-20 examples per class, although this depends on several conditions such as the type of data and collection. To get new examples for some of the classes, have you considered some type of (semi-)manual labelling of documents to expand your training set?
48,112
Classification of Huge number of classes
For that many classes, and classes with very few samples, I would try triplet loss. For every sample, choose another sample of the same class and one from a different class, then train with the goal of minimizing the distance between samples of the same class while maximizing the distance between samples of different classes. You create a cluster space where classes separate from each other, and it can be used even with many classes and very few samples of each class.
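The idea above can be sketched numerically. This is a minimal hinge-style triplet loss on toy embeddings; the function name, margin, and vectors are illustrative, not from the question:

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge-style triplet loss: pull same-class pairs together,
    push different-class pairs at least `margin` apart."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(d_pos - d_neg + margin, 0.0)

# Embeddings where the positive is already much closer than the negative
a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # same class, nearby
n = np.array([3.0, 0.0])   # other class, far away
print(triplet_loss(a, p, n))  # 0.0: this triplet is already satisfied
```

Training then consists of minimizing the average of this loss over sampled triplets, pushing the violating ones (loss > 0) apart.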
48,113
Can Agresti-Coull binomial confidence intervals be negative?
The lower limit of the formula from your link cannot be negative. But the interval from your link is not the Agresti-Coull interval; it is the Wilson interval. Agresti and Coull list the formulas from your link in their paper and call it the score confidence interval (page 120). In the superb paper from Brown et al. (2001), Interval estimation for a binomial proportion, it is called the Wilson interval, which is its more common name. They show in their article that the Wilson interval performs well even with small n. In the output of binom.confint the Wilson interval is denoted as wilson and can be calculated by setting methods="wilson" in binom.confint. Here is the R code for the (modified) Wilson confidence interval: n <- 5 x <- 0 alpha <- 0.05 p.hat <- x/n upper.lim <- (p.hat + (qnorm(1-(alpha/2))^2/(2*n)) + qnorm(1-(alpha/2)) * sqrt(((p.hat*(1-p.hat))/n) + (qnorm(1-(alpha/2))^2/(4*n^2))))/(1 + (qnorm(1-(alpha/2))^2/(n))) lower.lim <- (p.hat + (qnorm(alpha/2)^2/(2*n)) + qnorm(alpha/2) * sqrt(((p.hat*(1-p.hat))/n) + (qnorm(alpha/2)^2/(4*n^2))))/(1 + (qnorm(alpha/2)^2/(n))) #============================================================================== # Modification for probabilities close to boundaries #============================================================================== if ((n <= 50 & x %in% c(1, 2)) | (n >= 51 & n <= 100 & x %in% c(1:3))) { lower.lim <- 0.5 * qchisq(alpha, 2 * x)/n } if ((n <= 50 & x %in% c(n - 1, n - 2)) | (n >= 51 & n <= 100 & x %in% c(n - (1:3)))) { upper.lim <- 1 - 0.5 * qchisq(alpha, 2 * (n - x))/n } upper.lim [1] 0.4344825 lower.lim [1] 3.139253e-17 Here, the lower limit is clearly 0 (the remainder is numerical error). 
The Wilson interval in the output of binom.confint can be calculated by setting the option methods="wilson" and this is the same as the one we've calculated above: library(binom) binom.confint(x=0, n=5, methods="wilson") method x n mean lower upper 1 wilson 0 5 0 3.139253e-17 0.4344825 The function binom.confint implements the formulas given on the Wikipedia page for the Agresti-Coull interval: n.hat <- n + qnorm(1-(alpha/2))^2 p.hat <- (1/n.hat) * (x + (1/2)*qnorm(1-(alpha/2))^2) upper.lim2 <- p.hat + qnorm(1-(alpha/2))*sqrt((1/n.hat)*p.hat*(1-p.hat)) lower.lim2 <- p.hat - qnorm(1-(alpha/2))*sqrt((1/n.hat)*p.hat*(1-p.hat)) upper.lim2 [1] 0.4890549 lower.lim2 [1] -0.05457239 They are the same as the ones by binom.confint with the option methods="ac": library(binom) binom.confint(x=0, n=5, methods="ac") method x n mean lower upper 1 agresti-coull 0 5 0 -0.05457239 0.4890549
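As a cross-check outside R, the unmodified Wilson formula can be reproduced in a few lines of Python; the function name is mine, and this sketch omits the boundary modification used above:

```python
from math import sqrt
from statistics import NormalDist

def wilson_ci(x, n, conf=0.95):
    """Wilson score interval for a binomial proportion (no boundary
    modification). The lower bound is always >= 0 by construction."""
    z = NormalDist().inv_cdf(1 - (1 - conf) / 2)
    phat = x / n
    denom = 1 + z**2 / n
    centre = (phat + z**2 / (2 * n)) / denom
    half = (z / denom) * sqrt(phat * (1 - phat) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

lo, hi = wilson_ci(0, 5)
print(lo, hi)  # lower is ~0 (up to floating-point rounding), upper ~0.4344825
```

For x = 0 the lower limit is algebraically exactly 0 (centre and half-width coincide), matching the 3.139253e-17 seen in R.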
48,114
How to balance classification?
If it is only 70%-30% there is probably no need to balance the dataset. The class imbalance problem is caused by not having enough patterns for the minority class, rather than by a high ratio of positive to negative patterns. Generally, if you have enough data, the "class imbalance problem" doesn't arise. Also, note that if you artificially balance the dataset, you are implying an equal prior probability of positive and negative patterns. If that isn't true, your model may give bad predictions by over-predicting the minority class. More importantly, there may be an overlap between classes such that the Bayes optimal decision is always to assign patterns to the positive class, in which case your model is doing exactly the right thing. Consider the case where there is one explanatory variable, which is distributed according to a standard normal distribution for both classes. In that case, as the positive class has a higher prior probability, the optimal model assigns all patterns to the positive class. Similar examples can be constructed where the class means are not the same, but the difference is small compared with the variation. If classifying the minority class well really matters, that suggests that the costs of false-positive and false-negative errors are not the same. This can be built into the classifier by changing the threshold, rather than the model, as you are using a logistic loss.
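The threshold adjustment in the last sentence can be sketched as follows. The probabilities and costs below are made-up numbers; the cut-off c_fp / (c_fp + c_fn) is the standard cost-minimising threshold for a probabilistic classifier:

```python
import numpy as np

# Predicted probabilities of the positive (minority) class from some model
p = np.array([0.2, 0.45, 0.55, 0.9])

# With equal costs the usual threshold is 0.5
print((p > 0.5).astype(int))        # [0 0 1 1]

# If a false negative costs 4x a false positive, the cost-minimising
# threshold drops to c_fp / (c_fp + c_fn) = 1 / (1 + 4) = 0.2
c_fp, c_fn = 1.0, 4.0
t = c_fp / (c_fp + c_fn)
print((p > t).astype(int))          # [0 1 1 1]
```

The model (and its probability estimates) is untouched; only the decision rule changes.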
48,115
ncvTest from R and interpretation
This test is more commonly known as the Breusch-Pagan test. It is a test for heteroscedasticity. In a standard linear model, the variance of the residuals is assumed to be constant (i.e. independent) over the values of the response (fitted values). In your specific case, there is some evidence for a non-constant variance of the residuals (heteroscedasticity). A good suggestion would be to plot the residuals vs. the fitted values, which you can do in R with plot(reg.mod) after calculating the regression model. Also, have a look at the search results for the term "Breusch-Pagan" on Cross Validated. There are a lot of questions similar to yours.
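For intuition, the Breusch-Pagan idea (regress the squared residuals on the design and take LM = n·R²) can be sketched on simulated heteroscedastic data; this is an illustrative Python version of the mechanics, not the ncvTest implementation itself:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
# Heteroscedastic errors: the spread grows with x
y = 1 + 2 * x + rng.normal(0, 0.5 + 0.3 * x, n)

# Fit y ~ x by least squares, get residuals and fitted values
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta

# Breusch-Pagan idea: regress squared residuals on the same design;
# LM = n * R^2 is asymptotically chi-squared with 1 df here
u = resid**2
g, *_ = np.linalg.lstsq(X, u, rcond=None)
r2 = 1 - np.sum((u - X @ g)**2) / np.sum((u - u.mean())**2)
lm = n * r2
print(lm > 3.84)  # exceeds the 5% chi-squared(1) critical value -> True
```

With a significant statistic, as in the question, the constant-variance assumption is rejected.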
48,116
Why is it called white noise? [closed]
White noise is a signal (e.g., a sound or image) that has approximately equal power in every frequency band. In other words, its power spectral density (PSD), or power spectrum, is flat. (If you're unfamiliar, the PSD/Power Spectrum/Spectrum is a plot showing the spectral content of a signal; that is, it shows the amount of power in/at each frequency or frequency band). People sometimes also use the term for a sequence of uncorrelated random variables. The White Noise page at wikipedia has several examples, if you want to see what it looks like, but the "ssssh" sound of turbulently flowing air or the static on a detuned analog TV set are pretty reasonable approximations. The name arises from an analogy with white light, which was thought to contain equal energy at all frequencies. This is actually not quite correct--it should be all wavelengths--but the name appears to have stuck and it's not clear to me how literal it was meant to be to begin with. The Oxford English Dictionary reports that it was used as early as 1922, but it looks like the term didn't really catch on until the 1940s: 1922 Nature 1 Apr. 414/2: Just as the spectrum of a hot body normally consists of a continuous spectrum of white light, together with certain spectrum lines the wave-lengths of which are characteristic of the radiating material, so an element emitting X-rays not only gives out ‘white’ radiation, but superposes its characteristic lines on the general spectrum. 1943 Jrnl. Aeronaut. Sci. 10 129/1 Inside the plane it is different; there all frequencies added together at once are heard, producing a noise which is to sound what white light is to light... That white noise is annoying needs little argument. In many, or probably even most contexts, white noise has little to do with light itself. There are other "coloured" noises which have different statistical properties. Red noise, for example, has a power spectrum dominated by low frequencies. 
As with white noise, the name may have been inspired by the fact that red light is at the low frequency end of the visible spectrum. However, a realization of (e.g.) red noise may not necessarily appear red, and a red-hued patch of noise may not be "red noise".
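The "flat power spectrum" property is easy to see numerically. A rough sketch, with illustrative parameters, averaging the periodogram over many short segments of a Gaussian white-noise signal:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=2**16)          # white noise: i.i.d. Gaussian samples

# Average the periodogram over many short segments (a crude Welch estimate)
segs = x.reshape(256, 256)
psd = np.mean(np.abs(np.fft.rfft(segs, axis=1))**2, axis=0)

# Flat spectrum: low- and high-frequency halves carry similar power
low = psd[1:64].mean()
high = psd[64:128].mean()
print(abs(low / high - 1) < 0.1)    # roughly equal power -> True
```

Repeating this with a low-pass-filtered signal would show power concentrated at low frequencies, i.e. the "red noise" mentioned above.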
48,117
Evaluate statistical significance of difference between outcomes of tests
McNemar's test is the two by two comparison. NB You want to record for tests #1 & #2 how many patients tested +ve in both #1 & #2, how many +ve in #1 but -ve in #2, how many -ve in #1 but +ve in #2, & how many tested -ve in both #1 & #2.
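With those four counts recorded, the classic (uncorrected) McNemar statistic uses only the two discordant cells; a minimal sketch with made-up counts:

```python
# McNemar's test uses only the discordant pairs: b = +ve on test 1 but
# -ve on test 2, c = -ve on test 1 but +ve on test 2.
b, c = 15, 5   # hypothetical discordant counts

# Classic (uncorrected) McNemar chi-squared statistic, 1 df
stat = (b - c)**2 / (b + c)
print(stat)            # 5.0
print(stat > 3.84)     # significant at the 5% level -> True
```

The two concordant cells (+/+ and -/-) do not enter the statistic, which is why they are often omitted from the computation even though they should still be recorded.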
48,118
Evaluate statistical significance of difference between outcomes of tests
I was going to suggest the chi-squared test, but I think that your suggestion of McNemar's test would be better. A related topic would be the Fisher's exact test: https://en.wikipedia.org/wiki/Fisher%27s_exact_test.
48,119
Evaluate statistical significance of difference between outcomes of tests
Perform McNemar's test for marginal homogeneity on the paired 2x2 table.
48,120
Evaluate statistical significance of difference between outcomes of tests
Paired proportions have traditionally been compared using McNemar's test but an exact alternative due to Liddell (1983) is preferable. Useful links: www.statsdirect.com/help/default.htm#chi_square_tests/mcnemar.htm freesourcecode.net/matlabprojects/68089 jech.bmj.com/content/37/1/82.abstract
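One common exact treatment conditions on the discordant pairs, which under the null hypothesis split as Binomial(b + c, 1/2). The sketch below implements that exact binomial version of McNemar's test; whether it coincides in every detail with Liddell's (1983) procedure is not claimed here:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact (binomial) McNemar test: under H0 the b + c discordant
    pairs split Binomial(b + c, 1/2). Returns a two-sided p-value."""
    n = b + c
    k = min(b, c)
    tail = sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(1.0, 2 * tail)

print(round(mcnemar_exact(15, 5), 4))  # 0.0414
```

Unlike the chi-squared approximation, this remains valid when the number of discordant pairs is small.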
48,121
What is an Hypergeometric distribution where the last event is success?
You're thinking of the negative hypergeometric distribution. The top result in a search led to this description: A negative hypergeometric distribution often arises in a scheme of sampling without replacement. If in the total population of size $N$, there are $M$ "marked" and $N-M$ "unmarked" elements, and if the sampling (without replacement) is performed until the number of "marked" elements reaches a fixed number $m$, then the random variable $X$ — the number of "unmarked" elements in the sample — has a negative hypergeometric distribution. The random variable $X+m$ — the size of the sample — also has a negative hypergeometric distribution.
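The sampling scheme in the quote is easy to simulate. The sketch below checks the known mean sample size m(N + 1)/(M + 1), a standard negative-hypergeometric fact stated here as an assumption:

```python
import random

def draws_until_m_marked(N, M, m, rng):
    """Sample without replacement until m marked elements are seen;
    return the total sample size (m plus the negative-hypergeometric
    count of unmarked elements drawn along the way)."""
    urn = [1] * M + [0] * (N - M)   # 1 = marked, 0 = unmarked
    rng.shuffle(urn)
    seen = 0
    for size, ball in enumerate(urn, start=1):
        seen += ball
        if seen == m:
            return size

rng = random.Random(0)
sizes = [draws_until_m_marked(N=20, M=5, m=2, rng=rng) for _ in range(100_000)]
# Mean sample size: m * (N + 1) / (M + 1) = 2 * 21 / 6 = 7
print(abs(sum(sizes) / len(sizes) - 7) < 0.1)  # Monte Carlo check -> True
```

Both the sample size and the count of unmarked elements (size minus m) follow negative hypergeometric distributions, as the quote says.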
48,122
What is an Hypergeometric distribution where the last event is success?
I'm far from a distributional connoisseur, but it seems to me there is no need for a special distribution. The hypergeometric distribution is for sampling without replacement and will work here. In your notation, $N$ is the population of balls containing $B$ black balls ("successes"). You draw a sample of $n$ balls. The probability that $k$ of them are black (= the probability that a sample of size $n$ is needed to contain $k$ black ones) is $Prob = PDF.HYPERGEOM(k,N,n,B)$. $Prob$ aggregates over all possible orders in which the balls could be drawn; there are $n!$ possible versions of order (permutations). Any particular ball could appear last (or first, or second - whatever you like) in $\frac{1}{n} n!$ versions. But you have $k$ black balls which are all the same for you in that respect. So, a $\frac{k}{n}$ fraction of the order versions will have a black ball as the last drawn; and thus $\frac{k}{n} Prob$ should be the probability you ask for.
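That $\frac{k}{n}\,Prob$ formula can be checked directly; a Python sketch computing it from binomial coefficients and verifying against simulation (the numbers N = 10, B = 4, n = 5, k = 2 are illustrative):

```python
from math import comb
import random

def prob_k_black_last_black(N, B, n, k):
    """P(the n-th draw is black AND the sample contains exactly k black),
    drawing n balls without replacement from N balls of which B are black."""
    hyper = comb(B, k) * comb(N - B, n - k) / comb(N, n)  # hypergeometric pmf
    return hyper * k / n

p = prob_k_black_last_black(N=10, B=4, n=5, k=2)
print(round(p, 6))  # 0.190476

# Monte Carlo sanity check
rng = random.Random(0)
trials, hits = 200_000, 0
for _ in range(trials):
    urn = [1] * 4 + [0] * 6
    rng.shuffle(urn)
    draw = urn[:5]
    if draw[-1] == 1 and sum(draw) == 2:
        hits += 1
print(abs(hits / trials - p) < 0.005)  # simulation agrees -> True
```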
48,123
Looking for a test for shape comparison
One thing I might do is some sort of local smoothing? I assume the smallest jitter would be noise that you don't want to influence your analysis. Not sure if scaling both series or subtracting out their means might help too. I'd follow up computing their cross correlation perhaps?
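A rough sketch of that pipeline (moving-average smoothing, standardising, then taking the peak of the cross-correlation as a shape-similarity score); all parameters and signals are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0, 4 * np.pi, 400)
a = np.sin(t) + rng.normal(0, 0.3, t.size)        # noisy series
b = np.sin(t - 0.5) + rng.normal(0, 0.3, t.size)  # same shape, shifted

def smooth(x, w=15):
    """Moving average to suppress the smallest jitter."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def standardise(x):
    """Remove the mean and rescale to unit variance."""
    return (x - x.mean()) / x.std()

sa, sb = standardise(smooth(a)), standardise(smooth(b))
xc = np.correlate(sa, sb, mode="full") / sa.size  # normalised cross-correlation
print(xc.max() > 0.85)  # shapes match closely at the best lag -> True
```

The lag at which the peak occurs (via np.argmax) also estimates the shift between the two curves.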
48,124
Looking for a test for shape comparison
Check out EDMA (euclidean distance matrix analysis); it's used for biological shape comparison and uses a nonparametric bootstrap of the differences in the coordinates between shapes. Here is a link to the author's site about the text on the subject http://getahead.psu.edu/purplebook_new.html and the actual software package http://www.getahead.psu.edu/EDMA_new.asp Alternatively, there are methods of Procrustes fitting of shapes to see differences in them; googling procrustes in R, I see http://cc.oulu.fi/~jarioksa/softhelp/vegan/html/procrustes.html
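The Procrustes idea can be sketched without any package: centre, rescale, find the optimal rotation by SVD, and report the residual disparity. This is ordinary Procrustes analysis written from scratch as an illustration, with a toy square as the shape:

```python
import numpy as np

def procrustes_disparity(X, Y):
    """Ordinary Procrustes superimposition: remove translation, scale
    and rotation, then return the residual sum of squares."""
    A = X - X.mean(axis=0)
    B = Y - Y.mean(axis=0)
    A = A / np.linalg.norm(A)
    B = B / np.linalg.norm(B)
    U, _, Vt = np.linalg.svd(B.T @ A)   # optimal rotation of B onto A
    R = U @ Vt
    return float(np.sum((A - B @ R)**2))

sq = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
theta = 0.7
rot = np.array([[np.cos(theta), -np.sin(theta)],
                [np.sin(theta),  np.cos(theta)]])
sq2 = 3.0 * sq @ rot.T + np.array([5.0, -2.0])  # same shape, transformed
print(procrustes_disparity(sq, sq2) < 1e-10)    # same shape -> True
```

A disparity near zero means the two landmark configurations have the same shape up to translation, scale and rotation; larger values quantify shape difference.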
48,125
Need help finding UMVUE for a Poisson Distribution
(a) As I mentioned in the comment, you should focus on the parameter of interest $\theta$; it is not good to write formulas that contain both $\theta$ and $\lambda$. Following this, it is routine to get the log-likelihood (denote $\sum X_i$ by $T$ and omit terms which don't contain $\theta$): $$\ell(\theta) = n\log\theta + T\log(-\log\theta)$$ Therefore, $$\ell'(\theta) = \frac{n}{\theta} + \frac{T}{\theta\log\theta}$$ $$\ell''(\theta) = -\frac{n}{\theta^2} - \frac{T(\log\theta + 1)}{(\theta\log\theta)^2}$$ As $E_\theta(T) = -n\log\theta$, it follows that the Fisher information is $$I(\theta) = -E_\theta(\ell''(\theta)) = -\frac{n}{\theta^2\log\theta}$$ Hence the C-R lower bound is given by $1/I(\theta) = \boxed{-\theta^2\log(\theta)/n}$. Since $\theta = P_\theta(X_i = 0)$, an unbiased estimate of $\theta$ could be the sample proportion of zeros, namely, $$\hat{\theta} = \frac{\sum_{i = 1}^n \mathrm{I}\{X_i = 0\}}{n}.$$ Or even simpler, just take $$\hat{\theta} = \mathrm{I}\{X_1 = 0\}.$$

(b) This is a routine application of Fisher's factorization theorem and the properties of the exponential family. Not hard if you write things out clearly.

(c) You may verify that (recall $T \sim \mathrm{Poisson}(n\lambda)$; here it is easier to go back to working with $\lambda$, but keep in mind that $\theta$ and $\lambda$ are in one-to-one correspondence): $$E_\theta(T^*) = \sum_{k = 0}^\infty \left(1 - \frac{1}{n}\right)^k e^{-n\lambda}\frac{(n\lambda)^k}{k!} = e^{-n\lambda}e^{n\lambda(1 - n^{-1})} = \theta.$$ Hence $T^*$ can (also) be used as an unbiased estimator of $\theta$ (in fact, it is the Rao-Blackwellization of $\hat{\theta}$; see the last paragraph of this answer). In addition, \begin{align*} E_\theta((T^*)^2) = \sum_{k = 0}^\infty \left(1 - \frac{1}{n}\right)^{2k} e^{-n\lambda}\frac{(n\lambda)^k}{k!} = e^{-n\lambda}e^{n\lambda(1 - n^{-1})^2} = \theta^{2 - \frac{1}{n}}. \end{align*} It then follows that $$\mathrm{Var}_\theta(T^*) = \theta^{2 - \frac{1}{n}} - \theta^2.$$ One may verify that the C-R lower bound is indeed strictly less (as @81235 pointed out in the comment) than $\mathrm{Var}_\theta(T^*)$, as a consequence of the famous inequality \begin{align*} e^{\frac{\lambda}{n}} > 1 + \frac{\lambda}{n}. \end{align*} On the other hand, $\mathrm{Var}_\theta(T^*)$ is indeed not bigger than $$\mathrm{Var}_\theta(\hat{\theta}) = P(X_1 = 0)(1 - P(X_1 = 0)) = e^{-\lambda}(1 - e^{-\lambda}) = \theta - \theta^2.$$ Together, we observe that $T^*$ is the UMVUE for $\theta$ (as the Rao-Blackwell argument, combined with the completeness from part (b) via Lehmann-Scheffé, guarantees), and that the UMVUE does not necessarily achieve the C-R lower bound.

Part (c) relates to parts (a) and (b) through the Rao-Blackwell theorem, which is worth elaborating. Basically, we want to show that the Rao-Blackwellized estimate $E(\hat{\theta} \mid T)$ based on the estimate $\hat{\theta}$ proposed in part (a) exactly yields $T^*$. Indeed, applying the classical distributional result \begin{align*} X_1 \mid X_1 + \cdots + X_n = t \sim \text{Binom}(t, n^{-1}), \end{align*} it follows that \begin{align*} E(\hat{\theta} \mid T) = P(X_1 = 0 \mid T) = (1 - n^{-1})^T = T^*, \end{align*} hence the form of $T^*$.
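As a quick numerical sanity check of these expectations (not part of the original solution; $n = 5$ and $\lambda = 1.3$ are arbitrary choices), one can sum the Poisson pmf directly and compare against $\theta$ and $\theta^{2 - 1/n}$:

```python
import math

def expect(f, lam, n, kmax=200):
    # E[f(T)] for T ~ Poisson(n * lam), summing f(k) * P(T = k)
    # with the pmf built up iteratively to avoid huge factorials
    mu = n * lam
    pmf = math.exp(-mu)            # P(T = 0)
    total = f(0) * pmf
    for k in range(1, kmax):
        pmf *= mu / k              # P(T = k) from P(T = k - 1)
        total += f(k) * pmf
    return total

n, lam = 5, 1.3
theta = math.exp(-lam)
m1 = expect(lambda k: (1 - 1 / n) ** k, lam, n)        # should equal theta
m2 = expect(lambda k: (1 - 1 / n) ** (2 * k), lam, n)  # should equal theta^(2 - 1/n)
```

The truncation at `kmax = 200` is harmless here since the Poisson tail decays super-exponentially.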
48,126
Need help finding UMVUE for a Poisson Distribution
How about the indicator function: $g(X_1)=I_{(X_1=0)}=\begin{cases} 1, & \text{if $X_1=0$ } \\ 0 & \text{otherwise} \\ \end{cases}$
48,127
Need help finding UMVUE for a Poisson Distribution
$x_1,...,x_n\sim Pois(\lambda)$

(a) We want to estimate $\theta=e^{-\lambda}$, which is exactly $P(x_1=0)$. We take $T(x)=I\{x_1=0\}$ as an estimator. It might look dumb, but it is an unbiased estimator: $$E[T(x)]=1\cdot P(x_1=0) + 0\cdot P(x_1 \neq 0)=P(x_1=0)=\frac{e^{-\lambda}\cdot\lambda^0}{0!}=e^{-\lambda}=\theta$$ Now, let's look at the MSE: $$E[(T(x)-\lambda)^2]=E[(T(x)-\theta+\theta-\lambda)^2]=E[(T(x)-\theta)^2]+2E[(T(x)-\theta)(\theta-\lambda)]+E[(\theta-\lambda)^2]$$ $T(x)$ is an unbiased estimator of $\theta$, so the cross term $2E[(T(x)-\theta)(\theta-\lambda)]$ vanishes. Using the MSE decomposition we get $b^2(T,\lambda)=(\theta-\lambda)^2$, so the bias of $T$ w.r.t. $\lambda$ is simply $\theta-\lambda$. Now, it is time for the mighty CRB: $$Var(T)=E[(T(x)-\lambda)^2]-(\theta-\lambda)^2 \geq \frac{\left(1+\frac{d}{d\lambda}b(T,\lambda)\right)^2}{I(\lambda)}$$ A momentary pause: the Fisher information for the Poisson is $I(\lambda)=\frac{1}{\lambda}$, so: $$Var(T)\geq\lambda \left(1+\frac{d}{d\lambda}(\theta-\lambda)\right)^2$$ Another pause: we were hinted to differentiate w.r.t. $\theta$, so we need to substitute $\lambda=-\log(\theta)$ and apply the chain rule to the derivative: $$\frac{db}{d\lambda}=\frac{db}{d\theta}\cdot\frac{d\theta}{d\lambda}=\frac{db}{d\theta}\cdot\frac{d}{d\lambda}(e^{-\lambda})=\frac{db}{d\theta}\cdot(-e^{-\lambda})=-\theta\cdot\frac{db}{d\theta}$$ So we substitute: $$Var(T)\geq -\log(\theta) \left(1-\theta\frac{d}{d\theta}(\theta+\log(\theta))\right)^2=-\log(\theta) \left(1-\theta\left(1+\frac{1}{\theta}\right)\right)^2=-\log(\theta) (1-\theta -1)^2$$ and finally get: $$Var(T)\geq-\log(\theta)\theta^2$$

(b) $$L(x,\lambda)=\prod_{i=1}^{n}{\frac{e^{-\lambda}\lambda^{x_i}}{x_i!}}=\frac{1}{\prod{x_i!}}e^{-n\lambda}\lambda^{\sum{x_i}}=\frac{1}{\prod{x_i!}}\exp(-n\lambda)\exp(\log(\lambda)\sum{x_i})$$ So by denoting $\eta=\log(\lambda), S(x)=\sum{x_i}, A(\eta)=n\lambda, h(x)=\frac{1}{\prod{x_i!}}$ we get a representation of the likelihood function as an exponential family. Reparametrizing using $\theta$ we get $\eta=\log(-\log(\theta)), A(\eta)=-n \log(\theta), S(x)=\sum{x_i}, h(x)=\frac{1}{\prod{x_i!}}$, again an exponential family. This is important, as we get that $S(x)=\sum{x_i}$ is a sufficient statistic due to properties of exponential families (Fisher-Neyman factorization theorem). $S(x)$ is the sum of $x_1,...,x_n\sim Pois(-\log(\theta))$, so $S(x)$ itself is a Poisson RV with parameter $-n\log(\theta)$. Now, let $g(S(x))$ be a function s.t. $E[g(S)]=0$ for all $\theta$: $$E[g(S)]=\theta^{n}\sum_{s=0}^{\infty}{g(s)\cdot\frac{\left(-n \log(\theta) \right)^s}{s!}}$$ A power series is identically zero only if all of its coefficients are zero, so if $E[g(S(x))]=0$ for all $\theta$ we conclude that $g(s)=0$ for all values of $s$; hence $S(x)=\sum{x_i}$ is a complete statistic. QED!

(c) This is almost cheating: the corollary of the Lehmann-Scheffé theorem is that if $S$ is a complete sufficient statistic, then applying the R-B procedure yields the UMVUE. One last momentary pause: we were told to "combine your findings in Parts (a) and (b)", so let's do it. In part (a) we got $Var(T)\geq-\log(\theta)\theta^2$, in part (b) we got that $S(x)=\sum{x_i}$ is a complete sufficient statistic, and R-B has yielded $T^*$, so $Var(T^*)\geq-\log(\theta)\theta^2$. That's it.

PS It would be proper to thank Prof. Pavel Chigansky of HUJI, who has taught us this exact problem in his statistical inference course.
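One can also sanity-check part (a)'s bound numerically: for the single-observation indicator, $Var(T)=\theta-\theta^2$ should sit strictly above $-\log(\theta)\,\theta^2$ for every $\theta\in(0,1)$ (a quick sketch, not part of the original derivation; the grid of $\theta$ values is arbitrary):

```python
import math

# gap between Var(T) = theta - theta^2 for T = I{x1 = 0}
# and the C-R bound -log(theta) * theta^2 from part (a)
def crb_gap(theta):
    return (theta - theta ** 2) - (-math.log(theta) * theta ** 2)

gaps = [crb_gap(t) for t in (0.05, 0.2, 0.5, 0.8, 0.95)]
```

The gap is positive everywhere, which is exactly the inequality $e^{\lambda} > 1 + \lambda$ in disguise.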
48,128
Binary Classifier with training data for one label only
This is actually a widespread situation: for example, in industrial quality control you want to decide whether a batch of product is fit for sale. Medical diagnosis (if it isn't a differential diagnosis) often faces the same problem as well. So-called one-class or unary classifiers address this. The idea is to model the "in" class independently of possible other classes. In chemometrics, SIMCA is a popular approach to this. Basically, you compress your class into a PCA model and then develop a boundary outside of which you deem it sufficiently improbable that a case belongs to that class. (For multiple independent classes, you do this for each class separately.) D.M. Tax: One-class classification -- Concept-learning in the absence of counter-examples, Technische Universiteit Delft, 2001 develops a one-class SVM.
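A minimal 1-D sketch of the one-class idea (not SIMCA or a one-class SVM; just a Gaussian model of the "in" class with an assumed $k\sigma$ acceptance boundary, where $k=3$ and the training values are made up):

```python
import math

def fit_one_class(xs, k=3.0):
    # model the "in" class as a 1-D Gaussian fitted on "good" cases only,
    # and accept new points within k standard deviations of the class mean
    n = len(xs)
    mu = sum(xs) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in xs) / (n - 1))
    return lambda x: abs(x - mu) <= k * sd

# no counter-examples ("out" class data) are needed at training time
accept = fit_one_class([9.8, 10.1, 10.0, 9.9, 10.2])
```

SIMCA applies the same logic per class after a PCA compression, so the boundary lives in the reduced score/residual space rather than on a raw feature.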
48,129
Binary Classifier with training data for one label only
If I understood you correctly, you have many data points for class A (auth.) and almost none for class B (imposter) in your (randomly chosen?) training set? From Wikipedia (Pseudocount):

In any observed data set or sample there is the possibility, especially with low-probability events and/or small data sets, of a possible event not occurring. Its observed frequency is therefore zero, apparently implying a probability of zero. This is an oversimplification, which is inaccurate and often unhelpful, particularly in probability-based machine learning techniques such as artificial neural networks and hidden Markov models. By artificially adjusting the probability of rare (but not impossible) events so those probabilities are not exactly zero, we avoid the zero-frequency problem.

Also see Cromwell's rule. So I would artificially include some data for the other, very rare label/class.
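The pseudocount idea can be sketched as additive (Laplace) smoothing; the class names, the count of 98 "auth" samples, and $\alpha = 1$ are made up for illustration:

```python
from collections import Counter

def smoothed_probs(labels, classes, alpha=1.0):
    # additive (Laplace) smoothing: add a pseudocount alpha to every class,
    # so a class never observed still gets a small nonzero probability
    counts = Counter(labels)
    total = len(labels) + alpha * len(classes)
    return {c: (counts.get(c, 0) + alpha) / total for c in classes}

p = smoothed_probs(["auth"] * 98, ["auth", "imposter"], alpha=1.0)
# the unseen "imposter" class now has probability 1/100 instead of 0
```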
48,130
Word entropy / frequency in human speech
This is a surprisingly frustrating thing to pin down. Shannon looked at this in one of the earliest information theory papers (Shannon, 1951) and estimated the entropy of printed text at around 1 bit/character, using a neat 'guessing game' paradigm. In the same paper, he estimates the entropy of a word at around 12 bits. Shannon, however, used a relatively small data set[*] and it turns out that the entropy depends on many factors. @Lmorin mentioned time above, but other relevant factors include the topic (children's books have a limited vocabulary, for example), modality, context, author's style, and so on!

The general term for $P(\textrm{word})$ is a language model, and computational linguists/natural language processing researchers spend a lot of time building them because they're very useful[**]. The models contain the per-character or per-word probability. A language model also often contains information about transitions between words. A trigram (or 3rd-order) model looks like $P(\textrm{Word}_n \mid \textrm{Word}_{n-1}, \textrm{Word}_{n-2})$. However, the probabilities usually aren't taken directly from the data---it's exceedingly sparse---so there are various smoothing/interpolation/back-off methods designed to produce reasonable probability distributions.

Any decent NLP textbook should have a chapter on language modelling. You might start with Chapter 6 of Manning and Schütze's "dice book" or Chapter 4 of Jurafsky and Martin. However, language models are so useful that they'll also show up in contexts as diverse as speech recognition, information retrieval, and even bioinformatics. This slide deck might be a good place to start if you want to read more.

There's also a fair bit of literature about human language models. Noam Chomsky famously ranted about how "the notion of 'probability of a sentence' is an entirely useless one, under any known interpretation of this term," but a lot of people have subsequently disagreed. If you're interested in this, you may want to look for papers on 'statistical learning' (not machine learning; psychologists use the term a bit differently).

[*] It was the 50s and he was presumably doing most of this manually, so…fair enough!

[**] In particular, it can help resolve ambiguities. Suppose you can't tell if a blob is actually a 'T' or an 'I' by itself. If one alternative produces a common word and one doesn't (Iherefore vs Therefore), it's pretty clear which one you should pick.
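A toy version of such a model might look like this: a bigram estimator with add-one smoothing over a tiny made-up corpus (a sketch only; real language models use huge corpora and much better smoothing than add-one):

```python
from collections import Counter

def bigram_model(tokens, vocab, alpha=1.0):
    # P(w_n | w_{n-1}) estimated with add-one (Laplace) smoothing,
    # so unseen bigrams still get a small nonzero probability
    bigrams = Counter(zip(tokens, tokens[1:]))
    unigrams = Counter(tokens[:-1])
    V = len(vocab)
    def p(prev, word):
        return (bigrams[(prev, word)] + alpha) / (unigrams[prev] + alpha * V)
    return p

tokens = "the cat sat on the mat".split()
p = bigram_model(tokens, set(tokens))
```

For any fixed previous word, the smoothed probabilities still sum to 1 over the vocabulary, which is what makes this a proper conditional distribution.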
48,131
Word entropy / frequency in human speech
The best answer I can give you: http://books.google.com/ngrams

Pros: As you can see, $p(x)$ is in fact $p(x,t)$; I think there are a lot of interesting (or funny) things to do with this information. (What happened to parentheses in the 17th century? http://books.google.com/ngrams/graph?content=%5B%28%5D%2C%5B%29%5D&year_start=1600&year_end=2000&corpus=15&smoothing=3&share=)

Cons: I don't know if you can get all that data easily. Also, you only get the percentage of books in which the n-gram appears, which is not really what you wanted. I think the two probabilities are linked, but the link will be hard to find without making questionable assumptions.
48,132
Generate distribution based on descriptive statistics
You must specify a model. You cannot estimate the model or generate a distribution function given the summary statistics. If you had the data, you could at best do non-parametric estimation, e.g. bootstrap or density estimation. Without the actual data you cannot do any non-parametric procedure--you must specify a parametric model. Given that you have sample moments, I suggest you pick a model and use method of moments to estimate it. If you don't know anything beyond that it's roughly normal just use a normal distribution, as you have no justification for using anything else.
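For instance, if one picked a Gamma model instead of a normal, the method of moments gives closed-form estimates from just the mean and SD (a sketch; the input values 101.73 and 20.45 are the summary stats used elsewhere in this thread, standing in for whatever stats you actually have):

```python
def gamma_mom(mean, sd):
    # method of moments for Gamma(shape, scale):
    # mean = shape * scale, variance = shape * scale^2
    var = sd ** 2
    shape = mean ** 2 / var
    scale = var / mean
    return shape, scale

shape, scale = gamma_mom(101.73, 20.45)
```

By construction, the fitted distribution reproduces the given mean and variance exactly; everything beyond the first two moments is imposed by the model choice, which is the point of the answer above.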
48,133
Generate distribution based on descriptive statistics
If you just want a distribution that looks approximately normal and satisfies your descriptive stats, here is one possible approach. Start with a normally distributed sample of 148 numbers and apply a series of transformations to (approximately) satisfy the descriptive stats. Of course, there are many distributions that could satisfy the problem...

# function for descriptive stats
stats = function(x) c(min(x), max(x), median(x), mean(x), sd(x))

# simple power transformation (hold min and max constant)
pow = function(x, lam) {
  t = (x - min(x))^lam
  (t / max(t)) * (max(x) - min(x)) + min(x)
}

# power transform of upper and lower halves of data (hold min, max, median constant)
pow2 = function(par, x) {
  m = median(x)
  t1 = pow(m - x[1:74], par[1])
  t2 = pow(x[75:148] - m, par[2])
  c(m - t1, t2 + m)
}

# transformation to fit minimum and maximum
t1 = function(x) {
  ((x - min(x)) / diff(range(x)) * 110) + 50
}

# optimise power transformation to match median
t2 = function(x) {
  l = optimise(function(l) { (median(pow(x, l)) - 97.7)^2 }, c(-5, 5))$min
  pow(x, l)
}

# optimise power transformation of upper and lower halves to fit mean and sd
t3 = function(x) {
  l2 = optim(c(1, 1), function(par) {
    r = pow2(par, x)
    (mean(r) - 101.73)^2 + (sd(r) - 20.45)^2
  })$par
  pow2(l2, x)
}

d = t1(sort(rnorm(148))); stats(d)
d = t2(d); stats(d)
d = t3(d); stats(d)  # result should match your descriptive stats

hist(d)  # looks normal-ish

# repeat and plot many distributions that satisfy the requirements
plot(d, cumsum(d), type = "l")
for (n in 1:500) {
  d = t3(t2(t1(sort(rnorm(148)))))
  lines(d, cumsum(d), col = rgb(1, 0, 0, 0.05))
}
48,134
Generate distribution based on descriptive statistics
You could use a mixture of normals. Choose the smallest number of components which gets you close enough to the distribution you have in mind. "Close enough" is a matter for your judgement. Here's an example.

# Parameters of the mixture
p1 = 0.6
m1 = 95
s1 = 6
m2 = 103
s2 = 26

# Number of obs.
n = 148

# Draw the component indicators
set.seed(31337)
mix_indicator = rep(1, n)
mix_indicator[which(runif(n) > p1)] = 2

# Draw the normals
draws = rnorm(n) * s1 + m1
draws[which(mix_indicator == 2)] = rnorm(sum(mix_indicator == 2)) * s2 + m2

print(mean(draws))       # 100.9
print(median(draws))     # 97.1
print(sqrt(var(draws)))  # 18.4
print(min(draws))        # 49
print(max(draws))        # 175
48,135
Meta-analysis and homogeneity -- what did these guys do?
One of the meta-analytic techniques for sensitivity analyses is known as "one study removed" and it means exactly that: what effect does each single included study have on the overall effect estimate? I haven't had a chance to look at the paper, but I can tell you from the description that the authors don't fully understand the issue of statistical heterogeneity or how to deal with it. You can't just say "my studies are heterogeneous across the board, therefore everything is fine, let's pool." You need to be methodical throughout the whole process, and checking the effect of each study on the overall heterogeneity is just one step. First they need to make sure their data are correctly extracted and inputted. The #1 cause of heterogeneity is wrong data (e.g. extracting SE instead of SD). If the data are valid, then they need to check for differences at each step in the PICOTSS (especially for clinical heterogeneity). If none are there, then come the statistical sensitivity analyses (e.g. removing one study at a time, unclear/high risk of bias trials vs. low risk of bias trials, funding sources, etc.). In the end, you may still not find a single source of heterogeneity. In this case, you have to make a judgment call on whether or not to present pooled results or just go with a descriptive analysis (most investigators like to pool). Hope this helps. Ahmed Abou-Setta, MD, PhD
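A minimal sketch of the "one study removed" idea, using a fixed-effect (inverse-variance) pooled estimate and made-up effect sizes and variances. Real software (e.g. metafor's leave1out() in R) does this plus heterogeneity statistics; this only shows the mechanics.

```python
# "One study removed": recompute the pooled estimate with each study left out
# in turn, to see how much any single study drives the overall result.
# Effect sizes and variances below are made up purely for illustration.
effects   = [0.30, 0.25, 0.40, 1.20, 0.35]   # study 4 looks like an outlier
variances = [0.04, 0.05, 0.03, 0.04, 0.06]

def pooled(es, vs):
    # fixed-effect (inverse-variance) weighted mean
    weights = [1.0 / v for v in vs]
    return sum(w * e for w, e in zip(weights, es)) / sum(weights)

overall = pooled(effects, variances)
for i in range(len(effects)):
    loo = pooled(effects[:i] + effects[i + 1:], variances[:i] + variances[i + 1:])
    print(f"without study {i + 1}: pooled = {loo:.3f} (all studies: {overall:.3f})")
```

Here dropping study 4 pulls the pooled estimate down sharply, which is exactly the kind of sensitivity this technique is meant to surface.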
48,136
Meta-analysis and homogeneity -- what did these guys do?
The more sophisticated underlying problem is this: are the apparent study-level or specification-level random effects approximately normal? Now consider the following hypotheses:

(1) There are no paper/specification level random effects - all of the variance in the estimates across studies is a result of within-study errors or fixed effects on study characteristics.

(2) There are paper/specification level random effects, and they are well represented by a single normal distribution.

(3) There are paper/specification level random effects, and they are not well represented by a single normal distribution.

Now if (3) is the case, one particular problem will be if there is large excess kurtosis. In this case, random effects in the extreme tails will occur with higher frequency than under normally distributed random effects. The kludge way to handle this is to simply remove the 'outliers' and see if the results change dramatically. The better way is to explicitly model non-normal random effects. There are a few promising ways to do this:

(a) use some single non-normal distribution
(b) use multiple distributions, with random assortment
(c) use multiple distributions, with assortment via some identifiable characteristics
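To see why excess kurtosis matters here, compare tail frequencies under normal random effects and under a variance-matched normal mixture. This is a toy simulation; the 95%/5% split and the component scales are arbitrary choices for illustration.

```python
import random

# Normal random effects vs. a variance-matched heavy-tailed mixture:
# 95% N(0,1) + 5% N(0,16) has variance 0.95*1 + 0.05*16 = 1.75, the same as
# the pure normal below -- but far more mass in the extreme tails.
random.seed(1)
n = 100_000
sd = 1.75 ** 0.5
normal_re  = [random.gauss(0, sd) for _ in range(n)]
mixture_re = [random.gauss(0, 1) if random.random() < 0.95 else random.gauss(0, 4)
              for _ in range(n)]

def tail_freq(xs, cut):
    return sum(abs(x) > cut for x in xs) / len(xs)

cut = 3 * sd  # "3 sigma" for both series
print(tail_freq(normal_re, cut))   # roughly the normal tail probability
print(tail_freq(mixture_re, cut))  # an order of magnitude larger
```

Same variance, very different tails: extreme random effects show up far more often than the normal model implies, which is what case (3) warns about.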
48,137
How do I calculate sample size so I can be confident that the sample mean approximates the population mean?
For example, for a population of 1,000,000 with a mean of 0.90 and a population standard deviation of 1.32 I would need a sample n to be 99% confident that the sample mean is within 1% of the population mean.

Okay.

Sampling would be without replacement. With a million in the population? To a first approximation, it doesn't matter enough to be worth worrying about. Actually, it turns out in this case it does. I'll do it both without replacement and with. With replacement is simpler, and I do it first.

Distribution is normal. Don't need it. The sample size will be large enough that, with the other assumptions, only really strongly non-normal distributions will have any impact.

Can we assume independence (apart from the effect of sampling without replacement), e.g. sampling completely at random? I'll take it that we can.

$\mu = 0.90$, $\sigma = 1.32$.

Want 'to be 99% confident that the sample mean is within 1% of the population mean', i.e. find $n$ such that $P(|\bar{x}-\mu| < .01\mu) = 0.99$, where $\bar{x}-\mu \sim N(0, \frac{\sigma^2}{n})$.

99% of a normal distribution is within 2.576 s.d.'s of the population mean (this figure is gettable from normal tables, or using a function in a program; I used R).

Thus I need $2.576 \times \sigma/\sqrt{n} < 0.01 \mu = 0.009$.

Hence $2.576^2 \sigma^2/n < 0.009^2$, hence $2.576^2 \sigma^2 < n \times 0.009^2$, or $n > (2.576 \times 1.32/0.009)^2 = 142742.9$.

So if $n$ is about 142700 (the means and sd's and normal table values were only accurate to about the same number of figures - only the first 3-4 digits will be meaningful), then the required probability statement should hold.

If we allow for the 'without replacement', the sample size would reduce by about 12.5% (google for finite population correction to the variance); other factors are likely to affect you by more than a couple of percent (like not having perfectly random sampling, for one example). Let's look at the without replacement case using the finite population correction now.

The finite population correction multiplies the variance by a factor $f = \frac{N-n}{N-1} = 1-\frac{n-1}{N-1}$. Some people approximate this by $1 -\, n/N$, which is easily accurate enough with the large numbers for $n$ and $N$ involved here. However, I'll try to do the first version here.

$2.576^2 \sigma^2 (N-n)/(N-1) < n \times 0.009^2$

$(2.576\sigma/0.009)^2 /(N-1) < n/(N-n)$

$(2.576\sigma/0.009)^2 /(N-1) < 1/[N/n\,\,\, -1]$

$142743 \times 1000000/1142742 < n$

So (if I did that right), $n > 124912.7$, or to the accuracy in the normal value, $n$ should be about $124900$ (assuming the mean and s.d. are actually accurate to at least 4 figures, too).

Calculation check: interval half-width $= (2.576\times 1.32/\sqrt{124900})\sqrt{(1000000-124900)/999999} = 0.00900$
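The arithmetic above is easy to check in a few lines (any language works; a Python sketch):

```python
# Checking the sample-size arithmetic above.
z, sigma, mu, N = 2.576, 1.32, 0.90, 1_000_000
half_width = 0.01 * mu                    # 0.009

n0 = (z * sigma / half_width) ** 2        # with replacement: ~142743
n_fpc = n0 * N / (n0 + N - 1)             # finite population correction: ~124913

print(round(n0, 1), round(n_fpc, 1))
print(1 - n_fpc / n0)                     # relative reduction from the FPC (~0.125)
```

The second line follows from rearranging $(2.576\sigma/0.009)^2/(N-1) < 1/[N/n - 1]$ to $n > n_0 N/(n_0 + N - 1)$.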
48,138
How to deal with an unavoidable correlation between two independent variables?
The first question to ask is: do you actually need to care? If you're just trying to predict the cost of future lunches, then this isn't really an issue. On the other hand, if you're trying to assess the relative contributions of Class #1 and Class #2 students to the cost, then collinearity is a bigger problem.

In a well-behaved, non-collinear model, we might take a model like $y = \beta_0 + \beta_1 \cdot x_1 + \beta_2 \cdot x_2$ and fit it with our data to find the $\beta$ values. We might find that $\beta_1 = 2$ and $\beta_2 = -0.5$, which would indicate that a one unit increase in $x_1$ results in a 2 unit increase in $y$, while a similar change in $x_2$ causes a half-unit decrease in $y$.

However, if $x_1$ and $x_2$ are highly correlated, this interpretation goes right out the window. Suppose we fit a model $Y = \beta_0 + \beta_1 \cdot x_1$ and found that $\beta_0 = 0$ and $\beta_1 = 4$. Everything's great! Now we do something dumb and fit this model instead: $Y = \beta_0 + \beta_1 \cdot x_1 + \beta_2 \cdot x_2$, where $x_1 = x_2$ (in other words, $x_1$ and $x_2$ are completely correlated). In this case, we can pick literally any set of $\{\beta_1, \beta_2\}$ values that add up to four: (2,2), (1,3), (1003, -999), and so on: these are all the points on the line $x+y=4$ (hence the name!). These all give you the same prediction, but depending on your choice you would be claiming that a 1 unit increase in $x_1$ is associated with a 2, 1, or 1003 unit increase in $y$, respectively, which can't all be correct! This is obviously an extreme example, but you could imagine similar things happening when the $x$'s are somewhat less strongly correlated.

I'm also tempted to ask why you're separating out students by class--is there some reason to think that Class #1 and Class #2 students contribute differently to the price of lunch? Perhaps a model where you regress lunch cost ~ total number of students would be more appropriate?
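The non-identifiability is easy to demonstrate directly: with $x_1 = x_2$, any pair of coefficients summing to 4 gives identical fitted values. This is a toy check with made-up data, not a fitted model:

```python
# With x1 identical to x2, every coefficient pair (b1, b2) with b1 + b2 = 4
# produces exactly the same fitted values -- the data cannot tell them apart.
x1 = [1.0, 2.0, 3.0, 4.0]
x2 = x1[:]                      # perfectly correlated predictor

def predict(b0, b1, b2):
    return [b0 + b1 * a + b2 * b for a, b in zip(x1, x2)]

fits = [predict(0, 2, 2), predict(0, 1, 3), predict(0, 1003, -999)]
print(fits[0] == fits[1] == fits[2])  # -> True
```

Identical predictions, wildly different "effects" -- which is why the individual coefficients stop being interpretable.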
48,139
How to deal with an unavoidable correlation between two independent variables?
Based on the fact that it's the average age of Class 2 vs. Class 1 that (you hypothesize) may matter, you could try a model where the response is Lunch Cost, and the predictors are:

- a factor for whether a student is in class 1 or class 2
- the student's age

This way, you can ask whether age matters, and whether belonging to class 2 (rather than class 1, which would be a baseline) also matters.
48,140
Why does MICE fail to impute multilevel data with 2l.norm and 2l.pan?
This is a bug in mice 2.15 and before. mice.impute.2l.norm() and mice.impute.2l.pan() will fail if the cluster variable is a factor. Use as.integer(dfr$group) as a temporary fix in your data. I will address the issue in a future release. Thanks for your persistence.
48,141
Latent variables in Bayes nets with no physical interpretation
The only reasonable answer to me seems that latent variables are the parameters of a distribution written as if they were real variables, while they don't have any physical interpretation. Bishop is always very precise and clear; I wonder why this time he didn't use the single word "parameters", which would have been enlightening.
48,142
Latent variables in Bayes nets with no physical interpretation
First, note that observed variables and latent variables both have probability distributions, while parameters are fixed. A helpful example (with a figure) can be found in Koller and Friedman's PGM textbook. Note that incorporating the latent variable H in the left-hand model of that example reduces the parameter space of the overall graphical model. An I-equivalent graph can be drawn without the latent variable H (as in the right-hand model), but it may require many more parameters than a model that incorporates latent variables. Choosing between the two is a modeling decision (that can come down to statistical vs. computational simplicity). H in the left-hand model need not have any physical representation, but it may be included or removed in the model for interpretability, or other requirements of the problem (e.g., sampling, inference). That is, there are often context-specific trade-offs that need to be made in determining the graph's structure. Hope this helps!
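One way to see the parameter-space reduction: with $n$ binary observed variables, a model with one binary latent parent H needs $1 + 2n$ free parameters, while a saturated joint over the observed variables needs $2^n - 1$. (The numbers below are generic parameter counts, not taken from Koller and Friedman's specific example.)

```python
# Free-parameter counts for n binary observed variables:
#   one binary latent parent H:  P(H) + P(X_i | H) for each i  ->  1 + 2n
#   saturated joint, no latent:  2^n - 1
# (The latent model is cheaper because it assumes the X_i are conditionally
#  independent given H -- a modeling assumption, not a free lunch.)
for n in (3, 5, 10, 20):
    print(n, 1 + 2 * n, 2 ** n - 1)
```

The gap grows exponentially, which is one statistical/computational reason to include a latent variable even without a physical interpretation.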
48,143
Hot deck imputation: validity of double imputation and selection of deck variables for a regression
Hot deck is often a good idea to obtain sensible imputations, as it produces imputations that are draws from the observed data. However, filling in a single value for the missing data produces standard errors and P values that are too low. For correct statistical inference you could use multiple imputation. It is easy to apply hot deck imputation in combination with multiple imputation. The most popular technique for doing this is known as predictive mean matching, and it has been implemented on a variety of platforms.
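A toy sketch of the matching idea behind predictive mean matching. The data and the 3-donor pool are made up; real implementations (e.g. mice's) are more elaborate, for instance also drawing the regression coefficients to propagate uncertainty across imputations.

```python
import random

# Toy predictive mean matching: regress y on x among complete cases, then for
# each case with missing y, find the k complete cases whose *predicted* y is
# closest and copy one of their *observed* y values. Data here are made up.
random.seed(0)
obs_x = [1, 2, 3, 4, 5, 6, 7, 8]
obs_y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.2, 13.9, 16.1]   # roughly y = 2x
mis_x = [2.5, 6.5]                                     # cases with y missing

# least-squares slope and intercept, computed by hand
m = len(obs_x)
mx, my = sum(obs_x) / m, sum(obs_y) / m
slope = (sum((x - mx) * (y - my) for x, y in zip(obs_x, obs_y))
         / sum((x - mx) ** 2 for x in obs_x))
intercept = my - slope * mx

def impute(x, k=3):
    pred = intercept + slope * x
    donors = sorted(range(m),
                    key=lambda i: abs(intercept + slope * obs_x[i] - pred))[:k]
    return obs_y[random.choice(donors)]   # draw from the k closest donors

print([impute(x) for x in mis_x])         # imputations are real observed values
```

Because each imputation is a real observed value, the hot-deck property is preserved; repeating the draw gives the between-imputation variability that multiple imputation needs.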
48,144
K-L divergence is 0 for clearly different distributions. Why?
I suspected numerical instability of some sort. What appears to be happening is that because the ranges of A and D are so large, the densities at each data point are very small. It appears as if KLdiv cuts off low densities at 1e-4 by default (this can be changed, but I don't know if you'll introduce problems that way). Also, I think you need to evaluate the two densities on the same grid.
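A sketch of evaluating both densities on the same grid, using histogram estimates with a small floor so the log stays finite. All the numbers here (sample sizes, bin count, floor) are illustrative choices, not KLdiv's internals.

```python
import math
import random

# KL estimate from histogram densities evaluated on a *shared* grid.
random.seed(42)
a = [random.gauss(0, 1) for _ in range(10_000)]
d = [random.gauss(5, 1) for _ in range(10_000)]

lo, hi, bins = min(a + d), max(a + d), 50
width = (hi - lo) / bins

def density(xs, eps=1e-10):
    counts = [0] * bins
    for x in xs:
        counts[min(int((x - lo) / width), bins - 1)] += 1
    # floor tiny densities so log() stays finite; too aggressive a floor
    # (like a 1e-4 cutoff) can badly shrink the estimate
    return [max(c / (len(xs) * width), eps) for c in counts]

p, q = density(a), density(d)
kl = sum(pi * math.log(pi / qi) * width for pi, qi in zip(p, q))
print(kl)   # clearly positive for these well-separated samples
```

The key points are the shared grid and the handling of near-zero densities -- both places where a canned routine's defaults can silently change the answer.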
48,145
Index plot for each cluster sorted by the silhouette
The silhouette is computed for each observation $i$ as $s(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))}$ where $a(i)$ is the average dissimilarity with members of the cluster to which $i$ belongs, and $b(i)$ the minimum average dissimilarity to members of another cluster. The silhouette values of members of a cluster $k$ are at the same positions as the values $k$ in the cluster membership vector cluster.object. So you do not have anything more to do. Your seqIplot command will automatically produce one index plot for each cluster with the sequences sorted by their silhouette values in each cluster. Sequences will be sorted bottom up from the lowest to the highest silhouette value, meaning that the sequences with the best silhouette values in each cluster are at the top of the plots. Hope this helps.
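The formula is simple enough to compute by hand for a toy example -- two tight, well-separated 1-D clusters, so every silhouette value should be near 1 (the points below are made up):

```python
# Silhouette by hand for a toy 1-D example with two well-separated clusters:
# s(i) = (b(i) - a(i)) / max(a(i), b(i))
points   = [1.0, 1.2, 1.4, 8.0, 8.3, 8.6]
clusters = [0,   0,   0,   1,   1,   1]

def silhouette(i):
    own   = [abs(points[i] - points[j]) for j in range(len(points))
             if j != i and clusters[j] == clusters[i]]
    other = [abs(points[i] - points[j]) for j in range(len(points))
             if clusters[j] != clusters[i]]
    a = sum(own) / len(own)
    b = sum(other) / len(other)   # min over other clusters; only one other here
    return (b - a) / max(a, b)

sil = [round(silhouette(i), 2) for i in range(len(points))]
print(sil)   # all close to 1, as expected for tight, well-separated clusters
```

Sorting sequences within each cluster by these values is exactly what the plotting command does for you.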
48,146
Machine learning predicted value
There are many machine learning methods that do aim to estimate the conditional mean of the data, such as artificial neural networks, but there are also many that do not (such as SVMs, decision trees, etc.). The motivation of the SVM is that it is better to solve the particular problem at hand directly, rather than solve a more general problem and simplify the result. So if you are only interested in a hard binary classification, in principle that ought to be easier than estimating the a-posteriori probability of class membership and then thresholding at 0.5. Whether that is true in practice is debatable, but also in my experience in practice you often do want the a-posteriori probabilities because training set and operational class frequencies are different or variable, or equivalently the misclassification costs are not known at training time or are variable, or you need a reject option, etc. So whether a particular method estimates the conditional mean of the response variable depends on what task the method was intended to solve. Note for the SVM there is an alternative that does estimate the conditional mean of the data, namely kernel logistic regression for classification and kernel ridge regression for regression problems. The loss function that is minimised has a lot to do with whether the model predicts the conditional mean of the response variable; pretty much any method that minimises a sum-of-squared-error loss (or cross-entropy for classification) will have this property, see e.g. Saerens, M., "Building cost functions minimizing to some summary statistics", IEEE Transactions on Neural Networks, volume 11, issue 6, pages 1263-1271, 2000.
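The loss-function point can be illustrated with a tiny brute-force search: the minimiser of squared error is the mean of the data, while the minimiser of absolute error is a median (toy data, grid search only):

```python
# The loss function picks the summary statistic: squared error -> mean,
# absolute error -> median. Brute-force grid search on toy data.
ys = [1.0, 2.0, 3.0, 10.0]

def argmin_loss(loss):
    grid = [i / 100 for i in range(0, 1101)]          # candidates in [0, 11]
    return min(grid, key=lambda c: sum(loss(y, c) for y in ys))

best_sq  = argmin_loss(lambda y, c: (y - c) ** 2)     # the mean, 4.0
best_abs = argmin_loss(lambda y, c: abs(y - c))       # a median, in [2, 3]
print(best_sq, best_abs)
```

Note how the outlier at 10 drags the squared-error minimiser up but leaves the absolute-error minimiser alone -- the two losses estimate genuinely different conditional summaries.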
48,147
EM algorithm R code on Cox PH model with frailty
coxph() actually implements a penalised log-likelihood approach, which turns out to return the same estimates as the EM algorithm in the case of gamma frailties when method="em"; see Therneau and Grambsch (2000, Section 9.6). (method actually refers to the method used to select a solution for theta, the heterogeneity parameter, not to the estimation procedure.) Both algorithms are clearly detailed in Duchateau and Janssen (2008, Chapter 5). Implementing the EM algorithm yourself would require quite a lot of work, but should be possible along these lines. By the way, there is a SAS macro called gamfrail written by Klein that already does the job, even though it is not very user-friendly. It can be downloaded here together with a guide.
48,148
How to interpret the coefficients returned by cv.glmnet? Are they feature-importance?
First of all, any variable with a coefficient of zero has been dropped from the model, so you can say it was unimportant. Second of all, you can't really make inferences about the importance of coefficients, unless you scaled them all prior to the regression, such that they all had the same mean and standard deviation (and even then you have to be careful!). If your variables are un-scaled, variables with larger averages will tend to have larger absolute coefficients. Another option would be to bootstrap sample your data, fit a model to each sample, and calculate confidence intervals around your coefficients. Finally, how are you choosing the "alpha" parameter for your model?
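The bootstrap suggestion in the last paragraph can be sketched with plain NumPy on hypothetical data; an ordinary least-squares fit stands in for the penalised glmnet fit just to keep the sketch short:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: y depends on x1 (slope 2) but not on x2 (slope 0).
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n), rng.normal(size=n)])
y = X @ np.array([1.0, 2.0, 0.0]) + rng.normal(size=n)

# Resample rows with replacement, refit, and collect the coefficients.
boot_coefs = []
for _ in range(500):
    idx = rng.integers(0, n, size=n)
    b, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
    boot_coefs.append(b)
boot_coefs = np.array(boot_coefs)

# Percentile confidence intervals for each coefficient.
lo, hi = np.percentile(boot_coefs, [2.5, 97.5], axis=0)
print(lo, hi)  # with this synthetic data the x2 interval should sit near zero
```

With a penalised model you would refit cv.glmnet on each resample instead of lstsq; the percentile step is unchanged.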
48,149
Other substitution matrices for missing value state in sequence analysis with TraMineR?
You are right: to compute "OM" dissimilarities with missing states you need substitution costs for replacing missing values. However, this is exactly what the TraMineR seqdist function expects. The seqdist help page states: "If the OM method is selected, seqdist expects a substitution cost matrix with a row and a column entry for the missing state (symbol defined with the nr option of seqdef)." An easy way to define such a matrix respecting the correct order of the alphabet augmented with the missing state is to first create such a matrix with, for example,

sm <- seqsubm(yourseq, method = "CONSTANT", with.missing = TRUE)

and then replace the content of the matrix with your wanted costs before passing it to seqdist. You can also make use of the miss.cost argument to set a constant substitution cost for missing states. As for the imputation of missing states, in addition to Brendan Halpin's nice multiple imputation solution, you could also consider exploiting the predictive capacities of probabilistic suffix trees proposed in the just released Alexis Gabadinho's PST package. Hope this helps.
48,150
Other substitution matrices for missing value state in sequence analysis with TraMineR?
Thank you for answering, but as far as I can see, my question isn't answered. Because I'm not even sure whether I posted it the way I should, I'll try to ask again this way: we tried this before, the way Gilbert described it. Using "seqdef" (and seqsum before), there is indeed the opportunity to define 'real states', but not the gaps, with defined index costs, or am I wrong? Or the other way around: is there a 'code' for defining the gaps? I ask because the custom costs are only accounted for 'real states' (not the gaps) if you set substitution costs like this:

R> subm.custom <- matrix(c(0, 1, 1, 2, 1, 1,
                           1, 0, 1, 2, 1, 2,
                           1, 1, 0, 3, 1, 2,
                           2, 2, 3, 0, 3, 1,
                           1, 1, 1, 3, 0, 2,
                           1, 2, 2, 1, 2, 0),
                         nrow = 6, ncol = 6, byrow = TRUE,
                         dimnames = list(mvad.shortlab, mvad.shortlab))

Anybody can help? (If my question isn't understandable, please let me know.)
48,151
Poisson regression with (auto-correlated) time series
I had a similar problem and was told to consult Chapter 4 of Regression Models for Time Series Analysis by Benjamin Kedem and Konstantinos Fokianos. I have not yet gotten around to digesting this book, but it looks highly relevant (though fairly technical) as far as I can tell. I also wonder if this can be handled in a GLM framework with Poisson family, a log link function, and Newey-West standard errors. This is one line of code in Stata (after tsseting your data) and perhaps fairly doable in other packages. Here's a link to an old Stata Technical Bulletin article by James Hardin with the variance formulas for the probit, logit, and poisson. Perhaps one of the time-series mavens can comment on whether this would be a terrible idea.
48,152
Poisson regression with (auto-correlated) time series
1. Use negative binomial regression, which deals with the overdispersion. In Stata, this is nbreg.
2. Use zero-inflated negative binomial regression, which deals with the excessive zeros. In Stata, this is zinb.
3. & 4. You could try orthogonalizing the autocorrelated variables. In Stata, this is orthog var1 var2 var3, gen(newvar1 newvar2 newvar3).
48,153
What's the Bayesian counterpart to Pearson product-moment correlation?
There is no essential Bayesian / frequentist divide with a correlation, any more than there is a Bayesian equivalent of a mean or median. A correlation is just an arithmetic calculation. The need for specific Bayesian techniques only arises when you do inference with it, so the appropriate Bayesian approach would depend on what your actual question is. But there's no fundamental reason why it wouldn't involve Pearson's product-moment correlation.
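To make the point concrete, here is a toy sketch of one possible Bayesian inference built around exactly the Pearson quantity: a grid posterior for the correlation of a bivariate normal, with (assumed) standard-normal margins and a flat prior. The data and modelling choices are illustrative assumptions, not a recommendation:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic bivariate normal data with true correlation 0.6.
n = 100
cov = np.array([[1.0, 0.6], [0.6, 1.0]])
xy = rng.multivariate_normal([0.0, 0.0], cov, size=n)
x, y = xy[:, 0], xy[:, 1]

# Grid posterior for rho under a flat prior, assuming unit-variance margins.
rhos = np.linspace(-0.99, 0.99, 397)

def loglik(r):
    # Bivariate normal log-likelihood with unit variances and correlation r.
    q = (x**2 - 2.0 * r * x * y + y**2) / (1.0 - r**2)
    return -n * np.log(2.0 * np.pi * np.sqrt(1.0 - r**2)) - 0.5 * np.sum(q)

ll = np.array([loglik(r) for r in rhos])
post = np.exp(ll - ll.max())
post /= post.sum()

post_mean = float(np.sum(rhos * post))
print(post_mean)  # close to the sample Pearson correlation
```

With a flat prior and moderate n, the posterior concentrates near the sample Pearson correlation, which is the sense in which the Bayesian answer still "involves" the same quantity.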
48,154
How to find the rows that meet some conditions in a sequence data set
To select some sequences, you need to create a condition vector. For instance, you can select the sequences with a length lower than 1440 using the seqlength function. Here is an example with the "mvad" data set.

## Loading the library
library(TraMineR)
data(mvad)

## Defining sequence properties
mvad.alphabet <- c("employment", "FE", "HE", "joblessness", "school", "training")
mvad.lab <- c("employment", "further education", "higher education", "joblessness", "school", "training")
mvad.shortlab <- c("EM", "FE", "HE", "JL", "SC", "TR")

## The state sequence object.
mvad.seq <- seqdef(mvad, 17:86, alphabet = mvad.alphabet, states = mvad.shortlab,
                   labels = mvad.lab, xtstep = 6)

Now we can compute the sequence lengths and build the vector (here all sequences have the same length of 70, so it does not make a lot of sense...). We used "<=", otherwise no sequences are selected, but in your case you should use "<".

condition <- seqlength(mvad.seq) <= 70
seqdplot(mvad.seq[condition, ])

To count the number of times a state appears in each sequence, you can use the seqistatd function. For instance, if we want to select all sequences containing the "JL" (joblessness) state, we can use:

state.count <- seqistatd(mvad.seq)
condition <- state.count[, "JL"] > 0
seqdplot(mvad.seq[condition, ])

You can use the same strategy for the "*" missing state. There is no need to count "%" (void), since that leads to exactly the same result as using seqlength.
48,155
Mahalanobis Distance on Singular Data
Why do you think there is no way that matrix could be singular? A QR decomposition shows that the rank of this 380 x 372 matrix is just 300. In other words, it is highly singular:

url <- "http://mkk.szie.hu/dep/talt/lv/CentInpDuplNoHeader.txt"
df <- read.table(file = url, header = FALSE)
m <- as.matrix(df)
dim(m)
# [1] 380 372
qr(m)$rank
# [1] 300

Examining the matrix's singular values is another way to see the same thing:

head(table(svd(df)$d))
# 5.76661502353373e-13 2.57650568058543e-12  0.00929562094651422
#                   71                    1                    1
#   0.0277990885015625   0.0398152894712022   0.0469713341003743
#                    1                    1                    1
48,156
Mahalanobis Distance on Singular Data
A singular matrix means that some of the vectors are linear combinations of others. Thus, some vectors do not add any useful information to the Mahalanobis distance calculation. A generalized inverse or pseudoinverse effectively calculates an "inverse-like" matrix that ignores some of this noninformative information. This is superior to other methods that effectively add in a small amount of incorrect information (i.e. add a small constant to all data). Pseudoinverse covariance matrices have been used successfully with the Mahalanobis distance, see http://www.sciencedirect.com/science/article/pii/0146664X79900522.
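A minimal NumPy sketch of the pseudoinverse variant on synthetic rank-deficient data (an illustration, not code from the linked paper):

```python
import numpy as np

rng = np.random.default_rng(1)

# Rank-deficient data: the third column is a linear combination of the
# first two, so the sample covariance matrix is singular.
base = rng.normal(size=(100, 2))
X = np.column_stack([base, base[:, 0] + base[:, 1]])

mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)
print(np.linalg.matrix_rank(cov))  # less than 3: the ordinary inverse fails

# The Moore-Penrose pseudoinverse simply ignores the redundant direction.
cov_pinv = np.linalg.pinv(cov)

def mahalanobis(x, mu, cov_pinv):
    d = x - mu
    return float(np.sqrt(d @ cov_pinv @ d))

print(mahalanobis(X[0], mu, cov_pinv))
```

The redundant column contributes nothing to the distance, which is exactly the behaviour described above.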
48,157
Mahalanobis Distance on Singular Data
What I would suggest as a solution is the penalized Mahalanobis distance. You can see this blog post for details: http://stefansavev.com/blog/better-euclidean-distance-with-the-svd-penalized-mahalanobis-distance/. You can also check "The Elements of Statistical Learning" by Hastie et al., in particular the sections on ridge regression (it is related), and look up Mahalanobis distance in the index.
48,158
Using lme to analyse a complete randomized block design with repeated measures: Is my model correct?
This is the model I might start with:

fit <- lme(Value ~ Treatment * Year, random = ~1|Block, data = mydata)

I would include the year as a fixed effect, since a temporal trend of biodiversity usually can be expected and it would also be of interest. However, this is guesswork, because I don't know the background of the experiment nor the actual data. Whether the temporal effect can be assumed to be linear, and whether you need the interaction, you would have to judge from your data. Block is clearly a random effect here and is needed to account for repeated measures. Usually with this experimental setup you have treatment plots within your blocks, which often stay the same over the whole measurement period. Then it could be necessary to account for that, too:

fit <- lme(Value ~ Treatment * Year, random = ~1|Block/Plot, data = mydata)

You did not mention what kind of variable Value is. You might need a generalized linear model (look at the family parameter of lme) or need to transform your dependent variable.
48,159
Is it necessary to report the bivariate correlations when reporting logistic regression?
You might want to check the following papers, which discuss how to report findings from logistic regression analysis:

- Reporting results of a logistic regression
- Recommendations for the Assessment and Reporting of Multivariable Logistic Regression in Transplantation Literature
- Logistic regression in the medical literature: Standards for use and reporting, with particular attention to one medical domain

From a meta-analytical point of view, it is always useful to report bivariate statistics. So, I always report them (put a table in the appendix), even if some of the variables are dichotomous. However, this also depends on the field you are working in.
48,160
Is it necessary to report the bivariate correlations when reporting logistic regression?
I am pretty sure that APA6 makes no recommendation on this. If this is for a journal, you should check with them. If they have online appendices, then @Bernd 's idea of putting the correlations in an appendix will almost surely work. If not .... well, in my reading in the social sciences and medicine, I rarely see the correlations reported. Page limits and all that. If this is for a dissertation, it is almost certainly a good idea to put in the correlations.
48,161
Confidence intervals for proportions (prevalence)
You could try a nonparametric bootstrap approach. For example:

require(boot)
the.means <- function(dt, i) { mean(dt[i]) }
boot.obj <- boot(data = mydata, statistic = the.means, R = 10000)
quantile(boot.obj$t, c(.025, .975))

You can repeat this for each of your 6 subsets of data.
48,162
Confidence intervals for proportions (prevalence)
Joe, check to see if (sample size)*(proportion diagnosed) >= 5, and likewise (sample size)*(1 - proportion diagnosed) >= 5, for each hospital or group of hospitals by age/risk score. If so, then the normal distribution closely approximates the binomial distribution, and the 95% CI formula p_hat +/- 1.96*(p_hat*(1-p_hat)/n)^0.5 may be used. For a better approximation, use the Wilson score interval (see http://en.wikipedia.org/wiki/Binomial_proportion_confidence_interval). Robert
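Both intervals are easy to code up; a sketch in Python with hypothetical counts (the Wilson formula follows the linked Wikipedia article):

```python
import math

def wald_interval(k, n, z=1.96):
    """Normal-approximation (Wald) interval for a proportion k/n."""
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p - half, p + half

def wilson_interval(k, n, z=1.96):
    """Wilson score interval; behaves better for small n or extreme p."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half, centre + half

# Hypothetical hospital: 8 diagnoses out of 100 patients.
print(wald_interval(8, 100))
print(wilson_interval(8, 100))
```

Note how the Wilson interval is pulled toward 1/2 and never extends below zero, unlike the Wald interval for rare conditions.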
48,163
Confidence intervals for proportions (prevalence)
Updated Regression Approach

Here's a way that might work. You can "expand" your data to patient level, so each row corresponds to a patient, who is either diagnosed or not. It might look like this:

hospital  age  risk  diagnosed
       1    1     0          1
       1    0     1          0
       1    1     2          1

Then you estimate a binary model, such as a probit, where the dependent variable is the diagnosis indicator and the regressors are dummies for the risk-age group interactions. You may also want to cluster on the hospital. Then you can calculate the predictive margins for each risk-age dummy.

This will not work

You can hack this in a regression context by a simple linear model of $\log(y)$ on a constant, and exponentiating the coefficients and CIs. This will give you the geometric mean and its CI, which is the appropriate mean to use since you are dealing with rates. Since all your $\mu$s are greater than zero, taking logs won't cost you any data. Here's an example in Stata:

. sysuse auto, clear
(1978 Automobile Data)

. generate logprice = log(price)

. regress logprice, eform(GM)

      Source |       SS       df       MS              Number of obs =      74
-------------+------------------------------           F(  0,    73) =    0.00
       Model |           0     0           .           Prob > F      =       .
    Residual |  11.2235331    73  .153747029           R-squared     =  0.0000
-------------+------------------------------           Adj R-squared =  0.0000
       Total |  11.2235331    73  .153747029           Root MSE      =  .39211

------------------------------------------------------------------------------
    logprice |         GM   Std. Err.      t    P>|t|     [95% Conf. Interval]
-------------+----------------------------------------------------------------
       _cons |   5656.907   257.8496   189.56   0.000     5165.664    6194.866
------------------------------------------------------------------------------

. means price

    Variable |    Type             Obs        Mean      [95% Conf. Interval]
-------------+----------------------------------------------------------
       price | Arithmetic          74    6165.257       5481.914     6848.6
             |  Geometric          74    5656.907       5165.664   6194.865
             |   Harmonic          74    5296.672       4928.901    5723.75
------------------------------------------------------------------------

Note that the geometric mean matches the regression output very nicely. I learned about this from Roger Newson's Stata Tip #1.
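The same geometric-mean trick is easy to reproduce outside Stata; a Python sketch (the data values are made up, and the two-sided 95% t critical value for 7 degrees of freedom is hard-coded from tables):

```python
import math
import statistics

y = [4099, 4749, 3799, 4816, 7827, 5788, 4453, 5189]  # illustrative positive values

logs = [math.log(v) for v in y]
n = len(logs)
m = statistics.mean(logs)
se = statistics.stdev(logs) / math.sqrt(n)

t_crit = 2.365                       # 95% t value, n - 1 = 7 df
gm = math.exp(m)                     # geometric mean
ci = (math.exp(m - t_crit * se), math.exp(m + t_crit * se))
```

As in the Stata output, the geometric mean sits below the arithmetic mean, and exponentiating the endpoints of the CI computed on the log scale gives the CI for the geometric mean.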
Probability of visiting all other states before return
This looks like homework so I'm trying to give a hint, not a solution. For part (b), you definitely want to use the structure of the graph. Without loss of generality suppose you start at $12$ and your first step is to $1$. Can you say what the probability is that you hit $11$ before you hit $12$?
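If you want to sanity-check whatever answer you derive, the sub-question in the hint is easy to simulate; a Python sketch (state 12 is coded as 0, so the symmetric walk runs on the integers mod 12):

```python
import random

random.seed(0)

def hits_11_before_return(n_states=12):
    """One run: start at state 1 (a first step away from 12) and walk +/-1
    until reaching state 11 or getting back to state 12 (coded as 0)."""
    pos = 1
    while True:
        pos = (pos + random.choice((-1, 1))) % n_states
        if pos == 11:
            return True
        if pos == 0:
            return False

trials = 20000
est = sum(hits_11_before_return() for _ in range(trials)) / trials
```

The simulated frequency should match the probability you get from a gambler's-ruin argument on the states between 12 and 11.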
K-means Mahalanobis vs Euclidean distance
I haven't understood the type of transformation you used, so my answer will be a general one.

The short answer is: how much you will gain from using the Mahalanobis distance really depends on the shape of the natural groupings (i.e. clusters) in your data.

The choice between Mahalanobis and Euclidean distance in k-means is really a choice between using the full covariance of your clusters or ignoring it. When you use Euclidean distance, you assume that the clusters have identity covariances. In 2D, this means that your clusters have circular shapes. Obviously, if the covariances of the natural groupings in your data are not identity matrices, e.g. in 2D the clusters have elliptical covariances, then using Mahalanobis over Euclidean will model the data much better. You can try both and see whether or not using the Mahalanobis distance gives you a significant gain.

It also depends on what you will do after clustering. Clustering itself is usually not the ultimate purpose; you will probably use the clusters in some subsequent processing. So the choice of Euclidean vs Mahalanobis may be determined by the performance of your subsequent processing.
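One concrete way to get Mahalanobis behaviour out of a Euclidean k-means implementation is to whiten the data first; a numpy sketch (the covariance matrix here is made up to produce elliptical data):

```python
import numpy as np

rng = np.random.default_rng(0)

# elliptical 2-D cloud: strongly correlated coordinates
cov = np.array([[4.0, 1.9], [1.9, 1.0]])
X = rng.multivariate_normal([0.0, 0.0], cov, size=500)

# whitening: x -> L^{-1} x, where S = L L^T is the sample covariance (Cholesky)
S = np.cov(X.T)
L = np.linalg.cholesky(S)
Xw = np.linalg.solve(L, X.T).T

# Euclidean distance between whitened points equals the Mahalanobis distance
# between the original points, so plain k-means on Xw is Mahalanobis k-means
d_white = np.linalg.norm(Xw[0] - Xw[1])
diff = X[0] - X[1]
d_mahal = float(np.sqrt(diff @ np.linalg.solve(S, diff)))
```

Note this uses one pooled covariance for all points; modelling a separate covariance per cluster leads instead to something like a Gaussian mixture model rather than k-means.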
Calculating Log Prob. of Dirichlet distribution in High Dimensions
The p.d.f. of the Dirichlet distribution is defined as $$ f(\theta; \alpha) = B(\alpha)^{-1} \prod_{i=1}^K \theta_i^{\alpha_i - 1} $$ where $B(\alpha)$ is the generalized Beta function. Notice that if any $\theta_i$ is 0, then the whole product is zero. In other words, the support of a Dirichlet distribution is over vectors $\theta$ where each $\theta_i \in (0, 1)$ and $\sum_{i=1}^K \theta_i = 1$. I'm not familiar with Minka's toolkit, but it is bound to have problems with data that includes 0's.

As for the uniform column, I believe those values are correct. Here is the Python code I used to test:

import math

def lbeta(alpha):
    return sum(math.lgamma(a) for a in alpha) - math.lgamma(sum(alpha))

def ldirichlet_pdf(alpha, theta):
    kernel = sum((a - 1) * math.log(t) for a, t in zip(alpha, theta))
    return kernel - lbeta(alpha)

for k in [4, 10, 50, 100, 500, 1000]:
    print k, ldirichlet_pdf([.01] * k, [1.0 / k] * k)

Running this script yields the output:

4 -9.71111566837
10 -20.946493708
50 -35.7564901905
100 -4.03613939138
500 779.669123528
1000 2251.99967563

Now let's generate some more likely vectors from our Dirichlet distribution with K=1000. The code for this is quite simple:

import random

def sample_dirichlet(alpha):
    gammas = [random.gammavariate(a, 1) for a in alpha]
    norm = sum(gammas)
    return [g / norm for g in gammas]

Now if we use this function in combination with our previous ldirichlet_pdf function, we'll see that for K=1000, 2e+3 is a relatively small density. For example, the results of the following code:

alpha = [.01] * 1000
ldirichlet_pdf(alpha, sample_dirichlet(alpha))

yield values between 9.4e+4 and 1e+5.

The key insight here is to realize that you do not need to have values less than 1 in order for the integral to evaluate to 1. A simple example is $\int_0^1 1 \, dx = 1$. It just so happens that for the symmetric Dirichlet with K=1000 and concentration .01, the p.d.f. is greater than 1 everywhere, and yet the integral over the entire support is still 1. In higher dimensions, you'll need a much smaller concentration to get the uniform vector to have a negative log p.d.f. For example, with the concentration at .0001 and K=1000, the uniform vector has a log p.d.f. of around -2.3e+3.
Why is it valid to account for k-1 intercepts w/ only 1 random intercept parameter?
Things are a little more complicated with mixed effects models (as you are realizing). The estimated random effects from the normal model are not exactly the same as the fixed effects that you would compute if you calculated each of the individual intercepts.

The fixed effect model assumes that all the groups have the same variance, but each has its own mean and the means are computed independently of each other (basically the mean/intercept for each group). In the mixed effects model the assumption that the random intercepts come from a normal distribution allows the methods to "borrow" information from all the groups in calculating each intercept, so the individual intercepts tend to be "shrunken" towards the overall mean, an effect that is often referred to as "regression to the mean".

The estimated random effects are not really parameters, but estimates of random variables, so they don't cost the same number of degrees of freedom. But you are correct in being concerned that there should only be a cost of 1 degree of freedom; the true cost is probably somewhere between $1$ and $k-1$. Here is a post that gives some more detail and an additional reference. Of course discussing the degrees of freedom assumes that the ratios follow an F distribution, which is itself questionable in mixed effects models.
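The "borrowing"/shrinkage described above can be illustrated with the textbook formula for the best linear predictor of a random intercept when the variance components are known (all numbers below are made up for illustration, and real mixed-model software estimates the variance components rather than assuming them):

```python
# Shrinkage of group means toward the grand mean with known variance
# components (a simplification of what a mixed-model fit actually does).
group_means = [10.0, 14.0, 9.0, 15.0]   # raw per-group means
n_per_group = 5
sigma2_within = 4.0                     # residual variance (assumed known)
sigma2_between = 1.0                    # random-intercept variance (assumed known)

grand_mean = sum(group_means) / len(group_means)

# shrinkage factor: between-variance / (between-variance + within-variance / n)
lam = sigma2_between / (sigma2_between + sigma2_within / n_per_group)

shrunk = [grand_mean + lam * (m - grand_mean) for m in group_means]
```

Each predicted intercept sits between its raw group mean and the grand mean; as n_per_group grows, lam approaches 1 and the shrinkage disappears.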
Why is it valid to account for k-1 intercepts w/ only 1 random intercept parameter?
As I've studied random vs fixed effects, I also couldn't get my head around how you "estimate" parameters without losing degrees of freedom. But the big difference, as already mentioned, is that the estimation of the random effects is not really estimation in the classical OLS sense. In my understanding it is better to call it "prediction", since we predict the outcome of a random variable rather than estimate a fixed parameter of the given universe. In case you can read German, I could recommend you a book if you like.
Fewer variables have higher R-squared value in logistic regression
You should be careful about relying on R^2 alone when interpreting fit in a non-linear regression; you may want to compare the log-likelihoods instead. That said, a decrease in R^2 as variables are added generally means the variables are interacting in a way that is not providing additional explanation in the model. One of the causes may be, as you point out, that there are issues with intervening variables in the model. If this is the case you may need to find an instrumental variable, or use a structural model.
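Given each model's fitted probabilities, the log-likelihood comparison (and McFadden's pseudo-R^2, one common R^2 analogue for logistic regression) is simple to compute; a Python sketch with made-up outcomes and fitted values chosen to mirror the question's situation:

```python
import math

def log_lik(y, p):
    """Bernoulli log-likelihood of outcomes y under predicted probabilities p."""
    return sum(yi * math.log(pi) + (1 - yi) * math.log(1 - pi)
               for yi, pi in zip(y, p))

def mcfadden_r2(y, p):
    """McFadden pseudo-R^2: 1 - ll(model) / ll(intercept-only model)."""
    p_bar = sum(y) / len(y)
    ll_null = log_lik(y, [p_bar] * len(y))
    return 1 - log_lik(y, p) / ll_null

# illustrative: outcomes plus fitted probabilities from two candidate models
y = [1, 0, 1, 1, 0, 0, 1, 0]
p_small = [0.8, 0.3, 0.7, 0.6, 0.2, 0.4, 0.9, 0.1]   # fewer predictors, sharper fit
p_big   = [0.7, 0.4, 0.6, 0.5, 0.3, 0.5, 0.8, 0.2]   # more predictors, duller fit
r2_small = mcfadden_r2(y, p_small)
r2_big = mcfadden_r2(y, p_big)
```

Here the smaller model assigns better-calibrated probabilities and so gets the higher pseudo-R^2, even though it has fewer variables.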
Bayesian models and exchangeability
You're right but:

More precisely, we should say that $X_1, \ldots, X_n$ are exchangeable under the prior predictive distribution (as well as the posterior).

This fact is elementary (conditionally i.i.d. $\implies$ exchangeability); it does not stem from de Finetti's theorem (that theorem says that exchangeability implies conditionally i.i.d. for an infinite sequence $(X_1, \ldots, X_n, \ldots)$).
Bayesian models and exchangeability
There are a few points worth noting here:

(IID $\implies$ exchangeability): The conditional IID form immediately implies exchangeability of the values. This does not require de Finetti's representation theorem. Stéphane Laurent is right to characterise this as an elementary result (proof below).

(IID $\impliedby$ infinite exchangeability): De Finetti's representation theorem (and its extension by Hewitt and Savage) shows that exchangeability of an infinite sequence implies the conditional IID form. Finite exchangeability is not sufficient to give the conditional IID form, but there are some results showing that it comes close (i.e., finite exchangeability is sufficient to show that the true probabilities pertaining to a set of values are within a particular bound of the conditional IID form).

These results do not require a prior: Both of the above results hold, inter alia, with respect to the sampling distribution of the observable values in the problem, and so they hold without any specification of a prior distribution. Indeed, you do not even have to be working within the Bayesian paradigm at all for these results to be applicable (see O'Neill 2009 for further discussion of this issue).

THEOREM: If $X_1,...,X_n$ are IID conditional on $\theta$ then they are exchangeable.

PROOF: Choose an arbitrary permutation $\pi$ on the set $\{ 1,...,n \}$. Since the values $X_1,...,X_n$ are IID conditional on $\theta$ we have: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \leqslant x_1,...,X_n \leqslant x_n | \theta) &= \prod_{i=1}^n \mathbb{P}(X_i \leqslant x_i | \theta) \\[6pt] &= \prod_{i=1}^n \mathbb{P}(X_{\pi(i)} \leqslant x_i | \theta) \\[6pt] &= \mathbb{P}(X_{\pi(1)} \leqslant x_1,...,X_{\pi(n)} \leqslant x_n | \theta). \\[6pt] \end{aligned} \end{equation}$$ (The second step above follows simply by taking the product of values in the order of the permutation; it follows from the associativity of multiplication.)

We then have: $$\begin{equation} \begin{aligned} \mathbb{P}(X_1 \leqslant x_1,...,X_n \leqslant x_n) &= \mathbb{E}_\theta \bigg[ \mathbb{P}(X_1 \leqslant x_1,...,X_n \leqslant x_n | \theta) \bigg] \\[6pt] &= \mathbb{E}_\theta \bigg[ \mathbb{P}(X_{\pi(1)} \leqslant x_1,...,X_{\pi(n)} \leqslant x_n | \theta) \bigg] \\[6pt] &= \mathbb{P}(X_{\pi(1)} \leqslant x_1,...,X_{\pi(n)} \leqslant x_n), \\[6pt] \end{aligned} \end{equation}$$ which establishes the exchangeability of $X_1,...,X_n$. $\blacksquare$
Bayesian models and exchangeability
No, I think your reasoning is right. Exchangeability was a very important property to de Finetti in his development of probability theory (which is Bayesian). It also is important regarding permutation tests. Often in doing statistical inference we assume observations are independent and identically distributed and this of course implies exchangeability.
How to go about selecting an algorithm for approximate Bayesian inference
At first you have to decide how much time you can afford. If you have a large amount of time for your numerical experiments you can try an MCMC method; in this case it is also possible to avoid complex integrations. If you have a strong background in statistics and you want to integrate a lot, you can try methods like the variational lower bound or expectation propagation. In that case you have to choose a batch of parameters carefully (for example, with a variational lower bound approach you have to select a distribution you can integrate out to replace the initial distribution, so you have to use your intuition, or use a normal distribution). If the problem is new and no other approaches have been tried, you can simply try a Gaussian or Laplace approximation. Also, in many cases you can use a method proposed in the state of the art. For example, all the methods you mention were successfully used for heteroscedastic Gaussian process regression (see, for example, the paper http://www.tsc.uc3m.es/~miguel/papers/vhgpr_icml.pdf from the 2011 ICML conference). P.S. In the ICML 2012 article http://icml.cc/discuss/2012/360.html an interesting and simple method for variational inference was proposed, so you can try it for your problem.
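As a concrete instance of the simplest suggestion above, here is a Laplace approximation for a posterior where the exact answer is known, so its quality can be checked (a Beta posterior from a coin-flip example; the numbers are illustrative):

```python
import math

# Posterior for a coin's bias after 7 heads in 10 flips with a flat prior is
# Beta(8, 4); its log density is l(t) = 7*log(t) + 3*log(1 - t) + const.
a, b = 8, 4

# Laplace approximation: Gaussian centred at the posterior mode with
# variance -1 / l''(mode).
mode = (a - 1) / (a + b - 2)                              # argmax of l, = 0.7
curvature = -(a - 1) / mode**2 - (b - 1) / (1 - mode)**2  # l''(mode)
laplace_var = -1 / curvature

exact_var = a * b / ((a + b) ** 2 * (a + b + 1))          # Beta variance, for comparison
```

The Gaussian N(0.7, 0.021) is close to, though here slightly wider than, the exact Beta(8, 4) posterior (variance about 0.017), which is typical for skewed posteriors.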
How to go about selecting an algorithm for approximate Bayesian inference
I think there is no universal solution, so I will try to give a couple of general pieces of advice. If the problem dimension is high, use MCMC gingerly; in that case other methods seem more helpful. Another point: are the variables you consider independent or not? If they are, you can use expectation propagation or another method that takes this issue into account. You can also test whether your distributions have a close-to-normal form: just plot them, or use a normality hypothesis test. If they are close to normal, you can use a Gaussian or Laplace approximation.
48,175
How do I get a $p$-value from the Cochran-Armitage trend test?
This is just a different definition of the statistic $T$. Call your statistic $T_1$ and the other $T_2$. Note that $T_2 = T_1/N$, which is why the variance of $T_2$ differs from that of $T_1$ by a factor of $1/N^2$. However, you should note that the chi-square statistic is the same in either case: for $T_2$ there is a factor of $1/N^2$ in both the numerator and the denominator that cancels and does not appear in the formula using $T_1$. You use the same test statistic, and hence get the same $p$-value, either way.
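The cancellation can be checked numerically. The sketch below (my own illustration, with an arbitrary made-up value for $T_1$ and its variance) shows that the standardized statistic, and therefore the chi-square value and $p$-value, is unchanged when both $T$ and its variance are rescaled.

```python
import math

# Arbitrary illustrative values for the trend statistic and its variance.
T1, var_T1, N = 12.5, 20.0, 100

# Alternative definition: T2 = T1 / N, with variance scaled by 1 / N^2.
T2 = T1 / N
var_T2 = var_T1 / N**2

# The standardized statistics coincide, so chi-square = z^2 and the
# p-value are identical under either definition.
z1 = T1 / math.sqrt(var_T1)
z2 = T2 / math.sqrt(var_T2)

# Two-sided normal p-values from the two definitions.
p1 = math.erfc(abs(z1) / math.sqrt(2))
p2 = math.erfc(abs(z2) / math.sqrt(2))
print(z1, z2, p1, p2)
```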
48,176
Calculate R-squared with JAGS and R
There are a couple of ways you can do this. The first would be to use the mean of the posterior for each of the $\mu_i$, and calculate a residual using this as the "estimated value" corresponding to $\hat{\beta}X$ in OLS. You then calculate the variance of the residuals as usual and plug it into the $R^2$ calculation. You would do this in R, of course. An alternative would be to use the posterior mean of the variance ($1/\tau$) as the estimate of residual variance in the $R^2$ calculation, again done in R. The former comes closest to how $R^2$ is calculated in classical statistics. No doubt there are other approaches, which (hopefully) others will point out in their answers. However, the bigger issues are a) with $R^2$ as a criterion and b) with comparing OLS estimation to anything else using $R^2$ as the criterion. I'll skip over the first, pointing you to statisticalengineering.com and Andrew Gelman as references. The second issue arises because OLS maximizes $R^2$ (a consequence of the "least squares" property), and therefore no other technique (that is not equivalent to OLS) will generate as high an $R^2$. Consequently, your Bayesian approach is doomed if maximizing in-sample $R^2$ is the criterion of choice. You might use an out-of-sample criterion instead, for example k-fold cross-validation, which would necessitate multiple runs of JAGS on subsets of the data, then comparing the out-of-sample predicted values to the actual values. You can generate the predicted values inside JAGS, as observed in the answer to Missing values in response variable in JAGS, or in R of course. I'll also point out that the dgamma(0.01,0.01) prior has largely fallen out of favor, as it is actually quite informative near zero. The answers to priors for lognormal models might help with that.
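The first approach can be sketched as follows (my own illustration; the fake `mu_post` array stands in for an array of JAGS posterior draws of the $\mu_i$, one row per MCMC sample and one column per observation).

```python
import numpy as np

rng = np.random.default_rng(0)

# Fake data and fake posterior draws of mu_i, standing in for JAGS output.
n = 50
x = rng.normal(size=n)
y = 2.0 + 1.5 * x + rng.normal(scale=0.5, size=n)
mu_post = (2.0 + 1.5 * x) + rng.normal(scale=0.05, size=(1000, n))

# Approach 1: posterior mean of each mu_i as the "estimated value".
fitted = mu_post.mean(axis=0)
resid = y - fitted
r2 = 1.0 - resid.var() / y.var()
print(r2)
```

The second approach would simply replace `resid.var()` with the posterior mean of $1/\tau$.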
48,177
Information gain as a feature selection for 3-class classification problem
Information gain is a reasonable objective to use for selecting features (even when there are multiple classes). Note that information gain is the traditional metric for selecting decision attributes when building decision trees. A classic problem with decision trees is deciding when to stop adding decision nodes---too many nodes usually leads to poor generalization. IG will give you an ordering of features from most useful to least useful; you will need another method (such as evaluation on a hold-out set) to determine a cut-off point. You may be interested in reading A Comparative Study on Feature Selection in Text Categorization (1997), which evaluates IG against other methods. Note that your problem sounds more like ordinal regression (which encodes an ordering in the labels) than regular classification.
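For reference, the information gain of a discrete feature is just the reduction in label entropy after splitting on that feature. A minimal sketch (my own toy example with made-up labels -1/0/+1, not tied to any particular library):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy (bits) of a list of class labels."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """IG(Y; X) = H(Y) - sum_v P(X=v) * H(Y | X=v)."""
    n = len(labels)
    cond = 0.0
    for v in set(feature):
        subset = [y for x, y in zip(feature, labels) if x == v]
        cond += (len(subset) / n) * entropy(subset)
    return entropy(labels) - cond

# Toy 3-class labels (-1, 0, +1) and two binary features.
labels = [-1, -1, 0, 0, 1, 1]
informative = [0, 0, 0, 1, 1, 1]   # tracks the label well
noise = [0, 1, 0, 1, 0, 1]         # unrelated to the label

print(information_gain(informative, labels),
      information_gain(noise, labels))
```

Ranking the features by this quantity and keeping the top ones (cut-off chosen on a hold-out set) is the selection scheme described above.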
48,178
Upper/lower standard error makes sense?
According to your updated question, @onestop's claim is still valid: it is not OK to call them standard errors. Furthermore, the method seems strange and non-standard. What was really done in your case is to divide the sample in two (values above and below the mean) and calculate the standard error of THAT subsample, not of your real population, and I therefore find it personally strange to assign the lengths of the error bars in that way. Apparently the idea was taken from here. However, IMHO, the idea of dividing the sample and calculating an "upper and lower" standard deviation doesn't make much sense (or at least it bothers me).

In physics (my area and apparently yours), however, it has been somewhat standard to show 68% confidence intervals for the sample median or the mean (depending on your choice of a location statistic; let's call this statistic $\bar{X}$ for the moment) in the following way for non-symmetric distributions (apparently emulating what would be a central credible interval): with your data points, you calculate $\bar{X}$ and then report an upper error bar of length $L_u$, where $L_u$ is chosen to satisfy $P(\bar{X}<\mu<\bar{X}+L_u)= 0.34$, where $\mu$ is the real (unknown) parameter. Then, for your lower error bar of length $L_l$, you repeat the same procedure but now downwards of the location statistic $\bar{X}$, i.e., $P(\bar{X}-L_l<\mu<\bar{X})= 0.34$. Of course, because the distribution of $\bar{X}$ is usually not known, this is usually done with non-parametric methods (such as the bootstrap or some variant of it).

As was also pointed out by @onestop, you can instead obtain Bayesian credible intervals, where you actually calculate the probability (density, in the continuous case) of your parameter given your data; let's call this probability $p(x|D)$. The length of the lower error bar is then calculated in a more "natural" way (at least for me), so as to satisfy $P(\hat{x}-L_l<x<\hat{x}|D)=0.34$, and the length of the upper error bar so as to satisfy $P(\hat{x}<x<\hat{x}+L_u|D)=0.34$, where $\hat{x}$ is your point estimate of the parameter (usually the median or even the mode). All of the above, of course, makes sense only if the posterior distribution is unimodal.
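The non-parametric version of the 68% interval can be sketched with a simple percentile bootstrap (my own illustration, with made-up skewed data): resample the data, recompute the median, and take the 16th and 84th percentiles of the bootstrap distribution to obtain asymmetric lower/upper bar lengths.

```python
import numpy as np

rng = np.random.default_rng(42)
data = rng.lognormal(mean=0.0, sigma=0.8, size=500)   # a skewed sample

med = np.median(data)

# Bootstrap the median: resample with replacement many times.
boot = np.array([np.median(rng.choice(data, size=data.size, replace=True))
                 for _ in range(2000)])

# Central 68% of the bootstrap distribution -> asymmetric error bars.
lo, hi = np.percentile(boot, [16, 84])
L_lower, L_upper = med - lo, hi - med
print(med, L_lower, L_upper)
```

For a skewed distribution the two lengths generally differ, which is exactly what the asymmetric bars are meant to convey.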
48,179
Upper/lower standard error makes sense?
It's OK to call them error bars, but as they're asymmetric they are not representing the standard error so it's not correct to talk about 'lower/upper standard error'. I assume the error bars here represent confidence intervals, though they might also be credible intervals if they were constructed using Bayesian methods. Hard to be sure if you're not going to tell us where you got this graph from.
48,180
What is an appropriate method for providing bounds when performing maximum likelihood parameter estimation?
What you are doing in your first code block is indeed equivalent to box-constrained optimisation. Here's some sample code, with some unnecessary output removed to save space:

> foo.unconstr <- function(par, x) -sum(dnorm(x, par[1], par[2], log=TRUE))
>
> foo.constr <- function(par, x)
+ {
+   ll <- NA
+   if (par[1] > 0 && par[1] < 5 && par[2] > 0 && par[2] < 5)
+   {
+     ll <- -sum(dnorm(x, par[1], par[2], log=TRUE))
+   }
+   ll
+ }
>
> x <- rnorm(100,1,1)
> par <- c(1,1)
> optim(par, foo.constr, x=x)
$par
[1] 1.147690 1.077712
$value
[1] 149.3724
>
> par <- c(1,1)
> optim(par, foo.unconstr, lower=c(0,0), upper=c(5,5), method="L-BFGS-B", x=x)
$par
[1] 1.147652 1.077654
$value
[1] 149.3724

They won't give quite the same answers, because they are different algorithms. I'll answer your constrOptim question over there, so other people who might be interested will see it.
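For what it's worth, the same comparison can be sketched in Python with scipy (my own translation of the idea, not part of the original answer): `L-BFGS-B` takes box bounds directly, while the penalty version returns infinity outside the box and is handed to a derivative-free method.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
x = rng.normal(loc=1.0, scale=1.0, size=100)

def nll(par, x):
    """Negative log-likelihood of a Normal(mu, sigma) sample."""
    mu, sigma = par
    return -np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                   - (x - mu)**2 / (2 * sigma**2))

def nll_penalised(par, x):
    """Reject points outside the box (0,5) x (0,5) with an infinite value."""
    mu, sigma = par
    if not (0 < mu < 5 and 0 < sigma < 5):
        return np.inf
    return nll(par, x)

# Box-constrained optimisation.
fit1 = minimize(nll, x0=[1.0, 1.0], args=(x,), method="L-BFGS-B",
                bounds=[(1e-6, 5), (1e-6, 5)])
# Penalty trick with an unconstrained, derivative-free method.
fit2 = minimize(nll_penalised, x0=[1.0, 1.0], args=(x,), method="Nelder-Mead")

print(fit1.x, fit2.x)
```

As in the R example, the two approaches agree to several decimal places but not exactly, because the algorithms differ.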
48,181
How do I calculate a posterior distribution for a Poisson model with exponential prior distribution for the parameter?
$\Pr(\text{data}|\text{model}) =\Pr(N=n|\lambda) = \frac{\lambda^n}{n!}e^{-\lambda}$. $p(\text{model}) = p(\lambda) = e^{-\lambda}$. Hence $p(\lambda|N=n) = \dfrac{\frac{\lambda^n}{n!}e^{-\lambda}\cdot e^{-\lambda}}{\int_0^\infty \frac{\lambda^n}{n!}e^{-\lambda} \cdot e^{-\lambda}\, d\lambda} = 2^{n+1}\frac{\lambda^n}{n!} e^{-2\lambda}, $ which is a Gamma distribution with shape $n+1$ and rate $2$.
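The conjugacy can be checked numerically (my own sketch): normalize $\lambda^n e^{-2\lambda}$ by numerical integration and compare against the Gamma(shape $n+1$, rate $2$) density.

```python
import numpy as np
from math import factorial
from scipy.integrate import quad
from scipy.stats import gamma

n = 4  # observed count

# Likelihood * prior, up to constants: lambda^n * exp(-2 lambda).
unnorm = lambda lam: lam**n * np.exp(-2 * lam)

# Normalizing constant; analytically this is n! / 2^(n+1).
Z, _ = quad(unnorm, 0, np.inf)

# Numerically normalized posterior vs Gamma(shape=n+1, rate=2).
lam_grid = np.linspace(0.01, 10, 200)
numeric = unnorm(lam_grid) / Z
analytic = gamma.pdf(lam_grid, a=n + 1, scale=0.5)   # scipy uses scale = 1/rate

print(Z, np.max(np.abs(numeric - analytic)))
```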
48,182
Selecting features using Adaboost
Well, first of all, in the presentation you mentioned they just used whether the value of one feature is larger/smaller than some threshold (i.e. a decision tree of depth 1) as a partial classifier, hence this feature-classifier ambiguity. Going back to the question, there are numerous ways to get a feature ranking from boosting -- from counting how deep in the boost structure each feature lies up to permutation tests. In the work you quoted there is, in fact, no feature selection -- in training they select the best feature/threshold pair, and in prediction they compute features "on demand" as the prediction goes through the boost, to save computational time.
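A related counting heuristic can be sketched directly (a toy implementation of my own, not the method of the quoted work): run AdaBoost with depth-1 stumps and count how often each feature is chosen as the split variable, which gives a crude ranking.

```python
import numpy as np

def best_stump(X, y, w):
    """Weighted-error-minimising decision stump (feature, threshold, polarity)."""
    best = (0, 0.0, 1, np.inf)  # feature, threshold, polarity, weighted error
    for j in range(X.shape[1]):
        for thr in np.unique(X[:, j]):
            for pol in (1, -1):
                pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
                err = w[pred != y].sum()
                if err < best[3]:
                    best = (j, thr, pol, err)
    return best

def adaboost_feature_counts(X, y, rounds=10):
    """Standard AdaBoost on stumps; return how often each feature was used."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    counts = np.zeros(d, dtype=int)
    for _ in range(rounds):
        j, thr, pol, err = best_stump(X, y, w)
        err = np.clip(err, 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)
        pred = np.where(pol * (X[:, j] - thr) > 0, 1, -1)
        w *= np.exp(-alpha * y * pred)   # upweight misclassified points
        w /= w.sum()
        counts[j] += 1
    return counts

rng = np.random.default_rng(0)
y = rng.choice([-1, 1], size=200)
X = rng.normal(size=(200, 5))
X[:, 2] += 1.5 * y          # make feature 2 informative

counts = adaboost_feature_counts(X, y, rounds=10)
print(counts)
```

Features chosen often (here the informative one) rank high; features never chosen are effectively discarded.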
48,183
Dealing with lots of ties in kNN model
In some situations you have a lot of data items that might be considered tied in distance, especially if your data are discrete (e.g. your matrix is made up of integers). A "hack" that might work is to add very small pseudo-random noise to the data. This will reduce the number of data items that happen to be equidistant. Note that the noise should be as small as possible so as not to bias the results, but large enough to break the ties.
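A sketch of the jitter trick (my own illustration with made-up integer data):

```python
import numpy as np

rng = np.random.default_rng(0)

# Integer-valued data: many points end up at identical distances from a query.
X = rng.integers(0, 3, size=(100, 4)).astype(float)
query = np.zeros(4)

d = np.linalg.norm(X - query, axis=1)
ties_before = len(d) - len(np.unique(d))

# Add tiny noise: large enough to break ties, small enough not to reorder
# genuinely different distances.
X_jittered = X + rng.normal(scale=1e-6, size=X.shape)
d_jittered = np.linalg.norm(X_jittered - query, axis=1)
ties_after = len(d_jittered) - len(np.unique(d_jittered))

print(ties_before, ties_after)
```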
48,184
Dealing with lots of ties in kNN model
I guess that you have ties because you are solving a multi-class problem? This might occur, for instance, if you pick $k=5$ neighbors and your points belong to $1$ out of $3$ possible classes. Suppose a point $x$ has 2 neighbors of class 1, 2 neighbors of class 2 and 1 neighbor of class 3: namely $x_1,x_4\in C_1$, $x_2,x_3\in C_2$ and $x_5\in C_3$, with $$ d(x,x_1)<d(x,x_2)<d(x,x_3)<d(x,x_4)<d(x,x_5). $$ Basically you need to choose whether you pick class $1$ or class $2$. To break the tie you may have to use a different criterion to select the class, such as computing partial sums of distances for each class, $$S_1 = e^{-d(x,x_1)-d(x,x_4)}; \quad S_2 = e^{-d(x,x_2)-d(x,x_3)},$$ and picking the label with the highest sum, i.e. picking class $1$ if $S_1>S_2$. Another possible approach is to decrease your neighbor size $k$ by $1$ until the tie is resolved.
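The tie-break rule above can be written out directly (my own sketch; a per-neighbor variant that sums $e^{-d}$ over each class, with concrete distances filled in for the example):

```python
import math
from collections import defaultdict

def knn_predict_with_tiebreak(dists, labels):
    """Majority vote among neighbors; break vote ties by sum of exp(-distance)."""
    votes = defaultdict(int)
    scores = defaultdict(float)
    for d, c in zip(dists, labels):
        votes[c] += 1
        scores[c] += math.exp(-d)
    top = max(votes.values())
    tied = [c for c, v in votes.items() if v == top]
    return max(tied, key=lambda c: scores[c])

# The example from the answer: classes of the 5 neighbors and (made-up,
# increasing) distances d(x, x1) < ... < d(x, x5).
labels = [1, 2, 2, 1, 3]
dists = [0.1, 0.2, 0.3, 0.4, 0.5]

print(knn_predict_with_tiebreak(dists, labels))
```

With these distances, $S_1 = e^{-0.1}+e^{-0.4} > S_2 = e^{-0.2}+e^{-0.3}$, so class $1$ wins the tie.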
48,185
Dealing with lots of ties in kNN model
I had this problem with some real-world data. Exploring the dataset, I found that there were several hundred rows that all had 0 for the 3 independent variables. I removed these from the input dataset for the kNN, which solved the problem and let the kNN execute. I then imputed the mode of the dependent variable (also 0 in this case) for the hundreds of tied rows. This is probably the same result I would have gotten from adding noise, but it seems to me a cleaner approach, since the random noise might have affected other observations. To summarize: check your data for frequent combinations of the input variables, and handle these separately.
48,186
Minimax estimator for the mean of a Poisson distribution
Define a sequence of prior distributions, $\pi_n = Ga(\lambda|a_n,b_n)$, with $a_n = \alpha/n$ and $b_n = \beta/n$. Since the posterior is $Ga(\lambda|a_n+x,\,b_n+1)$, the Bayes estimator (the posterior mean) for this sequence is $\delta_n = (a_n+x)/(b_n+1)$, and the posterior risk is \begin{gather} r_n = \int^\infty_0 (\lambda-\delta_n)^2\, Ga(\lambda|a_n+x,\,b_n+1)\,d\lambda \end{gather} which reduces to \begin{gather} r_n = \delta_n^2 - 2\delta_n E_n\lambda + E_n\lambda^2 \end{gather} \begin{gather} = \delta_n^2 - 2\delta_n E_n\lambda + V_n\lambda +(E_n\lambda)^2 \end{gather} \begin{gather} = \delta_n^2 - 2\delta_n^2 + \frac{\delta_n}{b_n+1} + \delta_n^2 \end{gather} \begin{gather} = \frac{\delta_n}{b_n+1}. \end{gather} The limiting Bayes estimator is \begin{gather} \delta = \lim_{n\rightarrow\infty}\delta_n=\lim_{n\rightarrow\infty}\frac{a_n+x}{b_n+1} = x \end{gather} and the limiting risk is \begin{gather} r = \lim_{n\rightarrow\infty}\frac{\delta_n}{b_n+1}=\lim_{n\rightarrow\infty}\frac{a_n+x}{(b_n+1)^2}=x. \end{gather} The limiting rule $\delta$ is Bayes with respect to the improper prior $\pi_\infty$. Because the integrated risk $r$ is constant for all $\lambda$, the estimator $\delta = x$ is minimax.
48,187
Minimax estimator for the mean of a Poisson distribution
The MSE risk of the estimator $\widehat \lambda(x)=x$ is its variance, $R(\lambda, \widehat \lambda)=\mathrm{Var}_\lambda(x)=\lambda$, and hence its maximum risk is infinite: $\sup_{\lambda>0}R(\lambda,\widehat \lambda)=\infty$. Obviously, it cannot be minimax under squared-error loss. However, this estimator is minimax with respect to the normalized risk $R(\lambda, \delta)=\mathbb E_\lambda (\delta(x)-\lambda)^2/\lambda$. Note that this risk for the estimator $\widehat \lambda(x)=x$ is constant (equal to $1$), and the extended Bayes approach suggested in the hint works.
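The constancy of the normalized risk is easy to check by simulation (my own sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# E[(x - lambda)^2] / lambda = Var(x) / lambda = 1 for every Poisson rate,
# so the normalized risk of the estimator x is constant in lambda.
risks = []
for lam in (0.5, 2.0, 10.0):
    x = rng.poisson(lam, size=1_000_000)
    risks.append(np.mean((x - lam) ** 2) / lam)

print(risks)
```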
48,188
How to, or what is the best way, to apply propensity scores after matching?
This is a complicated question. Simple nearest neighbor matching pairs each observation in the treatment group with a single person in the control group who has a similar propensity score. Then you compute the difference in outcome $Y$ for each pair, and then calculate the mean difference across pairs. That's your treatment effect. However, it is also possible to match each treated person with multiple untreated folks. Matching using additional nearest neighbors increases the bias, as the next best matches are necessarily worse matches, but decreases the variance, because more information is being used to construct the counterfactual for each treated person. Different matching estimators differ in how they weight the neighbor(s) in calculating this difference. One important question is whether you can pair the same control group person with more than one treated person, essentially recycling them. Matching without replacement can yield very bad matches if the number of comparison observations comparable to the treated observations is small. It keeps variance low at the cost of potential bias, while matching with replacement keeps bias low at the cost of a larger variance since you are using the same info over and over. That is another trade-off. But I digress. Here are some ways to do propensity score matching, in increasing order of complexity: The simplest form of matching is using only one control dude who has the closest propensity score (with or without replacement), and calculating the mean difference for all pairs. Another strategy is to divide the $ps(X)$ into $S$ buckets or intervals. For example, say you have some treated observations with $ps(X)$ between 0.3 and 0.4. Then you take all the control group folks with scores between 0.3 and 0.4 and then use their average $Y$ as the counterfactual. The total treatment effect is $\Sigma_{s}(\bar{Y}_{T=1}-\bar{Y}_{T=0})*w_{s}$, where $w_{s}$ is the fraction of all treated folks in bucket $s$. 
For example, you might start with 10 $PS$ buckets, and they don't need to have the same width. Note that some treated observations may not have any matches! This is known as the common support problem. Yet another way would be to grab all control group members within a fixed radius of treated unit $i$ and use them as the counterfactuals. Call them group $J_{i}$. The treatment effect is $\frac{1}{T}\Sigma_{i}(Y_{i}-\bar{Y}_{J_i})$. The bandwidth problem here takes the form of picking the radius. Kernel matching. Here you weight the control group observations who are further away in PS less heavily, maybe not at all. How do you pick a method? All matching estimators are consistent, because as the sample gets arbitrarily large, the units being compared get arbitrarily close to one another in terms of their characteristics. In finite samples, which one you choose can make a difference. If comparison observations are few, single nearest neighbor matching without replacement is a bad idea. If comparison observations are many and are evenly distributed, multiple nearest neighbor matching will make use of the rich comparison group data. If comparison observations are many but unevenly distributed (check the PS kernel densities for the two groups), kernel matching is helpful because it will use the additional data where it exists, but not take bad matches where it does not exist. One complication is that standard errors don't take into account that you estimated the propensity score (since the real thing is not observed), so they are too small. People either ignore this or bootstrap, which may or may not be a bad idea.
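As an illustration of the simplest scheme above (single nearest neighbor on the propensity score, with replacement), here is a minimal Python sketch on simulated data; the data-generating process and all variable names are my own invention, and the propensity score is taken as known rather than estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4000
x = rng.normal(size=n)                    # a single confounder
ps = 1.0 / (1.0 + np.exp(-x))             # propensity score (assumed known here)
t = rng.random(n) < ps                    # treatment assignment
y = 2.0 * t + x + rng.normal(scale=0.5, size=n)  # true treatment effect = 2

# 1-NN matching with replacement on the propensity score
treated, control = np.where(t)[0], np.where(~t)[0]
nn = np.abs(ps[treated, None] - ps[None, control]).argmin(axis=1)
att = np.mean(y[treated] - y[control[nn]])
print(f"ATT estimate: {att:.2f}  (truth: 2)")
```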
48,189
How to, or what is the best way, to apply propensity scores after matching?
You may want to consider other strategies based on propensity scores, like including them as model covariates, or very similar concepts, like Inverse-Probability-of-Treatment weights. These might work in situations where you can't, or don't want to, deal with matching. This seems like a decent overview.
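As a minimal illustration of the IPTW idea (my sketch, not from the original answer): the Horvitz-Thompson form of the weighting estimator on simulated data with a known propensity score.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20_000
x = rng.normal(size=n)
ps = 1.0 / (1.0 + np.exp(-x))          # propensity score (assumed known here)
t = (rng.random(n) < ps).astype(float)
y = 2.0 * t + x + rng.normal(size=n)   # true average treatment effect = 2

# Inverse-probability-of-treatment weighting (Horvitz-Thompson form)
ate = np.mean(t * y / ps) - np.mean((1 - t) * y / (1 - ps))
print(f"IPW ATE estimate: {ate:.2f}  (truth: 2)")
```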
48,190
How to, or what is the best way, to apply propensity scores after matching?
It is not recommended to include PS as a covariate in an outcome model. You might want to consider a stratified analysis based on strata of the PS.
48,191
MCMC for infinite variance posteriors
There is nothing wrong with infinite variance distributions, per se... For instance, simulating a Cauchy using rcauchy(10^3) produces a sample truly from a Cauchy distribution! Hence MCMC has no specific feature to "fight" for or against infinite variance distributions. The difficulty with infinite variance distributions is at the Monte Carlo level, for instance if you want to compute $$ \mathfrak{I} = \int_0^\infty \sqrt{x} \dfrac{1}{\pi}\dfrac{1}{1+x^2} \,\text{d}x $$ the integral exists (and is finite), but using $$ \dfrac{1}{N} \sum_{i=1}^N \sqrt{|x_i|} $$ when the $x_i$'s are Cauchy leads to an infinite variance estimate. See, e.g., > expl=matrix(abs(rcauchy(10^6)),ncol=1000) > est=apply(expl,2,mean)/2 > quantile(est,c(.9,.99,.999)) 90% 99% 99.9% 6.484375 37.393755 160.869406 which shows that the estimator can get very large! And away from the true value > integrate(function(x){sqrt(x)*dcauchy(x)},low=0,up=Inf) 0.7071078 with absolute error < 2e-05 In this case, you need to use importance sampling.
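To make the closing suggestion concrete, here is a sketch of one possible importance sampling estimator for $\mathfrak I$ (my construction, not from the original post, and in Python rather than R): sample from the proposal density $q(x)=\tfrac12(1+x)^{-3/2}$ on $(0,\infty)$, whose tail is heavy enough that the weights $\sqrt{x}\,\pi^{-1}(1+x^2)^{-1}/q(x)$ stay bounded, so the estimator has finite variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000

# Proposal q(x) = (1/2)(1+x)^(-3/2) on (0, inf); inverse CDF: x = (1-u)^(-2) - 1
u = rng.random(n)
x = (1.0 - u) ** -2 - 1.0

integrand = np.sqrt(x) / (np.pi * (1.0 + x ** 2))   # sqrt(x) * half-Cauchy density
q = 0.5 * (1.0 + x) ** -1.5
w = integrand / q                                   # bounded: -> 2/pi as x -> inf
est = np.mean(w)
print(est)  # ~ 0.7071 = 1/sqrt(2)
```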
48,192
How to determine the marginal pdf, the posterior?
What you get as your bottom line is of the form $$ (\sigma^2) ^{-\alpha-1-nd/2}\exp\{-A\sigma^{-2}\} $$ so the posterior distribution of $\sigma^{2}$ is an inverse gamma distribution. (Note that $$ \text{tr}((\sigma^2\Sigma)^{-1}S)=\sigma^{-2}\text{tr}(\Sigma^{-1}S)\,. $$) From this property, you can derive the normalising constant.
48,193
How to determine the marginal pdf, the posterior?
Note that the normalising constant for an IG variable is $$\frac{b^a}{\Gamma(a)}$$ This is equal to the reciprocal of the integral over $\sigma^{2}$ of the kernel of the pdf. Hence we have $$\int_0^{\infty}(\sigma^{2})^{-(a+1)}\exp\left(-\frac{b}{\sigma^2}\right)d\sigma^2=\frac{\Gamma(a)}{b^a}$$ Your integral is of this form for a certain choice of $a$ and $b$.
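Spelling out the identification for the kernel $(\sigma^2)^{-\alpha-1-nd/2}\exp\{-A\sigma^{-2}\}$ from this question (my reading of the match, for concreteness):

```latex
% Match (\sigma^2)^{-\alpha-1-nd/2} e^{-A/\sigma^2} against (\sigma^2)^{-(a+1)} e^{-b/\sigma^2}:
\[
  a = \alpha + \frac{nd}{2}, \qquad b = A,
\]
\[
  \int_0^{\infty} (\sigma^2)^{-(\alpha + nd/2) - 1}
      \exp\!\left(-\frac{A}{\sigma^2}\right) d\sigma^2
  \;=\; \frac{\Gamma\!\left(\alpha + \tfrac{nd}{2}\right)}{A^{\,\alpha + nd/2}} .
\]
```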
48,194
Resources about probability proportional to size (PPS) sampling method
To me, the ultimate resource on PPS is Brewer and Hanif (1982), Sampling with Unequal Probabilities. Unfortunately, it is nearly impossible to lay one's hands on. It is also highly technical and assumes knowledge somewhere between Lohr (2009), Sampling: Design and Analysis, and Thompson (1997), Theory of Sample Surveys. The latter lists and explains about half a dozen PPS methods (Brewer gives about 50).
48,195
Error exponent in hypothesis testing
Essentially, the answer to your question is that the behavior of $\alpha_n$ and $\beta_n$ is somewhat different when the Bayesian minimum-error-probability rule is used and one is trying to minimize $e_n$. This is because the decision regions $A_n$ and $A_n^c$ are different. In contrast to your (1) and (2), the behavior is of the form $$\begin{align*} -\frac{1}{n}\log \alpha_n &\rightarrow D(P_\lambda||P_1)\\ -\frac{1}{n} \log \beta_n &\rightarrow D(P_\lambda ||P_2) \end{align*}$$ so that $$ \lim -\frac{1}{n} \log e_n = \min\{D(P_\lambda||P_1), \,\,D(P_\lambda||P_2)\}. $$ Since $D(P_\lambda||P_1)$ is an increasing function of $\lambda$ while $D(P_\lambda||P_2)$ is a decreasing function of $\lambda$, choosing $\lambda$ such that $D(P_\lambda||P_1)=D(P_\lambda||P_2)$ gives $C(P_1,P_2)$. All this is described in Chapter 12 of the first edition of Cover and Thomas. Has it been deleted in the second edition since you refer us to Chapter 11 of Cover and Thomas?
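For readers without the book at hand, $P_\lambda$ here is the exponentially tilted (geometric) mixture of $P_1$ and $P_2$, and $C(P_1,P_2)$ is the Chernoff information; these are the standard definitions (added for completeness, not part of the original answer):

```latex
\[
  P_\lambda(x) \;=\; \frac{P_1(x)^{\lambda}\,P_2(x)^{1-\lambda}}
                          {\sum_{x'} P_1(x')^{\lambda}\,P_2(x')^{1-\lambda}},
  \qquad 0 \le \lambda \le 1,
\]
\[
  C(P_1,P_2) \;=\; -\min_{0\le\lambda\le 1}\,
      \log \sum_{x} P_1(x)^{\lambda}\,P_2(x)^{1-\lambda},
\]
and the optimal $\lambda^*$ satisfies
$D(P_{\lambda^*}\|P_1) = D(P_{\lambda^*}\|P_2) = C(P_1,P_2)$.
```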
48,196
Combining p-values for averaging technical protein quantification replicates in python
To combine p-values means to find formulas $g(p_1,p_2, \ldots, p_n)$ (one for each $n\ge 2$) for which $g$ is symmetric in its arguments; $g$ is strictly increasing separately in each variable; and $P=g(P_1,\ldots, P_n)$ has a uniform distribution when the $P_i$ are independently uniformly distributed. Symmetry means no one of the $n$ tests is favored over any other. Strict increase means that each test genuinely influences the combined result in the expected way: when all other tests remain the same but a given p-value gets larger (less significant), then the combined result should get less significant, too. The uniform distribution is a basic property of p-values: it assures that the chance of a combined p-value being smaller than any level $0 \lt \alpha \lt 1$ is exactly $\alpha$. For many situations these properties imply that when the $p_i$ are p-values for independent tests of hypotheses, $g(p_1,\ldots,p_n)$ is a p-value for the null hypothesis that all $n$ of the separate hypotheses are true. Fisher's method is $$g(p_1,\ldots, p_n) = 1 - \frac{1}{(n-1)!}\int_0^{-\log(p_1p_2\cdots p_n)} t^{n-1}e^{-t}dt.$$ Properties (1) and (2) are obvious, while the third property (uniform distribution) follows from standard relationships among uniform and Gamma random variables. I claim the integral can be eliminated, leaving a relatively simple algebraic function of the $p_i$ and their logarithms. To see this, define $$F_n(x) = C_n\int_0^x t^{n-1}e^{-t}dt$$ where $$C_n =\frac{1}{(n-1)!} = \frac{1}{n-1}C_{n-1}$$ (provided, in the latter case, that $n \ge 2$). When $n=1$ this has the simple expression $$F_1(x) = \int_0^x e^{-t}dt = 1 - e^{-x}.$$ When $n\ge 2$, integrate by parts to find $$\eqalign{ F_n(x) &=\left. - C_n t^{n-1} e^{-t}\right|_0^x + (n-1)C_n \int_0^x t^{n-2}e^{-t}dt \\ &= -\frac{x^{n-1} e^{-x}}{(n-1)!} + F_{n-1}(x). }$$ Apply this $n-1$ times to the right hand side until the subscript of $F$ reduces to $1$, for which we have the simple formula shown above. 
Upon indexing the steps by $i=1, 2, \ldots, n-1$ and then setting $j=n-i$, the result is $$F_n(x) = 1 - e^{-x} - \sum_{i=1}^{n-1} \frac{x^{n-i} e^{-x}}{(n-i)!} = 1 - e^{-x} \sum_{j=0}^{n-1} \frac{x^j}{j!}.$$ Write $p=p_1p_2\cdots p_n$ for the product of the p-values. Setting $$x=\log(p) = \log(p_1)+\cdots+\log(p_n)$$ yields $$\eqalign{ g(p_1,\ldots, p_n) &= 1 - F_n(-x) = p \sum_{j=0}^{n-1} \frac{(-x)^j}{j!} \\ &=p_1p_2\cdots p_n \sum_{j=0}^{n-1} (-1)^j \frac{\left(\log(p_1)+\cdots+\log(p_n)\right)^j}{j!}. }$$ This is most useful for its insight into combining p-values, but for small $n$ isn't too shabby a method of calculation in its own right, provided you avoid operations that lose floating point precision or create underflow. (One method is illustrated in the R code below, which computes the logarithms of each term in $g$ rather than computing the terms themselves.) Here are the formulas for $n=2$ and $n=3$, for instance: $$\eqalign{ g(p_1,p_2) &= p_1p_2\left(1 - \log(p_1p_2)\right) \\ g(p_1,p_2,p_3) &= p_1p_2p_3\left(1 - \log(p_1p_2p_3) + \frac{1}{2} \left(\log(p_1p_2p_3)\right)^2\right). }$$ The terms in parentheses are factors (always greater than $1$) that correct the naive estimate that the combined p-value should be the product of the individual p-values. Fisher.algebraic <- function(p) { if (length(p)==1) return(p) x <- sum(log(p)) return(sum(exp(x + cumsum(c(0, log(-x / 1:(length(p)-1))))))) # # Straightforward (but numerically limited) method: # n <- length(p) j <- 0:(n-1) x <- sum(log(p)) return(prod(p) * sum((-x)^j / factorial(j))) } Fisher <- function(p) { pgamma(-sum(log(p)), length(p), lower.tail=FALSE) } # # Compare the two calculations with one example. # n <- 10 # Try `n <- 1e6`; then try it with the straightforward method. p <- runif(n) c(Fisher=Fisher(p), Algebraic=Fisher.algebraic(p)) # # Compare the timing. # For n < 10, approximately, the timing is about the same. 
After that, # the integral method `Fisher` becomes superior (as one might expect, # because of the two sums involved in the algebraic method). # N <- ceiling(log(n) * 5e5/n) # Limits the timing to about 1 second system.time(replicate(N, Fisher(p))) system.time(replicate(N, Fisher.algebraic(p))) # # Show that uniform p-values produce a uniform combined p-value. # p <- replicate(N, Fisher.algebraic(runif(n))) hist(p)
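Since the original question mentions Python: the same combination can be done there (a sketch; to the best of my knowledge SciPy exposes Fisher's method as scipy.stats.combine_pvalues), and it agrees with the algebraic closed form derived above:

```python
import numpy as np
from scipy import stats
from scipy.special import factorial

def fisher_algebraic(p):
    """Closed-form Fisher combination: prod(p) * sum_{j<n} (-log prod(p))^j / j!"""
    p = np.asarray(p, dtype=float)
    x = np.sum(np.log(p))
    j = np.arange(len(p))
    return np.exp(x) * np.sum((-x) ** j / factorial(j))

p = [0.02, 0.10, 0.40]
stat, p_comb = stats.combine_pvalues(p, method='fisher')  # chi-squared version
print(p_comb, fisher_algebraic(p))  # the two agree
```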
48,197
Simultaneous confidence intervals for multinomial parameters, for small samples, many classes?
Glaz and Sison (Journal of Statistical Planning and Inference, 1999) contains formulae for the Sison and Glaz confidence intervals for the MLE, which simulation showed perform quite well, and also some parametric bootstrap confidence intervals, also for the MLEs. I won't try to reproduce the math here, since there's rather a lot of it and it's in the paper anyway.
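For practical computation, the Python package statsmodels implements these intervals (as far as I know, via statsmodels.stats.proportion.multinomial_proportions_confint with method='sison-glaz'); a minimal sketch:

```python
import numpy as np
from statsmodels.stats.proportion import multinomial_proportions_confint

counts = np.array([5, 3, 2, 1, 1, 0, 1])  # small sample, many classes
ci = multinomial_proportions_confint(counts, alpha=0.05, method='sison-glaz')
for c, (lo, hi) in zip(counts, ci):
    print(f"count={c}: [{lo:.3f}, {hi:.3f}]")
```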
48,198
Power analysis for matched poisson variables
The power analysis by simulation is OK; I think what you’re really asking for is a way to compare matched Poisson variables other than the Wilcoxon test or paired t-test. A brute force approach would be: use as test statistic $\sum_i X_{1i} - X_{2i}$; assume $H_0$ (same rate in two groups), estimate the common rate $\lambda$ using pooled data, and simulate N (N = big) times two groups of 22 variables $\sim \mathcal P(\lambda)$ to get an empirical distribution of your test statistic. If your rates are big enough, you could also use the fact that if $X\sim\mathcal P (\lambda)$, then the distribution of $2 \sqrt X$ is approximated by a normal $\mathcal N(2\sqrt\lambda,1)$. This leads to a normal-based test (the distribution of $2\sqrt{X_{1i}} - 2\sqrt{X_{2i}}$ is approximately $\mathcal N(0,2)$ under $H_0$, independently of $\lambda$), and can lead to a nice pen-and-paper computation of the power. The figure below illustrates the (in)accuracy of this approximation for various $\lambda$ (in black, the cdf of $2 \sqrt X$; in red, the normal approximation).
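The brute-force recipe above can be sketched in a few lines (my code, with hypothetical names; it simulates the null distribution of $\sum_i X_{1i}-X_{2i}$ under the pooled rate):

```python
import numpy as np

def matched_poisson_test(x1, x2, n_sim=5000, rng=None):
    """Brute-force test: statistic sum(x1) - sum(x2), with its null
    distribution simulated under a common pooled rate (H0)."""
    if rng is None:
        rng = np.random.default_rng(0)
    stat = x1.sum() - x2.sum()
    lam = np.concatenate([x1, x2]).mean()          # pooled rate estimate
    sims = (rng.poisson(lam, (n_sim, len(x1))).sum(axis=1)
            - rng.poisson(lam, (n_sim, len(x2))).sum(axis=1))
    return np.mean(np.abs(sims) >= np.abs(stat))   # two-sided p-value

rng = np.random.default_rng(1)
x1, x2 = rng.poisson(3.0, 22), rng.poisson(6.0, 22)  # clearly different rates
p_val = matched_poisson_test(x1, x2)
print(p_val)  # should be very small
```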
What is the difference between lifetime risk, lifetime morbid risk, and lifetime prevalence, and lifetime cumulative incidence?
These terms describe various longitudinal measures of disease severity, with units of time or occurrences in the numerator and denominator of the quantity measured. Consider herpes as an example. Someone experiencing an outbreak of herpes at least once in their life contributes one event to the numerator of lifetime risk, regardless of the number of recurrences or whether the total outbreak time is a day or a year. Morbid risk refers specifically to disease or other unfavorable outcomes (aside from mortality), whereas some other risk outcomes might be favorable, like adoption of a child. Lifetime prevalence is the proportion of time someone spends in a specific outcome state. Returning to the herpes example, this measure would differ between an individual experiencing a one-day outbreak (defined as visible open sores) and a one-year outbreak. It has total person-time in the morbid state in the numerator and total person-time observed in the denominator. Cumulative incidence measures repeated binary outcomes like hemorrhagic strokes or stubbed toes. The number of events is in the numerator and person-time at risk is in the denominator.
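The three measures can be made concrete with a toy calculation. Everything below (the three hypothetical people and their follow-up numbers) is invented purely for illustration.

```python
# Toy follow-up data (invented): per-person years observed, years spent
# in the morbid state, and number of lifetime events.
people = [
    {"observed": 70.0, "morbid": 0.5, "events": 1},
    {"observed": 80.0, "morbid": 0.0, "events": 0},
    {"observed": 65.0, "morbid": 2.0, "events": 3},
]

# Lifetime risk: proportion of people with at least one event;
# time and recurrences are ignored.
lifetime_risk = sum(p["events"] > 0 for p in people) / len(people)

# Time-based lifetime prevalence: person-time in the morbid state
# over total person-time observed.
prevalence = sum(p["morbid"] for p in people) / sum(p["observed"] for p in people)

# Incidence rate: events in the numerator, person-time at risk in the
# denominator (approximated here by total observed time).
incidence_rate = sum(p["events"] for p in people) / sum(p["observed"] for p in people)
```

Note how the same person-time denominator serves both the prevalence (time in state) and the incidence rate (event counts), while lifetime risk ignores time entirely.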
What is the difference between lifetime risk, lifetime morbid risk, and lifetime prevalence, and lifetime cumulative incidence?
Since you asked for a reference regarding the terms: I use Porta's "A Dictionary of Epidemiology" when I need to look up epidemiological terms. I found it through one of Rothman's references in his Epidemiology: An Introduction, where he uses Porta's definition of cohorts. I don't think either book covers "lifetime" in any detail, but I looked into Porta, which I like. Unfortunately I can't recommend Rothman's introductory book, but I think I've seen a lot of people praise his Modern Epidemiology; I haven't bought it... yet... The following are from Porta:

Lifetime risk
The risk to an individual that a given health effect will occur at any time after exposure, without regard for the time at which that effect occurs.

Lifetime morbid risk
I couldn't find this as a definition. I would look into the different definitions and combine them, but it seems like a strange measure to use when you have the lifetime incidence & prevalence. I guess, as @Adam Omidpanah notes, the key is that morbidity is an unfavorable event.

Lifetime prevalence
The proportion of individuals who have had the disease or condition for at least part of their lives at any time during their life-course.
$Prevalence = \frac{number\ of\ cases}{population\ at\ risk}$

Lifetime cumulative incidence
I couldn't find a clear definition for "lifetime", but from cumulative incidence you can extract this definition: the number or proportion of a group (cohort) of people who experience the onset of a health-related event during a life-course.
$Incidence\ rate = \frac{number\ of\ new\ events}{time \times population\ at\ risk}$

When you have a lifetime the time is irrelevant, but you may experience issues with events that can recur and therefore get a nonsense number. Usually you use the first event (e.g. the first episode of low-back pain) as your counter in these cases, but it's much better to use events/person/time.
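The recurrence issue in the last paragraph is easy to see numerically. Here is an entirely made-up toy cohort in which one person has three episodes:

```python
# Toy cohort for a recurrent event (e.g. episodes of low-back pain):
# (years observed, list of event times in years) per person.
cohort = [
    (10.0, [2.0, 5.0, 7.5]),  # three episodes
    (10.0, []),               # never affected
    (8.0, [3.0]),             # one episode
]

# Counting first events only keeps cumulative incidence a proportion:
cum_incidence = sum(1 for _, events in cohort if events) / len(cohort)

# Counting every recurrence gives an event rate per person-year instead:
total_events = sum(len(events) for _, events in cohort)
total_time = sum(years for years, _ in cohort)
rate = total_events / total_time
```

Counting all four events against three people would give 4/3 — the "nonsense number" above — which is why the recurrent-event version puts person-time in the denominator.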