Dataset schema (column : type : value/length range):
  idx               : int64   : 1 to 56k
  question          : string  : lengths 15 to 155
  answer            : string  : lengths 2 to 29.2k
  question_cut      : string  : lengths 15 to 100
  answer_cut        : string  : lengths 2 to 200
  conversation      : string  : lengths 47 to 29.3k
  conversation_cut  : string  : lengths 47 to 301
48,101
Relation of slopes of predictors when they are correlated in linear regression
It can be even worse. Suppose $X_{1}$ and $X_{2}$ are not linearly related but still are causally related, so that $X_{1}$ exerts an impact on the target $Y$ via $X_{1} \to X_{2} \to Y$. The measured correlation between $X_{1}$ and $X_{2}$ doesn't have to be large, and the regression won't suffer from traditional mul...
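A minimal R sketch of this scenario, on hypothetical simulated data (the quadratic first link is an assumption chosen so the chain is causal but nearly uncorrelated):

# Sketch, not from the original answer: simulate X1 -> X2 -> Y with a
# nonlinear (hence weakly correlated) first link, then fit both predictors.
set.seed(1)
n  <- 1000
x1 <- rnorm(n)
x2 <- x1^2 + rnorm(n, sd = 0.5)   # causally driven by x1, yet cor(x1, x2) ~ 0
y  <- 2 * x2 + rnorm(n)
cor(x1, x2)                        # small: no classical multicollinearity flag
summary(lm(y ~ x1 + x2))           # yet x1's marginal effect runs through x2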
48,102
Is it valid to log-transform percentages?
First, the data do not need to be normal; the residuals of the model do (at least for ordinary least squares regression). Second, it is certainly possible to change a percentage to a log, as long as there are no values of 0%. But is that what you want? Say exports were 200 in 2010 and 205 in 2011. Then growth as a % is 205/200 *...
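Continuing with the answer's own numbers (exports of 200 in 2010, 205 in 2011), the two quantities being contrasted:

# Growth as a percentage vs. the log growth rate, using the answer's numbers.
growth_pct <- 205 / 200 * 100 - 100   # percentage growth: 2.5
log_growth <- log(205 / 200)          # log growth rate: ~0.0247
c(growth_pct, log_growth)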
48,103
How should sampling ratios to estimate quantiles change with population size?
For the order of the sample size, there is a direct reference here (with big-Theta notation): in order to estimate the quantiles with precision $\varepsilon n$, with probability at least $1 - \delta$, a sample of size $\Theta ( \frac{1}{\varepsilon^2} \log \frac{1}{\delta} )$ is required, where $0 < \delta < 1$. But I think t...
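A sketch of the bound in R; note the Theta notation leaves the constant factor unspecified, so c = 1 below is an arbitrary placeholder, not part of the result:

# Sample size of order (1/eps^2) * log(1/delta); constant c is a placeholder.
quantile_sample_size <- function(eps, delta, c = 1) {
  ceiling(c / eps^2 * log(1 / delta))
}
quantile_sample_size(eps = 0.01, delta = 0.05)  # independent of population size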
48,104
Maximum Entropy and Multinomial Logistic Function
MaxEnt is a method for designing models, whereas SoftMax is a model in itself. MaxEnt is a method that describes an observer's state of knowledge about some system and its variables. For instance, if I'm interested in studying some situation depending only on one real parameter $x$ and I know (from experimental data o...
48,105
Maximum Entropy and Multinomial Logistic Function
You should compare maximum entropy with maximum likelihood, not Multinomial Logistic Regression. The duality of maximum entropy and maximum likelihood is an example of the more general phenomenon of duality in constrained optimization. Berger, A. L., Pietra, V. J. D., & Pietra, S. A. D. (1996). A maximum entropy ...
48,106
Why so many large p-values when I repeat an experiment?
If the null hypothesis is true, you expect a uniform distribution. If the null hypothesis is not true, then you'd expect more small P values. But you have more high P values, which is strange. Two ideas: Are you computing one-tailed P values? If so, and the actual effect is in an opposite direction to the hypothesized ...
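The uniform null distribution of p-values is easy to see by simulation:

# Under a true null, p-values from a t-test are uniform on (0, 1).
set.seed(1)
p <- replicate(5000, t.test(rnorm(20), rnorm(20))$p.value)
hist(p, breaks = 20)  # approximately flat; a pile-up of large p-values is odd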
48,107
Hausman test: the larger the sample the more significant the Hausman test statistic?
First, for your question about the relationship between the variance-covariance matrix and the s.e.: the variance-covariance matrix is a symmetric matrix whose off-diagonal elements contain the covariances between all the betas in the model. The main diagonal elements contain the variance of each beta. If you take the square root of...
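The diagonal/square-root relationship, sketched in R on a built-in dataset:

# The s.e. of each coefficient is the square root of the corresponding
# diagonal element of the variance-covariance matrix.
fit <- lm(mpg ~ wt + hp, data = mtcars)
sqrt(diag(vcov(fit)))   # matches the Std. Error column of summary(fit)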
48,108
What does it mean to use a normalizing factor to "sum to unity"?
Unity just means 1, so they have presumably normalized their values so that they all sum to 1 instead of whatever their "natural" total is. I could imagine a few specialized normalization schemes, but this is typically done by dividing, and that's what I would assume in the absence of a more detailed description. If th...
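The division the answer assumes, in one line of R:

# Normalizing by division so the values sum to unity (i.e., to 1).
x <- c(2, 3, 5)
w <- x / sum(x)   # 0.2 0.3 0.5
sum(w)            # 1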
48,109
What does it mean to use a normalizing factor to "sum to unity"?
A natural application is conditional probabilities. If I roll a die, the unconditional probability of each outcome is ${1 \over 6}.$ But suppose I roll it and tell you that the outcome is at least 4. You can find the new conditional probabilities for rolls of 4, 5, or 6 by dividing ${1 \over 6}$ by ${1 \over 2}$ for ea...
48,110
Should I treat these ordinal IVs as covariates or factors, in a regression?
The distinction between a “factor” and a “covariate” is related to the nature of the predictor/independent variable. A factor is a nominal variable that can take a number of values or levels and each level is associated with a different mean response on the dependent variable. Even if the factor is coded using numbers,...
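A sketch of the practical difference on hypothetical data: the same ordinal predictor entered as a covariate (one slope) versus as a factor (one mean per level):

# Covariate vs. factor coding of an ordinal predictor (simulated data).
set.seed(1)
dose <- sample(1:4, 100, replace = TRUE)
y    <- 0.5 * dose + rnorm(100)
fit_covariate <- lm(y ~ dose)          # 1 df: assumes equally spaced effects
fit_factor    <- lm(y ~ factor(dose))  # 3 df: a separate mean per level
anova(fit_covariate, fit_factor)       # tests the linearity restriction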
48,111
Classification of Huge number of classes
More than 100 classes shouldn't be a problem for most classification algorithms. However, if that number increases you should start thinking about new models for large-scale (in this case, in the number of classes) classification. You can probably find some hints in this (somewhat old) workshop about large-scale (hierarchi...
48,112
Classification of Huge number of classes
For that many classes, and classes with very few samples, I would try triplet loss. For every sample, choose another sample of the same class and one from a different class, and train with the goal of minimizing the distance between samples of the same class while maximizing the distance between samples of different classes. You create a cluster space where...
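In the usual notation (the margin $\alpha$ and embedding distance $d$ are the standard choices, not specified in the answer), the objective this describes is $$\mathcal{L}(a, p, n) = \max\big(d(a,p) - d(a,n) + \alpha,\ 0\big),$$ where $a$ is the anchor sample, $p$ a positive (same-class) sample, and $n$ a negative (different-class) sample.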
48,113
Can Agresti-Coull binomial confidence intervals be negative?
The lower limit of the formula from your link cannot be negative. But the interval from your link is not the Agresti-Coull interval; it is the Wilson interval. The formulas from your link are for the so-called Wilson interval and not the Agresti-Coull interval. Agresti and Coull list the formulas from your link in thei...
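A sketch comparing the two intervals, using the textbook formulas (the example counts are hypothetical); it shows the Agresti-Coull lower limit going negative where the Wilson limit cannot:

# Wilson vs. Agresti-Coull intervals from their standard formulas.
binom_cis <- function(x, n, conf = 0.95) {
  z <- qnorm(1 - (1 - conf) / 2)
  phat <- x / n
  # Wilson: center and half-width
  center <- (phat + z^2 / (2 * n)) / (1 + z^2 / n)
  half   <- z / (1 + z^2 / n) * sqrt(phat * (1 - phat) / n + z^2 / (4 * n^2))
  # Agresti-Coull: adjusted proportion, Wald-style half-width
  ptilde  <- (x + z^2 / 2) / (n + z^2)
  ac_half <- z * sqrt(ptilde * (1 - ptilde) / (n + z^2))
  rbind(wilson        = c(center - half, center + half),
        agresti_coull = c(ptilde - ac_half, ptilde + ac_half))
}
binom_cis(0, 10)   # AC lower limit < 0; Wilson lower limit = 0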
48,114
How to balance classification?
If it is only 70%-30% there is probably no need to balance the dataset. The class imbalance problem is caused by not having enough patterns for the minority class, rather than a high ratio of positive to negative patterns. Generally, if you have enough data, the "class imbalance problem" doesn't arise. Also, note tha...
48,115
ncvTest from R and interpretation
This test is more prominently known as the Breusch-Pagan test. It is a test for heteroscedasticity. In a standard linear model, the variance of the residuals is assumed to be constant (i.e. independent) over the values of the response (fitted values). In your specific case, there is some evidence for a non-constant varianc...
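A minimal usage sketch of ncvTest() from the car package:

# ncvTest() applied to a fitted linear model.
library(car)
fit <- lm(mpg ~ wt + hp, data = mtcars)
ncvTest(fit)   # small p-value = evidence of non-constant error variance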
48,116
Why is it called white noise?
White noise is a signal (e.g., a sound or image) that has approximately equal power in every frequency band. In other words, its power spectral density (PSD), or power spectrum, is flat. (If you're unfamiliar, the PSD/Power Spectrum/Spectrum is a plot showing the spectral content of a signal; that is, it shows the amount...
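The flat spectrum is easy to see with base R's spectrum(); an AR(1) series is included for contrast:

# White noise has a flat power spectrum; an AR(1) series does not.
set.seed(1)
par(mfrow = c(1, 2))
spectrum(rnorm(1024), main = "white noise")                # roughly flat
spectrum(arima.sim(list(ar = 0.9), 1024), main = "AR(1)")  # power at low freq.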
48,117
Evaluate statistical significance of difference between outcomes of tests
McNemar's test is the two by two comparison. NB You want to record for tests #1 & #2 how many patients tested +ve in both #1 & #2, how many +ve in #1 but -ve in #2, how many -ve in #1 but +ve in #2, & how many tested -ve in both #1 & #2.
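A sketch of exactly the layout the answer describes, with hypothetical counts:

# Patients cross-classified by the outcomes of test #1 and test #2.
tab <- matrix(c(30, 9,    # test1 +ve: (test2 +ve, test2 -ve)
                 4, 57),  # test1 -ve: (test2 +ve, test2 -ve)
              nrow = 2, byrow = TRUE,
              dimnames = list(test1 = c("+ve", "-ve"),
                              test2 = c("+ve", "-ve")))
mcnemar.test(tab)   # uses only the discordant cells (9 and 4)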
48,118
Evaluate statistical significance of difference between outcomes of tests
I was going to suggest the chi-squared test, but I think that your suggestion of McNemar's test would be better. A related topic would be Fisher's exact test: https://en.wikipedia.org/wiki/Fisher%27s_exact_test.
48,119
Evaluate statistical significance of difference between outcomes of tests
Perform a McNemar test for independence in a 2x2 table
48,120
Evaluate statistical significance of difference between outcomes of tests
Paired proportions have traditionally been compared using McNemar's test but an exact alternative due to Liddell (1983) is preferable. Useful links: www.statsdirect.com/help/default.htm#chi_square_tests/mcnemar.htm freesourcecode.net/matlabprojects/68089 jech.bmj.com/content/37/1/82.abstract
48,121
What is a Hypergeometric distribution where the last event is a success?
You're thinking of the negative hypergeometric distribution. The top result in a search led to this description: A negative hypergeometric distribution often arises in a scheme of sampling without replacement. If in the total population of size $N$, there are $M$ "marked" and $N-M$ "unmarked" elements, and if the samp...
48,122
What is a Hypergeometric distribution where the last event is a success?
I'm far from a distributional connoisseur, but it seems to me there is no need for a special distribution. The hypergeometric distribution is for sampling without replacement and will work here. In your notation, $N$ is the population of balls containing $B$ black balls ("successes"). You draw a sample of $n$ balls. The proba...
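The base-R hypergeometric functions the answer alludes to, with hypothetical numbers in the answer's notation:

# N balls in total, B black; draw n without replacement.
# dhyper() gives P(sample contains exactly b black balls).
N <- 20; B <- 7; n <- 5; b <- 2
dhyper(b, m = B, n = N - B, k = n)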
48,123
Looking for a test for shape comparison
One thing I might do is some sort of local smoothing? I assume the smallest jitter would be noise that you don't want to influence your analysis. Not sure if scaling both series or subtracting out their means might help too. I'd follow up by computing their cross-correlation, perhaps?
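A sketch of that pipeline on hypothetical series (smooth, standardize, cross-correlate); the particular smoother and span are arbitrary choices:

# Two series with the same shape, one shifted; smooth with lowess, then ccf.
set.seed(1)
t <- 1:200
a <- sin(t / 15) + rnorm(200, sd = 0.2)
b <- sin((t - 5) / 15) + rnorm(200, sd = 0.2)
a_s <- lowess(t, a, f = 0.1)$y            # local smoothing
b_s <- lowess(t, b, f = 0.1)$y
a_z <- (a_s - mean(a_s)) / sd(a_s)        # standardize both series
b_z <- (b_s - mean(b_s)) / sd(b_s)
ccf(a_z, b_z)                             # peak away from lag 0 = shift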
48,124
Looking for a test for shape comparison
Check out EDMA (Euclidean distance matrix analysis); it's used for biological shape comparison and uses a nonparametric bootstrap of the differences in the coordinates between shapes. Here is a link to the author's site about the text on the subject http://getahead.psu.edu/purplebook_new.html and the actual softwar...
48,125
Need help finding UMVUE for a Poisson Distribution
(a) As I mentioned in a comment, you should focus on the parameter of interest $\theta$; it is not good to write formulas containing both $\theta$ and $\lambda$. Following this, it is routine to get the log-likelihood (denote $\sum X_i$ by $T$ and omit terms which don't contain $\theta$): $$\ell(\theta) = n\log\theta ...
48,126
Need help finding UMVUE for a Poisson Distribution
How about the indicator function: $g(X_1)=I_{(X_1=0)}=\begin{cases} 1 & \text{if } X_1=0 \\ 0 & \text{otherwise} \end{cases}$
48,127
Need help finding UMVUE for a Poisson Distribution
$x_1,...,x_n\sim Pois(\lambda)$ (a) We want to estimate $\theta=e^{-\lambda}$, which is exactly $P(x_1=0)$. We take $T(x)=I\{x_1=0\}$ as an estimator. It might look dumb, but it is an unbiased estimator: $$E[T(x)]=1\cdot P(x_1=0) + 0\cdot P(x_1 \neq 0)=P(x_1=0)=\frac{e^{-\lambda}\cdot\lambda^0}{0!}=e^{-\lambda}=\theta$...
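Supplementing the truncated derivation with the standard Rao-Blackwell / Lehmann-Scheffé step: conditional on the complete sufficient statistic $T=\sum_{i=1}^n X_i = t$, one has $X_1 \mid T=t \sim \mathrm{Binomial}(t, 1/n)$, so $$E\big[I\{X_1=0\} \mid T=t\big] = P(X_1=0 \mid T=t) = \left(1-\frac{1}{n}\right)^{t},$$ and hence the UMVUE of $\theta = e^{-\lambda}$ is $\left(\frac{n-1}{n}\right)^{T}$.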
48,128
Binary Classifier with training data for one label only
This is actually a widespread situation, for example in industrial quality control, you want to decide whether a batch of product is fit for sale. Also medical diagnosis (if it isn't a differential diagnosis) often faces the same problem. So-called one-class or unary classifiers address this. The idea is to model the "...
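A sketch of a one-class classifier using the e1071 package (the data and the nu value are hypothetical):

# Train on the "normal" class only, then flag new points as in/out of class.
library(e1071)
set.seed(1)
train <- matrix(rnorm(200 * 2), ncol = 2)        # only one class observed
fit   <- svm(train, type = "one-classification", nu = 0.05)
test  <- rbind(c(0, 0), c(6, 6))                 # typical vs. outlying point
predict(fit, test)                               # TRUE = looks like the class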
48,129
Binary Classifier with training data for one label only
If I understood you correctly, you have many data points for class A (auth.) and almost none for class B (imposter) in your (randomly chosen?) training set? From Wikipedia (Pseudocount): In any observed data set or sample there is the possibility, especially with low-probability events and/or small data sets, of a possible e...
48,130
Word entropy / frequency in human speech
This is a surprisingly frustrating thing to pin down. Shannon looked at this in one of the earliest information theory papers (Shannon, 1951) and estimated the entropy of printed text at around 1 bit/character, using a neat 'guessing game' paradigm. In the same paper, he estimates the entropy of a word at around 12 bit...
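The plug-in (unigram) estimate being discussed, sketched on a toy frequency table; a real estimate needs a large corpus and care with unseen words:

# Empirical word entropy in bits per word from hypothetical counts.
counts <- c(the = 50, of = 30, and = 25, entropy = 1, corpus = 1)
p <- counts / sum(counts)
-sum(p * log2(p))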
48,131
Word entropy / frequency in human speech
The best answer I can give you: http://books.google.com/ngrams Pros: As you can see, $p(x)$ is in fact $p(x,t)$; I think there are a lot of interesting (or fun) things to do with this information. (What happened to parentheses in the 17th century? http://books.google.com/ngrams/graph?content=%5B%28%5D%2C%5B%29%5D&year_start=...
48,132
Generate distribution based on descriptive statistics
You must specify a model. You cannot estimate the model or generate a distribution function given the summary statistics. If you had the data, you could at best do non-parametric estimation, e.g. bootstrap or density estimation. Without the actual data you cannot do any non-parametric procedure--you must specify a para...
48,133
Generate distribution based on descriptive statistics
If you just want a distribution that looks approximately normal and satisfies your descriptive stats, here is one possible approach. Start with a normally distributed sample of 148 numbers and apply a series of transformations to (approximately) satisfy the descriptive stats. Of course, there are many distributions tha...
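A sketch of the first transformation step (the target mean and SD here are placeholders, not from the question):

# Force a sample of 148 normals to hit a target mean and SD exactly.
set.seed(1)
n <- 148
target_mean <- 100; target_sd <- 15   # hypothetical targets
x <- rnorm(n)
x <- (x - mean(x)) / sd(x) * target_sd + target_mean
c(mean(x), sd(x))   # matches the targets; min/max etc. need further tweaks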
48,134
Generate distribution based on descriptive statistics
You could use a mixture of normals. Choose the smallest number of components which gets you close enough to the distribution you have in mind. "Close enough" is a matter for your judgement. Here's an example.

# Parameters of the mixture
p1 = 0.6
m1 = 95
s1 = 6
m2 = 103
s2 = 26

# Number of obs.
n = 148

# Draw the comp...
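The truncated draw step might be completed along these lines (a sketch consistent with the visible parameters above, not the author's original code):

# Completion sketch: draw component labels, then draw each observation
# from its component; uses p1, m1, s1, m2, s2, n defined above.
set.seed(1)
comp <- rbinom(n, 1, p1)                                  # 1 = first component
x <- ifelse(comp == 1, rnorm(n, m1, s1), rnorm(n, m2, s2))
c(mean(x), sd(x))                                         # compare with targets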
48,135
Meta-analysis and homogeneity -- what did these guys do?
One of the meta-analytic techniques for sensitivity analyses is known as "one study removed", and it means exactly that: what effect does each single included study have on the overall effect estimate? I haven't had a chance to look at the paper, but I can tell you from the description that the authors don't fully unders...
48,136
Meta-analysis and homogeneity -- what did these guys do?
The more sophisticated underlying problem is this: are the apparent study-level or specification-level random effects approximately normal? Now consider the following hypotheses: (1) There are no paper/specification-level random effects - all of the variance in the estimates across studies is a result of within-study ...
48,137
How do I calculate sample size so I can be confident that the sample mean approximates the population mean?
"For example, for a population of 1,000,000 with a mean of 0.90 and a population standard deviation of 1.32 I would need a sample n to be 99% confident that the sample mean is within 1% of the population mean." Okay. Sampling would be without replacement. With a million in the population? To a first approximation, it ...
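A sketch with the question's numbers, using the standard margin-of-error formula; reading "within 1%" as an absolute margin of 1% of the mean is an assumption of this sketch, and a finite-population correction is added for sampling without replacement:

# n0 = (z * sigma / E)^2, then corrected for the finite population.
N <- 1e6; mu <- 0.90; sigma <- 1.32
E <- 0.01 * mu                 # margin of error: 0.009 (assumed interpretation)
z <- qnorm(1 - 0.01 / 2)       # 99% confidence
n0 <- (z * sigma / E)^2        # infinite-population approximation
n  <- n0 / (1 + n0 / N)        # finite-population correction
ceiling(c(n0, n))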
48,138
How to deal with an unavoidable correlation between two independent variables?
The first question to ask is: do you actually need to care? If you're just trying to predict the cost of future lunches, then this isn't really an issue. On the other hand, if you're trying to assess the relative contributions of Class #1 and Class #2 students to the cost, then collinearity is a bigger problem. In a w...
48,139
How to deal with an unavoidable correlation between two independent variables?
Based on the fact that it's the average age of Class 2 vs. Class 1 that (you hypothesize) may matter, you could try a model where the response is Lunch Cost, and the predictors are:
- a factor for whether a student is in class 1 or class 2
- the student's age
This way, you can ask whether age matters, and whether belongi...
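A sketch of that model on simulated data (all variable names hypothetical):

# Class 2 students are older; cost is driven by age only.
set.seed(1)
class <- factor(rep(c(1, 2), each = 30))
age   <- c(rnorm(30, 8), rnorm(30, 10))
cost  <- 2 + 0.5 * age + rnorm(60, sd = 0.5)
summary(lm(cost ~ class + age))  # age matters; class adds little beyond age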
48,140
Why does MICE fail to impute multilevel data with 2l.norm and 2l.pan?
This is a bug in mice 2.15 and before. mice.impute.2l.norm() and mice.impute.2l.pan() will fail if the cluster variable is a factor. Use as.integer(dfr$group) as a temporary fix in your data. I will address the issue in a future release. Thanks for your persistence.
48,141
Latent variables in Bayes nets with no physical interpretation
The only reasonable answer to me seems to be that latent variables are the parameters of a distribution written as if they were real variables, while having no physical interpretation. Bishop is always very precise and clear, so I wonder why this time he didn't use the single word "parameters", which would have been enlighte...
48,142
Latent variables in Bayes nets with no physical interpretation
First, note that observed variables and latent variables both have probability distributions; parameters are fixed. A helpful example can be found in Koller and Friedman's PGM textbook. Note that incorporating the latent variable H in the left-hand model reduces the parameter space of the overall graphical...
48,143
Hot deck imputation: validity of double imputation and selection of deck variables for a regression
Hot deck is often a good idea for obtaining sensible imputations, as it produces imputations that are draws from the observed data. However, filling in a single value for the missing data produces standard errors and P values that are too low. For correct statistical inference one could use multiple imputation. It is easy to ap...
48,144
K-L divergence is 0 for clearly different distributions. Why?
I suspected numerical instability of some sort. What appears to be happening is that because the ranges of A and D are so large, the densities at each data point are very small. It appears as if KLdiv cuts off low densities at 1e-4 by default (this can be changed, but I don't know if you'll introduce problems that wa...
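A hedged sketch of lowering the cutoff; passing a two-column matrix of density values to flexmix's KLdiv, and its eps argument, are assumptions of this sketch based on the behaviour the answer describes:

# Evaluate both densities on a common grid, then vary KLdiv's cutoff.
library(flexmix)
set.seed(1)
A <- rnorm(1000, 0, 1); D <- rnorm(1000, 0, 50)   # very different ranges
dens <- cbind(density(A, from = -200, to = 200, n = 512)$y,
              density(D, from = -200, to = 200, n = 512)$y)
KLdiv(dens)               # default eps = 1e-4 truncates the tiny densities
KLdiv(dens, eps = 1e-12)  # a smaller cutoff reveals the difference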
48,145
Index plot for each cluster sorted by the silhouette
The silhouette is computed for each observation $i$ as $s(i) = \frac{b(i) - a(i)}{\max(a(i), b(i))}$ where $a(i)$ is the average dissimilarity with members of the cluster to which $i$ belongs, and $b(i)$ the minimum average dissimilarity to members of another cluster. The silhouette values of members of a cluster $k$ ...
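A sketch of the plot being described, using the cluster package on simulated data (plot.silhouette groups bars by cluster and sorts them by $s(i)$ automatically):

# Silhouette index plot for a 2-cluster k-means solution.
library(cluster)
set.seed(1)
x   <- rbind(matrix(rnorm(100, 0), ncol = 2), matrix(rnorm(100, 4), ncol = 2))
km  <- kmeans(x, centers = 2)
sil <- silhouette(km$cluster, dist(x))
plot(sil)   # one bar per observation, clusters stacked, sorted within cluster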
48,146
Machine learning predicted value
There are many machine learning methods that do aim to estimate the conditional mean of the data, such as artificial neural networks, but there are also many that do not (such as SVMs, decision trees, etc.). The motivation of the SVM is that it is better to solve the particular problem at hand directly, rather than sol...
48,147
EM algorithm R code on Cox PH model with frailty
coxph() actually implements a penalised log-likelihood approach which turns out to return the same estimates as the EM algorithm in the case of gamma frailties when method="em"; see Therneau and Grambsch (2000, Section 9.6). (method actually refers to the method used to select a solution for theta, the heterogeneity pa...
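A usage sketch with the survival package's built-in lung data; the covariate choice is arbitrary, and method = "em" selects theta as described:

# Gamma frailty per institution, theta chosen by the EM-equivalent criterion.
library(survival)
fit <- coxph(Surv(time, status) ~ age +
               frailty(inst, distribution = "gamma", method = "em"),
             data = lung)
summary(fit)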
48,148
How to interpret the coefficients returned by cv.glmnet? Are they feature-importance?
First of all, any variable with a coefficient of zero has been dropped from the model, so you can say it was unimportant. Second of all, you can't really make inferences about the importance of coefficients, unless you scaled them all prior to the regression, such that they all had the same mean and standard deviation ...
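A sketch of extracting the coefficients at the cross-validated lambda, with predictors scaled beforehand as the answer recommends (simulated data):

# Lasso via cv.glmnet; zero coefficients correspond to dropped variables.
library(glmnet)
set.seed(1)
x <- scale(matrix(rnorm(100 * 10), ncol = 10))   # same mean/SD per column
y <- x[, 1] * 3 + rnorm(100)
cvfit <- cv.glmnet(x, y, alpha = 1)
coef(cvfit, s = "lambda.min")                    # "." entries were dropped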
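As a hedged sketch (x and y are placeholders for the predictor matrix and response; note that glmnet also standardises internally by default):
library(glmnet)
x <- scale(x)                          # put all predictors on a common scale
cvfit <- cv.glmnet(x, y, alpha = 1)    # lasso with cross-validated lambda
coef(cvfit, s = "lambda.min")          # zero entries were dropped from the model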
48,149
Other substitution matrices for missing value state in sequence analysis with TraMineR?
You are right, to compute "OM" dissimilarities with missing states you need substitution costs for replacing missing values. However, this is exactly what the TraMineR seqdist function expects. The seqdist help page states: "If the OM method is selected, seqdist expects a substitution cost matrix with a row and a colum...
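A minimal sketch of that workflow (seq.obj stands for a state sequence object created with seqdef(); the cost values are purely illustrative):
library(TraMineR)
## substitution matrix with an extra row/column for the missing state
costs <- seqsubm(seq.obj, method = "CONSTANT", cval = 2, with.missing = TRUE, miss.cost = 1.5)
d <- seqdist(seq.obj, method = "OM", indel = 1, sm = costs, with.missing = TRUE)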
48,150
Other substitution matrices for missing value state in sequence analysis with TraMineR?
Thank you for answering, but as far as I can see, my question isn't answered. Because I'm not even sure whether I posted it the way it should be, I'll try to ask again this way: we tried this before, the way Gilbert described it. Using "seqdef" (and seqsum before) there is indeed the option of defining 'real states', but not t...
48,151
Poisson regression with (auto-correlated) time series
I had a similar problem and was told to consult Chapter 4 of Regression Models for Time Series Analysis by Benjamin Kedem and Konstantinos Fokianos. I have not yet gotten around to digesting this book, but it looks highly relevant (though fairly technical) as far as I can tell. I also wonder if this can be handled in a...
48,152
Poisson regression with (auto-correlated) time series
1. Use negative binomial regression, which deals with the overdispersion. In Stata, this is nbreg.
2. Use zero-inflated negative binomial regression, which deals with the excessive zeros. In Stata, this is zinb.
3. & 4. You could try orthogonalizing the autocorrelated variables. In Stata, this is orthog var1 var2 var3, gen(newva...
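For reference, rough R analogues of these Stata commands, assuming a count outcome y and predictors x1, x2 in a data frame df:
library(MASS)   # glm.nb
library(pscl)   # zeroinfl
nb <- glm.nb(y ~ x1 + x2, data = df)                          # negative binomial
zi <- zeroinfl(y ~ x1 + x2 | x1, data = df, dist = "negbin")  # zero-inflated NB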
48,153
What's the Bayesian counterpart to Pearson product-moment correlation?
There is no essential Bayesian / frequentist divide with a correlation any more than there is a Bayesian equivalent of a mean or median. A correlation is just an arithmetic calculation. The need for specific Bayesian techniques only arises when you do inference with it, so the appropriate Bayesian approach would depe...
48,154
How to find the rows that meet some conditions in a sequence data set
To select some sequences, you need to create a condition vector. For instance, you can select the sequences with a length lower than 1440 using the seqlength function. Here is an example with the "mvad" data set.
## Loading the library
library(TraMineR)
data(mvad)
## Defining sequence properties
mvad.alphabet <- c("emp...
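The selection step that the truncated example is building toward presumably looks like this (seq.obj standing in for the sequence object returned by seqdef()):
cond <- seqlength(seq.obj) < 1440   # logical condition vector
short.seq <- seq.obj[cond, ]        # keep only the matching sequences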
48,155
Mahalanobis Distance on Singular Data
Why do you think there is no way that matrix could be singular? A QR decomposition shows that the rank of this 380 x 372 matrix is just 300. In other words, it is highly singular:
url <- "http://mkk.szie.hu/dep/talt/lv/CentInpDuplNoHeader.txt"
df <- read.table(file = url, header = FALSE)
m <- as.matrix(df)
dim(m)
# [1...
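The rank computation itself is elided in the truncated snippet; it can presumably be done along these lines:
qr(m)$rank                              # numerical rank via QR decomposition
sum(svd(m)$d > 1e-8 * max(svd(m)$d))    # or: count the non-negligible singular values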
48,156
Mahalanobis Distance on Singular Data
A singular matrix means that some of the vectors are linear combinations of others. Thus, some vectors do not add any useful information to the Mahalanobis distance calculation. A generalized inverse or pseudoinverse effectively calculates an "inverse-like" matrix that ignores some of this noninformative information. T...
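A minimal sketch with MASS::ginv, m standing in for the data matrix from the thread:
library(MASS)
S <- cov(m)
Sinv <- ginv(S)                                          # Moore-Penrose pseudoinverse
d2 <- mahalanobis(m, colMeans(m), Sinv, inverted = TRUE)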
48,157
Mahalanobis Distance on Singular Data
What I would suggest as a solution is the penalized Mahalanobis distance. You can see this blog post for details: http://stefansavev.com/blog/better-euclidean-distance-with-the-svd-penalized-mahalanobis-distance/. You can also check "The Elements of Statistical Learning" by Hastie et al., in particular the sections on ridge...
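In the spirit of that post, a minimal sketch of a ridge-style penalisation (lambda is a tuning constant you would have to choose yourself):
lambda <- 0.1
S.pen <- cov(m) + lambda * diag(ncol(m))   # shrink the covariance towards the identity
d2 <- mahalanobis(m, colMeans(m), S.pen)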
48,158
Using lme to analyse a complete randomized block design with repeated measures: Is my model correct?
This is the model I might start with:
fit <- lme(Value ~ Treatment * Year, random = ~1|Block, data = mydata)
I would include the year as a fixed effect, since a temporal trend in biodiversity can usually be expected and would also be of interest. However, this is guesswork, because I don't know the background of t...
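Since the design has repeated measures, one extension worth considering is a within-block temporal correlation structure; a hedged sketch, assuming Year takes consecutive integer values (otherwise corCAR1 would be the continuous-time analogue):
library(nlme)
fit2 <- lme(Value ~ Treatment * Year, random = ~1 | Block, correlation = corAR1(form = ~ Year | Block), data = mydata)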
48,159
Is it necessary to report the bivariate correlations when reporting logistic regression?
You might want to check the following papers, which discuss how to report findings from logistic regression analysis:
- Reporting results of a logistic regression
- Recommendations for the Assessment and Reporting of Multivariable Logistic Regression in Transplantation Literature
- Logistic regression in the medical literatu...
48,160
Is it necessary to report the bivariate correlations when reporting logistic regression?
I am pretty sure that APA6 makes no recommendation on this. If this is for a journal, you should check with them. If they have online appendices, then @Bernd's idea of putting the correlations in an appendix will almost surely work. If not ... well, in my reading in the social sciences and medicine, I rarely see th...
48,161
Confidence intervals for proportions (prevalence)
You could try a nonparametric bootstrap approach. For example:
require(boot)
## statistic: the mean of the resampled values (the prevalence, for 0/1 data)
the.means <- function(dt, i) mean(dt[i])
## 10,000 bootstrap resamples
boot.obj <- boot(data = mydata, statistic = the.means, R = 10000)
## percentile 95% interval
quantile(boot.obj$t, c(.025, .975))
You can repeat this for each of your 6 subsets of data.
48,162
Confidence intervals for proportions (prevalence)
Joe, check whether (sample size)*(proportion diagnosed) >= 5 for each hospital or group of hospitals by age/risk score. If so, then the normal distribution closely approximates the binomial distribution and the formula 95% CI = p_hat +/- 1.96*(p_hat*(1-p_hat)/n)^0.5 may be used. For a better approximation, use the Wilson score interv...
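Both intervals in a quick R sketch (x diagnosed out of n patients; both names are placeholders):
p <- x / n
p + c(-1, 1) * 1.96 * sqrt(p * (1 - p) / n)   # normal approximation
prop.test(x, n, correct = FALSE)$conf.int     # Wilson score interval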
48,163
Confidence intervals for proportions (prevalence)
Updated Regression Approach
Here's a way that might work. You can "expand" your data to patient level, so each row corresponds to a patient, who is either diagnosed or not. It might look like this:
hospital  age  risk  diagnosed
       1    1     0          1
       1    0     1          0
       1    1  ...
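The regression on the expanded data would then presumably be an ordinary logistic fit (expanded being the hypothetical patient-level data frame):
fit <- glm(diagnosed ~ factor(hospital) + age + risk, family = binomial, data = expanded)
confint(fit)   # confidence intervals for the coefficients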
48,164
Probability of visiting all other states before return
This looks like homework so I'm trying to give a hint, not a solution. For part (b), you definitely want to use the structure of the graph. Without loss of generality suppose you start at $12$ and your first step is to $1$. Can you say what the probability is that you hit $11$ before you hit $12$?
48,165
K-means Mahalanobis vs Euclidean distance
I haven't understood the type of transformation you used, so my answer will be a general one. The short answer is: how much you will gain from using the Mahalanobis distance really depends on the shape of the natural groupings (i.e. clusters) in your data. The choice of Mahalanobis vs Euclidean distance in k-means is really ...
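One common way to get a (global) Mahalanobis-like k-means in R is to whiten the data first, since Euclidean distance on the whitened data equals Mahalanobis distance on the original scale (x is a placeholder numeric matrix):
xc <- sweep(x, 2, colMeans(x))   # centre the columns
R <- chol(cov(x))                # cov(x) = t(R) %*% R
z <- xc %*% solve(R)             # whitened data
km <- kmeans(z, centers = 3)     # Euclidean k-means on z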
48,166
Calculating Log Prob. of Dirichlet distribution in High Dimensions
The p.d.f. of the Dirichlet distribution is defined as $$ f(\theta; \alpha) = B(\alpha)^{-1} \prod_{i=1}^K \theta_i^{\alpha_i - 1} $$ where $B(\alpha)$ is the generalized Beta function. Notice that if any $\theta_i$ is 0, then the whole product is zero. In other words, the support of a Dirichlet distribution is over vectors $\t...
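In log space the density is numerically stable via lgamma; a minimal sketch in R (the function name is hypothetical):
log_ddirichlet <- function(theta, alpha) {
  ## log B(alpha) = sum(lgamma(alpha)) - lgamma(sum(alpha))
  sum((alpha - 1) * log(theta)) - sum(lgamma(alpha)) + lgamma(sum(alpha))
}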
48,167
Why is it valid to account for k-1 intercepts w/ only 1 random intercept parameter?
Things are a little more complicated with mixed effects models (as you are realizing). The estimated random effects from the normal model are not exactly the same as the fixed effects that you would compute if you calculated each of the individual intercepts. The fixed effect model assumes that all the groups have t...
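A small sketch of the contrast being described, using lme4 (y, group, df are placeholders):
library(lme4)
m.fixed  <- lm(y ~ group, data = df)              # k-1 separate fixed intercepts
m.random <- lmer(y ~ 1 + (1 | group), data = df)  # a single variance parameter
ranef(m.random)                                   # group effects, shrunken towards 0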
48,168
Why is it valid to account for k-1 intercepts w/ only 1 random intercept parameter?
When I studied random vs fixed effects I also couldn't grasp how one can "estimate" parameters without losing df. But the big difference, as already mentioned, is that estimating the RE is not really estimation in the classical OLS sense. In my understanding it is better to call it "prediction", since we estimat...
48,169
Fewer variables have higher R-squared value in logistic regression
You should be careful about relying only on the R^2 when interpreting fit in a non-linear regression. You may want to compare the log-likelihood. However, a decrease in R^2 with an increase in variables generally means the variables are interacting in a way that is not providing additional explanation in the model. One of the c...
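A hedged sketch of comparing nested logistic fits by log-likelihood rather than R^2 (y, x1, x2, df are placeholders):
m1 <- glm(y ~ x1, family = binomial, data = df)
m2 <- glm(y ~ x1 + x2, family = binomial, data = df)
logLik(m1); logLik(m2)
anova(m1, m2, test = "Chisq")   # likelihood-ratio test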
48,170
Bayesian models and exchangeability
You're right, but:
More precisely, we should say that $X_1, \ldots, X_n$ are exchangeable under the prior predictive distribution (as well as the posterior).
This fact is elementary (conditionally i.i.d. $\implies$ exchangeability); it does not stem from de Finetti's theorem (this theorem claims that exchangeabili...
48,171
Bayesian models and exchangeability
There are a few points worth noting here:
(IID $\implies$ exchangeability): The conditional IID form immediately implies exchangeability of the values. This does not require de Finetti's representation theorem. Stéphane Laurent is right to characterise this as an elementary result (proof below).
(IID $\impliedby$ in...
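A one-line sketch of the elementary result referenced above (the promised proof is cut off, so this reconstruction is an assumption): for any permutation $\sigma$,
$$p(x_1,\dots,x_n) = \int \prod_{i=1}^n f(x_i\mid\theta)\,\pi(\theta)\,d\theta = \int \prod_{i=1}^n f(x_{\sigma(i)}\mid\theta)\,\pi(\theta)\,d\theta = p(x_{\sigma(1)},\dots,x_{\sigma(n)}),$$
since a finite product is invariant under reordering of its factors.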
48,172
Bayesian models and exchangeability
No, I think your reasoning is right. Exchangeability was a very important property to de Finetti in his development of probability theory (which is Bayesian). It is also important regarding permutation tests. Often in doing statistical inference we assume observations are independent and identically distributed and ...
48,173
How to go about selecting an algorithm for approximate Bayesian inference
First you have to decide how much time you can afford. If you have a large amount of time for your numerical experiments, you can try an MCMC method; in this case it is also possible to avoid complex integrations. If you have a strong background in statistics and you want to integrate a l...
48,174
How to go about selecting an algorithm for approximate Bayesian inference
I think there is no universal solution, so I'll try to give a couple of pieces of general advice. If the problem dimension is high you have to use MCMC gingerly; in this case other methods seem to be more helpful. Another point is whether the variables you consider are independent or not. If they are, you can use Expectation Propag...
48,175
How do I get a $p$-value from the Cochran-Armitage trend test?
This is just a different definition of the statistic $T$. Call your statistic $T_1$ and the other $T_2$. Note that $T_2 = T_1/N$, and that is the reason the variance of $T_2$ differs from that of $T_1$ by a factor of $1/N^2$. However, you should note that the chi-square statistic is the same in either case. For $T_2$ the...
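The p-value step in a hedged R sketch (E.T and V.T are hypothetical names for the null mean and variance of the chosen statistic):
z <- (T1 - E.T) / sqrt(V.T)               # standardised trend statistic
2 * pnorm(-abs(z))                        # two-sided p-value
pchisq(z^2, df = 1, lower.tail = FALSE)   # identical chi-square version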
48,176
Calculate R-squared with JAGS and R
There are a couple of ways you can do this. The first would be to use the mean of the posterior for each of the $\mu_i$, and calculate a residual using this as the "estimated value" corresponding to $\hat{\beta}X$ in OLS. You then calculate the variance of the residuals as usual and plug it into the $R^2$ calculation...
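A hedged sketch of the first approach, assuming mu.post is an (iterations x n) matrix of posterior draws of the $\mu_i$ and y is the response:
mu.hat <- colMeans(mu.post)      # posterior mean of each mu_i
res <- y - mu.hat                # residuals
R2 <- 1 - var(res) / var(y)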
48,177
Information gain as a feature selection for 3-class classification problem
Information gain is a reasonable objective to use for selecting features (even when there are multiple classes). Indeed, information gain is a traditional metric for selecting decision attributes when building decision trees. Note that a classic problem with decision trees is when to stop adding decision nodes---too...
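A minimal sketch of computing the information gain of one discrete feature for a (possibly multi-class) label; both vectors are placeholders:
entropy <- function(y) {
  p <- prop.table(table(y))
  p <- p[p > 0]
  -sum(p * log2(p))
}
info.gain <- function(x, y) {
  w <- prop.table(table(x))                 # weight of each feature value
  cond <- sapply(split(y, x), entropy)      # H(Y | X = x)
  entropy(y) - sum(w[names(cond)] * cond)   # H(Y) - H(Y | X)
}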
48,178
Upper/lower standard error makes sense?
Given your updated question, the claim of @onestop is still valid: it's not OK to call them standard errors. Furthermore, the method seems strange and not standard at all. What was really done in your case is to divide the population in two (values above and below the mean) and calculate the standard error ...
48,179
Upper/lower standard error makes sense?
It's OK to call them error bars, but as they're asymmetric they do not represent the standard error, so it's not correct to talk about a 'lower/upper standard error'. I assume the error bars here represent confidence intervals, though they might also be credible intervals if they were constructed using Bayesian method...
48,180
What is an appropriate method for providing bounds when performing maximum likelihood parameter estimation?
What you are doing in your first code block is indeed equivalent to box-constrained optimisation. Here's some sample code, with some unnecessary output removed to save space:
> foo.unconstr <- function(par, x) -sum(dnorm(x, par[1], par[2], log=TRUE))
>
> foo.constr <- function(par, x)
+ {
+   ll <- NA
+   if (par[1] ...
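For comparison, the direct box-constrained route via optim's L-BFGS-B method (the bounds are illustrative; par = c(mean, sd)):
fit <- optim(par = c(0, 1), fn = foo.unconstr, x = x,
             method = "L-BFGS-B",
             lower = c(-Inf, 1e-8),   # keep the sd strictly positive
             upper = c(Inf, Inf))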
48,181
How do I calculate a posterior distribution for a Poisson model with exponential prior distribution for the parameter?
$\Pr(\text{data}\mid\text{model}) = \Pr(N=n\mid\lambda) = \frac{\lambda^n}{n!}e^{-\lambda}$.
$p(\text{model}) = p(\lambda) = e^{-\lambda}$.
$p(\lambda\mid N=n) = \dfrac{\frac{\lambda^n}{n!}e^{-\lambda}\cdot e^{-\lambda}}{\int_0^\infty \frac{\lambda^n}{n!}e^{-\lambda} \cdot e^{-\lambda}\, d\lambda} = 2^{n+1}\frac{\lambda^n}{n!}e^{-2\lambda}$, i.e. the kernel of a Gamma$(n+1,2)$ distribution ...
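A quick numerical sanity check in R that the posterior is the Gamma$(n+1,2)$ density (n = 4 is an arbitrary example):
n <- 4
lam <- seq(0.01, 10, 0.01)
post <- 2^(n + 1) * lam^n * exp(-2 * lam) / factorial(n)
all.equal(post, dgamma(lam, shape = n + 1, rate = 2))   # TRUE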
48,182
Selecting features using Adaboost
Well, first of all, in the presentation you mentioned they just used whether the value of one feature is larger/smaller than some threshold (i.e. a decision tree of depth 1, a "stump") as a partial classifier, hence the feature-classifier ambiguity. Going back to the question, there are numerous ways to get a feature ranking from a boosting -...
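One concrete way to get such a ranking in R, as a hedged sketch (gbm fits boosted stumps when interaction.depth = 1; variable names are placeholders):
library(gbm)
fit <- gbm(y ~ ., data = df, distribution = "adaboost",
           n.trees = 500, interaction.depth = 1)
summary(fit)   # relative influence of each feature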
48,183
Dealing with lots of ties in kNN model
In some situations you have a lot of data items that might be considered tied in distance, especially if your data are discrete (e.g. your matrix is made up of integers). A "hack" that might work is to add a very small amount of pseudo-random noise to the data. This will reduce the number of data items ...
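In R this trick is essentially what jitter() provides (x is a placeholder numeric matrix):
set.seed(1)
x.jit <- apply(x, 2, jitter, factor = 1e-3)   # tiny column-wise noise breaks ties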
48,184
Dealing with lots of ties in kNN model
I guess that you have ties because you are solving a multi-class problem? This might occur, for instance, if you pick $k=5$ neighbors and your points belong to $1$ out of $3$ possible classes. Suppose a point $x$ has 2 neighbors of class 1, 2 neighbors of class 2 and 1 neighbor of class 3, namely $x_1,x_4\in C_1$, $x_2,...
48,185
Dealing with lots of ties in kNN model
I had this problem in some real-world data. Exploring the dataset, I found that there were several hundred rows that all had 0 for the 3 independent variables. I removed these from the input dataset for the kNN, which solved the problem of getting the kNN to execute. I imputed the mode value for the dependent variable (also 0 in t...
48,186
Minimax estimator for the mean of a Poisson distribution
Define a sequence of prior distributions, $\pi_n = Ga(\lambda|a_n,b_n)$, for the sequences $a_n = \alpha/n$ and $b_n = \beta/n$. The Bayes estimator for this sequence is $\delta_n = (a_n+x)/(b_n+1)$ (the posterior mean), and the integrated risk is $$ r_n = \int^\infty_0 (\lambda-\delta_n)^2 \,Poi(x|\lambda)\,Ga(\lambda|a_n,b_n)\,d\lambda ...
48,187
Minimax estimator for the mean of a Poisson distribution
The MSE risk of the estimator $\widehat \lambda(x)=x$ is its variance, $R(\lambda, \widehat \lambda)=\mathrm{Var}_\lambda(x)=\lambda$, and hence its maximal risk is infinite, $\sup_{\lambda\in \mathbb{R}_+}R(\lambda,\widehat \lambda)=\infty$. Obviously, it cannot be minimax. However, this estimator is minimax with respect t...
48,188
How to, or what is the best way, to apply propensity scores after matching?
This is a complicated question. Simple nearest-neighbor matching pairs each observation in the treatment group with a single person in the control group who has a similar propensity score. Then you compute the difference in outcome $Y$ for each pair and calculate the mean difference across pairs. That's your trea...
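A hedged sketch of that pipeline with the MatchIt package (variable names are placeholders):
library(MatchIt)
m <- matchit(treat ~ x1 + x2, data = df, method = "nearest")
md <- match.data(m)
## mean outcome difference between matched treated and control
with(md, mean(Y[treat == 1]) - mean(Y[treat == 0]))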
48,189
How to, or what is the best way, to apply propensity scores after matching?
You may want to consider other strategies based on propensity scores, like including them as model covariates, or very similar concepts, like Inverse-Probability-of-Treatment weights. These might work in situations where you can't, or don't want to, deal with matching. This seems like a decent overview.
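A minimal IPTW sketch (again with placeholder names; a continuous outcome is assumed, and in practice one would also want robust standard errors):
ps <- fitted(glm(treat ~ x1 + x2, family = binomial, data = df))
w <- ifelse(df$treat == 1, 1 / ps, 1 / (1 - ps))   # inverse-probability weights
fit <- glm(Y ~ treat, data = df, weights = w)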
48,190
How to, or what is the best way, to apply propensity scores after matching?
It is not recommended to include PS as a covariate in an outcome model. You might want to consider a stratified analysis based on strata of the PS.
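For instance, stratifying on quintiles of the estimated PS (ps is a placeholder vector of fitted propensity scores):
df$stratum <- cut(ps, breaks = quantile(ps, probs = 0:5 / 5), include.lowest = TRUE)
## treatment effect within each stratum, to be averaged afterwards
by(df, df$stratum, function(d) mean(d$Y[d$treat == 1]) - mean(d$Y[d$treat == 0]))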
48,191
MCMC for infinite variance posteriors
There is nothing wrong with infinite variance distributions, per se... For instance, simulating a Cauchy using rcauchy(10^3) produces a sample truly from a Cauchy distribution! Hence MCMC has no specific feature to "fight" for or against infinite variance distributions. The difficulty with infinite variance distributio...
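A quick illustration of why moment summaries misbehave even for exact draws (the sampler is not at fault):
set.seed(42)
x <- rcauchy(10^3)
plot(cumsum(x) / seq_along(x), type = "l", ylab = "running mean")   # never settles: the mean does not exist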
48,192
How to determine the marginal pdf, the posterior?
What you get as your bottom line is of the form $$ (\sigma^2)^{-\alpha-1-nd/2}\exp\{-A\sigma^{-2}\} $$ so the posterior distribution of $\sigma^{2}$ is an inverse gamma distribution. (Note that $$ \text{tr}((\sigma^2\Sigma)^{-1}S)=\sigma^{-2}\text{tr}(\Sigma^{-1}S)\,. $$) From this property, you can derive the ...
48,193
How to determine the marginal pdf, the posterior?
Note that the normalising constant for an IG variable is $$\frac{b^a}{\Gamma(a)}$$ This is equal to the reciprocal of the integral over $\sigma^{2}$ of the kernel of the pdf; hence we have $$\int_0^{\infty}(\sigma^{2})^{-(a+1)}\exp\left(-\frac{b}{\sigma^2}\right)d\sigma^2=\frac{\Gamma(a)}{b^a}$$ Your integral is of thi...
48,194
Resources about probability proportional to size (PPS) sampling method
To me, the ultimate resource on PPS is Brewer and Hanif (1982) Sampling with Unequal Probabilities. Unfortunately, it is nearly impossible to lay one's hands on. It is also highly technical and assumes a knowledge somewhere between Lohr (2009) "Sampling: Design and Analysis" and Thompson (1997) Theory of Sample Surveys...
48,195
Error exponent in hypothesis testing
Essentially, the answer to your question is that the behavior of $\alpha_n$ and $\beta_n$ is somewhat different when the Bayesian minimum-error-probability rule is used and one is trying to minimize $e_n$. This is because the decision regions $A_n$ and $A_n^c$ are different. In contrast to your (1) and (2), the behav...
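To see the contrast concretely, here is a rough Monte Carlo sketch of my own (not from the original question) for testing $H_0: N(0,1)$ against $H_1: N(1,1)$. The Bayes rule with equal priors thresholds the log-likelihood ratio at $0$, so $\alpha_n$ and $\beta_n$ decay at the same Chernoff exponent (here $1/8$), rather than $\alpha_n$ being pinned while $\beta_n$ decays at the faster Stein exponent:

```python
import numpy as np

rng = np.random.default_rng(1)
reps = 100_000

for n in (5, 10, 20, 40):
    # Log-likelihood ratio log(f1/f0) for N(1,1) vs N(0,1) is sum(x) - n/2
    x0 = rng.normal(0.0, 1.0, size=(reps, n))   # samples under H0
    x1 = rng.normal(1.0, 1.0, size=(reps, n))   # samples under H1
    llr0 = x0.sum(axis=1) - n / 2
    llr1 = x1.sum(axis=1) - n / 2

    # Bayes (equal-prior) decision: choose H1 iff the LLR is positive
    alpha = (llr0 > 0).mean()    # type I error
    beta = (llr1 <= 0).mean()    # type II error
    print(n, alpha, beta, -np.log(max(beta, 1 / reps)) / n)  # exponent -> 1/8
```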
48,196
Combining p-values for averaging technical protein quantification replicates in python
To combine p-values means to find formulas $g(p_1,p_2, \ldots, p_n)$ (one for each $n\ge 2$) for which $g$ is symmetric in its arguments; $g$ is strictly increasing separately in each variable; and $P=g(P_1,\ldots, P_n)$ has a uniform distribution when the $P_i$ are independently uniformly distributed. Symmetry me...
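On the Python side of the question: Fisher's combination $-2\sum_i \ln p_i$, which is $\chi^2_{2n}$ under the null, is one such $g$, and SciPy exposes it (and Stouffer's $z$-based method) directly:

```python
from scipy.stats import combine_pvalues

pvals = [0.04, 0.20, 0.11]  # made-up p-values, one per technical replicate

# Fisher's method: -2 * sum(log p_i) ~ chi^2 with 2n df under the null
stat_f, p_fisher = combine_pvalues(pvals, method='fisher')

# Stouffer's method: sum of z-scores, which also admits weights
stat_s, p_stouffer = combine_pvalues(pvals, method='stouffer')

print(p_fisher, p_stouffer)
```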
48,197
Simultaneous confidence intervals for multinomial parameters, for small samples, many classes?
Glaz and Sison (Journal of Statistical Planning and Inference, 1999) contains formulae for the Sison and Glaz confidence intervals for the MLE, which simulations showed to perform quite well, as well as some parametric bootstrap confidence intervals, also for the MLEs. I won't try to reproduce the math here, since there's r...
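If you mainly want to use the intervals rather than re-derive them, statsmodels implements the Sison-Glaz construction (alongside Goodman's chi-square intervals) in `multinomial_proportions_confint`; a minimal sketch with made-up counts:

```python
import numpy as np
from statsmodels.stats.proportion import multinomial_proportions_confint

counts = np.array([12, 8, 5, 3, 2, 1, 1])  # made-up small-sample counts, 7 classes

# Simultaneous 95% intervals via the Sison-Glaz method
ci_sg = multinomial_proportions_confint(counts, alpha=0.05, method='sison-glaz')

# Goodman's chi-square based simultaneous intervals, for comparison
ci_good = multinomial_proportions_confint(counts, alpha=0.05, method='goodman')

print(ci_sg)
print(ci_good)
```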
48,198
Power analysis for matched poisson variables
The power analysis by simulation is OK; I think what you're really asking for is a way to compare matched Poisson variables other than the Wilcoxon test or paired t-test. A brute force approach would be: use as test statistic $\sum_i X_{1i} - X_{2i}$; assume $H_0$ (same rate in the two groups), estimate the common rate $\lambda$...
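A minimal numpy sketch of that brute-force recipe (the rates, pair count, and simulation sizes below are my own illustrative choices; following the recipe, counts are simulated independently at the pooled rate under $H_0$, so any pairing-induced correlation is ignored):

```python
import numpy as np

rng = np.random.default_rng(2)

def mc_pvalue(x1, x2, n_sim=2000):
    """Two-sided Monte Carlo p-value for T = sum(X1i - X2i) under H0: common rate."""
    t_obs = x1.sum() - x2.sum()
    lam0 = np.concatenate([x1, x2]).mean()   # pooled rate estimate under H0
    n = len(x1)
    sims = (rng.poisson(lam0, size=(n_sim, n)).sum(axis=1)
            - rng.poisson(lam0, size=(n_sim, n)).sum(axis=1))
    return np.mean(np.abs(sims) >= abs(t_obs))

# Outer loop: estimate power under a hypothesized alternative (rates 2.0 vs 2.8)
n_pairs, lam1, lam2, n_rep = 30, 2.0, 2.8, 500
rejections = sum(
    mc_pvalue(rng.poisson(lam1, n_pairs), rng.poisson(lam2, n_pairs)) < 0.05
    for _ in range(n_rep)
)
print("estimated power:", rejections / n_rep)
```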
48,199
What is the difference between lifetime risk, lifetime morbid risk, and lifetime prevalence, and lifetime cumulative incidence?
These terms describe various longitudinal measures of disease frequency, distinguished by whether units of time or occurrences appear in the denominator and numerator of the quantity measured. Consider herpes as an example. Someone experiencing an outbreak of herpes once in their life contributes one event to the denominator of lifetime risk...
48,200
What is the difference between lifetime risk, lifetime morbid risk, and lifetime prevalence, and lifetime cumulative incidence?
Since you asked for a reference regarding the terms: I use Porta's A Dictionary of Epidemiology when I need to look up epidemiological terms. I found him through one of Rothman's references in his Epidemiology: An Introduction, where he uses Porta's definition of cohorts. I don't think any of the books cover lifetime...