Dataset columns (name, type, value/length range):

    idx               int64    values  1 .. 56k
    question          string   length  15 .. 155
    answer            string   length  2 .. 29.2k
    question_cut      string   length  15 .. 100
    answer_cut        string   length  2 .. 200
    conversation      string   length  47 .. 29.3k
    conversation_cut  string   length  47 .. 301
45,801
Interpreting the random effect in a mixed-effect model
I'd suggest centering the interpretation around intraclass correlations: For DV1, the total random variation not explained by the group differences (the fixed factor) is $\hat\sigma^2_{tot} = 6.2 + 67.7 + 50.2 = 124.1$, so that the total SD is $\hat\sigma_{tot}=11.14$. For DV2, these figures are $\hat\sigma^2_{tot} = ...
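For reference, the intraclass correlation the answer is built around takes the general form below (a sketch; which of the three fitted components above plays the between-cluster role depends on the model output that is cut off):

$$\mathrm{ICC} = \frac{\hat\sigma^2_{\text{between}}}{\hat\sigma^2_{\text{between}} + \hat\sigma^2_{\text{within}}}, \qquad \hat\sigma_{tot} = \sqrt{6.2 + 67.7 + 50.2} = \sqrt{124.1} \approx 11.14.$$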
45,802
Interpreting the random effect in a mixed-effect model
A few things. First, be careful with the general term "variability." In a general context it is fine, but if you are referring to variance or standard deviation specifically, use the correct term. Also, as per my comment to your statement, remember that these are variances of the distributions of subject- and time-spec...
45,803
Unwritten laws and dirty tricks to influence the outcome of a regression analysis
Just because an analysis is sensitive to changes to the model doesn't necessarily imply that the researcher tried many models and then published the one that worked. One need only hypothesize the existence of a number of researchers each considering a somewhat similar question, each trying only one analysis, and the fi...
45,804
Unwritten laws and dirty tricks to influence the outcome of a regression analysis
"not even robust to small changes in the modeling setup": I'm an analytical chemist/chemometrician. In my field, the key words for demonstrating/stating how robust the model/the whole analytical method is against certain influences are robustness and ruggedness (There's a whole body of literature including r...
45,805
Probability of seeing k faces that appear more than 3 times when rolling 10 dice
There is a simple elegant algebraic method. It amounts to little more than repeated pattern matching and replacement (with very simple patterns), making it efficient (at least for small problems like this one). The aim is to compute the generating function for the number of faces appearing $k$ or more times out of $n$...
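As a quick cross-check of whatever the generating-function method yields, here is a Monte Carlo sketch (not the algebraic method itself; per the title, the event counted is a face appearing more than 3 times):

    set.seed(1)
    sims <- replicate(1e5, {
      rolls <- sample(1:6, 10, replace = TRUE)       # roll 10 fair dice
      sum(table(factor(rolls, levels = 1:6)) > 3)    # faces seen > 3 times
    })
    table(sims) / length(sims)   # estimated P(k faces appear more than 3 times)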
45,806
Intercept from standardized coefficients in logistic regression
Start with a simple logistic regression: $$\begin{align} \text{logit}(\mu) &= \beta_0 + \beta_1 x &&\text{(original)}\\ &= \beta_0^* + \beta_1^* (x-\bar{x})/s_x &&\text{(standardized } x)\\ &= (\beta_0^* - \beta_1^*\bar{x}/s_x) + (\beta_1^*/s_x)\,x \end{align}$$ So $\beta_1=\beta_1^*/s_x$ and $\beta_0=\beta_0^*-\beta_1^*\bar{x}/s_x$ ...
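A numerical sketch of this back-transformation with simulated data (all object names illustrative):

    set.seed(1)
    x <- rnorm(500)
    y <- rbinom(500, 1, plogis(-1 + 2 * x))
    fit_std <- glm(y ~ scale(x), family = binomial)   # standardized x
    b <- coef(fit_std)
    c(beta0 = b[1] - b[2] * mean(x) / sd(x),          # back to original scale
      beta1 = b[2] / sd(x))
    coef(glm(y ~ x, family = binomial))               # matches the line above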
45,807
How to get confidence on classification predictions with multi-class Vowpal Wabbit
Unfortunately, because of the filter tree / elimination implementation in ECT, getting a measure of confidence is not straightforward. If you can sacrifice some speed, using -oaa with logistic loss and the -r (--raw_predictions) option gives you raw scores that you can convert to a normalized measure of relative "con...
45,808
Multilevel models including random slopes: how to calculate variance
Let me start with simpler models (detailed answers below), and let me use a slightly different notation (NB: random effects at different levels are assumed to be uncorrelated). Model 1 (Two levels, random intercept, fixed slope). There are $N$ observations of $J$ students: $$\begin{align}y_{ij}&=\pi_{0j}+\beta_1 X_{ij}+\var...
45,809
Multilevel models including random slopes: how to calculate variance
I believe your intuition is correct. With the following model, ranging over schools $k$, individuals within school $j$, and observations within individual within school $i$ (if I get your notation right): $Y_{ijk}=\beta_0+\beta_1 t + b_{0k} + b_{1k}t + b_{jk} + \epsilon_{ijk}$, with $b_{1k} \sim N(0,\sigma_4^2)$, $b_{0k} \sim N...
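Under the independence assumptions above (writing $\mathrm{Var}(\cdot)$ generically, since the names of the remaining variance components are cut off), the marginal variance of the model just stated would be:

$$\mathrm{Var}(Y_{ijk}\mid t) = \sigma_4^2\, t^2 + \mathrm{Var}(b_{0k}) + \mathrm{Var}(b_{jk}) + \mathrm{Var}(\epsilon_{ijk}).$$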
45,810
Can someone give a clear-cut idea of $E(X|X<Y)$?
The i.i.d. requirement creates useful symmetries, so indeed there is a factor of $1/2$ in some formulas. The following is one attempt to interpret "looks like" in a useful and intuitive fashion. The possibilities for $(X,Y)$ partition into three events: $X\lt Y,$ $X\gt Y,$ and $X=Y.$ Therefore, assuming all expectati...
45,811
Trend in residuals vs dependent - but not in residuals vs fitted
1) The residuals and the fitted values are uncorrelated by construction. In fact, if there were any correlation between them, there would be an uncaptured linear trend in the data: we could get a closer fit by changing the coefficients until they were uncorrelated. 2) The residuals and the y-variable are always positively correla...
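Both points are easy to verify numerically; a minimal sketch in R with simulated data:

    set.seed(1)
    x <- rnorm(200)
    y <- 1 + 2 * x + rnorm(200)
    fit <- lm(y ~ x)
    cor(resid(fit), fitted(fit))  # essentially 0, by construction (point 1)
    cor(resid(fit), y)            # positive (point 2)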
45,812
Decision Tree - Splitting Factor Variables
rpart treats ordinal and nominal qualitative variables (factors, in R parlance) differently. For your first variable, provided it has been defined as an ordered factor, the only splits considered would be: {tiny} vs. {small, medium, large, huge}; {tiny, small} vs. {medium, large, huge}; {tiny, small, medium} vs. {large, huge}; {tiny,...
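A minimal sketch of the distinction (data and code invented for illustration):

    library(rpart)
    sizes <- c("tiny", "small", "medium", "large", "huge")
    d <- data.frame(
      size = factor(sample(sizes, 200, replace = TRUE),
                    levels = sizes, ordered = TRUE),   # ordered factor
      y = rnorm(200)
    )
    fit <- rpart(y ~ size, data = d)  # only the 4 order-respecting splits
                                      # listed above are candidates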
45,813
Decision Tree - Splitting Factor Variables
Most decision trees do not handle ordinal factors, just categorical and numerical ones. You can code ordinal factors as numerical if you want to build trees more efficiently. However, if you use them as categorical, a tree can help you check whether your data or ordinal codification has any inconsistency. Most d...
45,814
Book recommendation: sample size determination for hypothesis testing of the mean
Some very basic sample size calculations are discussed here: http://www.itl.nist.gov/div898/handbook/prc/section2/prc222.htm http://www.itl.nist.gov/div898/handbook/prc/section2/prc242.htm For a basic introduction, the book by Moore and McCabe - Introduction to the Practice of Statistics covers some of the basics in ch...
45,815
Book recommendation: sample size determination for hypothesis testing of the mean
I will try to provide some answers, while also giving the disclaimer that I am the author (Ryan) of one of the books that was mentioned. First, I would also recommend the paper by Lenth that was mentioned, as I always provide my students with that paper in the online courses on sample size determination that I teach. ...
45,816
Book recommendation: sample size determination for hypothesis testing of the mean
This is a broad question and there are books on the subject. How Many Subjects?: Statistical Power Analysis in Research (Second Edition) is one that many have found accessible. See also the link Good text on Clinical Trials? provided in a comment.
45,817
How exactly to partition training-set for k-fold cross validation on multi-class dataset?
First you need to decide whether you need model/parameter selection, or just a model. Once your model is fixed, the bootstrap seems to make more sense for determining how your modeling procedure performs. If you are implementing cross validation on multiple datasets, just randomly partition the data without considering their labe...
45,818
How exactly to partition training-set for k-fold cross validation on multi-class dataset?
I think you would generally not want to incorporate the known classifications into the selection of training and test samples. If you do that, the proportion of each class in the testing sample will always be the same as the proportions in the sample used to train the machine, but you actually want sampling var...
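In code, the plain (non-stratified) partition this answer argues for is just a random fold assignment; a sketch, where train_data is a hypothetical data frame:

    k <- 5
    n <- nrow(train_data)                      # train_data: hypothetical
    fold <- sample(rep(1:k, length.out = n))   # class labels play no role
    # rows with fold == i form the i-th validation set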
45,819
Eigenvalues of correlation matrices exhibit exponential decay
Everything has already been pretty much figured out in the comments, thanks to @AndyW, @whuber, and @UriCohen, but I would still like to write it up as a coherent answer. First, let me illustrate the original question. Here is the eigenspectrum of some actual real data (neural recordings) that I happen to work with rig...
45,820
Variance of the sum of a Poisson-distributed random number of (normally distributed) random variables
This is a compound Poisson distribution. The law of total variance gets you the answer. Note that this answer doesn't rely on the normality of the $X$'s at all; it applies to any distribution with mean $\mu$ and variance $\sigma^2$. In the following, $N$ is the Poisson r.v., the $X$'s are the individual components, and $Y$ is the s...
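The law-of-total-variance step, written out for $Y=\sum_{i=1}^{N}X_i$ with $N\sim\mathrm{Poisson}(\lambda)$ independent of the $X_i$:

$$\mathrm{Var}(Y) = \mathrm{E}[N]\,\mathrm{Var}(X_1) + \mathrm{Var}(N)\,\mathrm{E}[X_1]^2 = \lambda\sigma^2 + \lambda\mu^2 = \lambda(\mu^2+\sigma^2).$$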
45,821
Variance of the sum of a Poisson-distributed random number of (normally distributed) random variables
Here's the answer I got through trial and error, which seems to come close to Monte Carlo simulated estimates in every scenario I have tried: $\text{Var} = \lambda(\mu^2+\sigma^2)$. (Edit: Yes, this agrees with Glen_b's updated answer, much to my delight!)
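A sketch of the kind of Monte Carlo check the answer mentions (parameter values arbitrary):

    set.seed(1)
    lambda <- 3; mu <- 2; sigma <- 1.5
    y <- replicate(1e5, sum(rnorm(rpois(1, lambda), mu, sigma)))
    var(y)                      # simulated variance
    lambda * (mu^2 + sigma^2)   # closed form: 3 * 6.25 = 18.75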
45,822
Variance of the sum of a Poisson-distributed random number of (normally distributed) random variables
By the formula for the variance of a product, if two random variables are independent, the variance of their product is $D(XY) = E(X)^2 D(Y) + E(Y)^2 D(X) + D(X) D(Y)$, which is $E(X^2) E(Y^2) - E(X)^2 E(Y)^2$. So, if the distributions are Poisson and Normal, I'd guess it's $\lambda\sigma^2$.
45,823
How to describe/explain the shape of a distribution which has two peaks?
To describe such a two-peak shape to another person, you'd call it 'bimodal' (which just means 'two modes' - generally taken to be two local modes, even though only one of them might be 'the mode' of the distribution). You could then seek to describe the locations and spreads and relative proportions or heights of the ...
45,824
95% confidence interval for a given data set
A few comments: Your title "95% confidence interval for a given data set" is misleading. Confidence intervals are calculated for computed values like means; it doesn't really mean anything to have a confidence interval for a data set. Your proposed method will work with huge data sets, but will fail with small data s...
45,825
95% confidence interval for a given data set
An outlier is a surprising value. Therefore, your method is flawed. If the data set is large, then, if it is normally distributed, you expect some values beyond 2 sd from the mean. If the data set is small, and normally distributed, you don't expect values even 1 sd from the mean. e.g. it is not surprising that there a...
45,826
95% confidence interval for a given data set
Another approach you might consider is Tukey's outlier filter. It is robust as well, since it is based on quantiles. The idea is to classify as outliers the data points that lie above the 3rd quartile + 1.5 times the interquartile range (IQR, the distance between the 1st and 3rd quartiles), or below the 1st quartile - 1.5...
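A minimal sketch of the filter for a numeric vector x (x being whatever data you are screening):

    q   <- quantile(x, c(0.25, 0.75))
    iqr <- q[2] - q[1]
    is_outlier <- x < q[1] - 1.5 * iqr | x > q[2] + 1.5 * iqr
    x[is_outlier]   # the flagged points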
45,827
Performing a time series ARIMA model on natural gas power demand using the forecast package from R
Unfortunately you have a few technical errors here. You cannot make an ARIMAX model with the library(forecast) function auto.arima: the xreg argument makes it a regression model with ARMA errors. That is something I had to learn the hard way by wondering about the results... :) And you have to supply FUTURE values for the xreg argume...
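A sketch of that point (demand, temp, and temp_future are illustrative objects): the model fit with xreg is regression with ARMA errors, and forecasting it requires the regressors over the forecast horizon:

    library(forecast)
    fit <- auto.arima(demand, xreg = temp)      # regression + ARMA errors,
                                                # not a true ARIMAX
    fc  <- forecast(fit, xreg = temp_future)    # FUTURE xreg values required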
45,828
Performing a time series ARIMA model on natural gas power demand using the forecast package from R
The Unobserved Components Model (UCM) is a state space modeling approach to time series forecasting and regression analysis. It is a very flexible, easy-to-interpret modeling approach. I'm not trained in statistics; UCM is a very intuitive method that a non-statistician like me could easily adopt and focus on solvin...
45,829
Performing a time series ARIMA model on natural gas power demand using the forecast package from R
Fortunately one doesn't have to assume the appropriate lag structure, as this can be suggested via the impulse response weights, which can be (and was) modified empirically via model diagnostics. Additionally one doesn't have to assume the number of seasonal indicators and their start dates, as these can be easily found...
45,830
Interpretation of p-value in comparing proportions between two small groups in R
You are mixing a couple of concepts (one of which is really outdated in the age of computers, but persists anyway). For numerical variables you should use the t-test when you are calculating the standard deviation(s) from the sample(s). If you know the true population standard deviation(s) then you can use the ...
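For the question's setting (comparing proportions between two small groups in R), a hedged sketch of the standard calls, with made-up counts:

    prop.test(x = c(8, 3), n = c(20, 18))           # normal-approximation test
    fisher.test(matrix(c(8, 12, 3, 15), nrow = 2))  # exact test; safer for
                                                    # small groups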
45,831
Why is k-fold cross validation a better idea than k-times resampling true validation?
The problem with the second approach is that the training set is smaller (half of the available data) than for the cross-validation approach ((k-1)/k of the available data). As most learning algorithms perform better the more data they are trained on, this means that the second approach gives a more pessimistically bi...
45,832
Why is k-fold cross validation a better idea than k-times resampling true validation?
@Dikran has already provided a detailed analysis. Cross validation helps you with model selection. According to the Hoeffding inequality, the expected out-of-sample error can be bounded using your validation error: $E_{out} \leq E_{val} + O\left(\sqrt{\frac{\ln M}{K}}\right)$, where $M$ is the number of models and $K$ is the number amo...
45,833
How many clusters for linear mixed models and GEE?
Recommendations for the number of groups and units per group are good at the study design phase. At this point in your research, you can only hope to produce decent estimates with the data that you have at hand, and that's probably the literature that you should be studying, and questions that you should be asking. Beh...
45,834
Factor analysis on a "rotating" subset of survey questions
Because the design incorporates planned missingness, the data can be assumed to be missing completely at random and an imputation procedure could be adopted to deal with the missing data. I'd go with this option if you really have no idea how the items should load/number of factors to extract, since missingness tends t...
45,835
Factor analysis on a "rotating" subset of survey questions
EFA is based on the covariance between items. If items are well represented in the dataset, then you should be okay when estimating factor loadings and such. If you have many participants completing a large minority of random items, then your data should be a good representation of the possible item combinations. This wou...
45,836
Factor analysis on a "rotating" subset of survey questions
This approach reminds me a lot of Synthetic Aperture Personality Assessment (SAPA). You may want to read further into it to see what you can learn about the method and judge how closely it resembles what you have in mind. If it's sufficiently equivalent for your purposes, the news appears to be good. This page says: 5...
45,837
How to interpret interaction continuous variables in logistic regression?
When nnd is 0, a unit change in gonad is associated with a $(\exp(-1.5718) - 1)\times 100\% \approx -79\%$ change (a 79% decrease) in the odds of fullyspawned. For every unit increase in nnd, this effect of gonad increases by $(\exp(0.6407) - 1)\times 100\% \approx 90\%$. So, when nnd is 1 the odds ratio for gonad is $1.9^1 \times .21 \app...
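Written as a single expression, the gonad odds ratio as a function of nnd combines the two coefficients:

$$\mathrm{OR}_{\text{gonad}}(\text{nnd}) = \exp(-1.5718 + 0.6407\cdot\text{nnd}), \qquad \text{e.g. } \mathrm{OR}_{\text{gonad}}(1) = \exp(-0.9311) \approx 0.21 \times 1.9 \approx 0.4.$$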
45,838
Error bars using median absolute deviation
It sounds like you're talking about what's sometimes called a regressogram, with a log-scaled x-variable. There are a number of issues here, not necessarily in logical order: (a) the quantity you're plotting is a mean, so if you want to plot median absolute deviation, it's the MAD of the means you want; (b) your suggestion $\...
45,839
Error bars using median absolute deviation
A standard error means something. You don't just take any old statistic and divide by sqrt(n). Why not just plot your MAD and have your error bar a representation of variability in the data? If you want something to represent the quality of your median estimate then just calculate a confidence interval of the median.
45,840
Error bars using median absolute deviation
Whatever you do, plot your raw data or at least make them available somehow. If you choose median absolute deviation (MAD), do make it absolutely clear whether it is of deviations from the mean or the median, as I've seen MAD used as an abbreviation for both, and in any case ambiguity benefits no one. Plotting +...
45,841
How to report the ratio of two sets of experimental results?
I would symmetrize the problem and recognize the matching by working with the logs of the individual ratios, say $z_i = \ln(x_i/y_i)$, getting the limits of a $100(1-\alpha)$% confidence interval for the mean of $z$ the usual way as $\bar{z} \pm t_{9,1-\alpha/2}\,s_z/\sqrt{10}$. (I know it's not strictly justified, but...
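A sketch of that computation, where x and y are the two paired vectors of 10 measurements:

    z  <- log(x / y)                 # individual log-ratios
    ci <- mean(z) + c(-1, 1) * qt(0.975, df = 9) * sd(z) / sqrt(10)
    exp(ci)                          # back-transformed: CI for the ratio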
45,842
How to report the ratio of two sets of experimental results?
You ask a very interesting question. The key problem is, as you state, that the theoretical distribution of both $X$ and $Y$ is unknown. If it was known, however, it might be possible to derive the variance of the ratio and thus find a sample estimate of the standard error. Suppose for a moment that both random variab...
45,843
How to report the ratio of two sets of experimental results?
Why doesn't the Taylor expansion look right? If you want a symmetric statistic, you can try looking at the difference $\bar{x}-\bar{y}$ instead; it is easy enough to work out the variance of $\bar{x}-\bar{y}$ for any distribution of $X, Y$ (not just normal), provided the variances exist (uniform??). This difference sho...
45,844
How to report the ratio of two sets of experimental results?
Given that you have so few data points, it hardly makes sense to invoke all those statistical assumptions. Why not just report the standard statistics: mean of $x_i/y_i$, median of $x_i/y_i$, percentiles, etc.?
45,845
Show that for a Geometric distribution, the probability generating function is given by $\frac{ps}{1-qs}$, $q=1-p$
It's normal that you'd arrive at the wrong answer in this case: the problem is that your index is wrong. There are two definitions for the pmf of a geometric distribution. The one you use, where $E(X)=\frac{1}{p}$, is defined from 1 to infinity; at zero it is not defined. So the generating function needs to take this into a...
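With the support starting at $k=1$, the sum works out directly to the title's result:

$$G(s) = \mathrm{E}[s^X] = \sum_{k=1}^{\infty} p\,q^{k-1} s^k = ps \sum_{k=1}^{\infty} (qs)^{k-1} = \frac{ps}{1-qs}, \qquad |qs| < 1.$$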
45,846
Why does my proof for showing that the Kaplan-Meier estimate is unbiased not work?
The flaw in your argument is that $\hat\Lambda(t)-\Lambda(t)$ is not a martingale for all times $t$. It is only a martingale up to the time $T$ when the experiment ends, that is, when the last survivor either dies or becomes censored (i.e. drops out of the study). After that, $\Lambda(t)$ continues to increase but $\...
Why does my proof for showing that the Kaplan-Meier estimate is unbiased not work?
The flaw in your argument is that $\hat\Lambda(t)-\Lambda(t)$ is not a martingale for all times $t$. It is only a martingale up to the time $T$ when the experiment ends, that is, when the last surviv
Why does my proof for showing that the Kaplan-Meier estimate is unbiased not work? The flaw in your argument is that $\hat\Lambda(t)-\Lambda(t)$ is not a martingale for all times $t$. It is only a martingale up to the time $T$ when the experiment ends, that is, when the last survivor either dies or becomes censored (i...
Why does my proof for showing that the Kaplan-Meier estimate is unbiased not work? The flaw in your argument is that $\hat\Lambda(t)-\Lambda(t)$ is not a martingale for all times $t$. It is only a martingale up to the time $T$ when the experiment ends, that is, when the last surviv
45,847
What do p-values for levels of a categorical variable represent in Poisson regression?
The model coefficients are estimated contrasts based on how R generates contrasts for the factor levels of density. Take a look at this: fit <- glm(events ~ as.factor(density), data = df, family = poisson) model.matrix(fit) To see how these contrasts are estimated, store the GLM as an object in the workspace. T...
What do p-values for levels of a categorical variable represent in Poisson regression?
The model coefficients are estimated contrasts based on how R generates contrasts for the factor levels of density. Take a look at this: fit <- glm(events ~ as.factor(density), data = df, family
What do p-values for levels of a categorical variable represent in Poisson regression? The model coefficients are estimated contrasts based on how R generates contrasts for the factor levels of density. Take a look at this: fit <- glm(events ~ as.factor(density), data = df, family = poisson) model.matrix(fit) T...
What do p-values for levels of a categorical variable represent in Poisson regression? The model coefficients are estimated contrasts based on how R generates contrasts for the factor levels of density. Take a look at this: fit <- glm(events ~ as.factor(density), data = df, family
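A minimal sketch of the point above (simulated data with hypothetical level names, not the asker's data): each reported p-value tests one level's contrast against the reference level, so releveling the factor changes which comparisons appear in the summary.

set.seed(1)
df <- data.frame(density = rep(c("low", "mid", "high"), each = 30))
df$events <- rpois(90, lambda = c(low = 2, mid = 4, high = 8)[df$density])
fit <- glm(events ~ factor(density), data = df, family = poisson)
summary(fit)   # "low" and "mid" each contrasted against "high" (first level alphabetically)
df$density <- relevel(factor(df$density), ref = "low")
summary(glm(events ~ density, data = df, family = poisson))   # now contrasts are against "low"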
45,848
What do p-values for levels of a categorical variable represent in Poisson regression?
R, by default, uses reference cell coding (which I explain here: regression-based-for-example-on-days-of-week). Note that this is called using "treatment contrasts" in R. (Many types of coding schemes are described at UCLA's stats help site.) As @COOLSerdash states, the p-values for your indicated factor levels are...
What do p-values for levels of a categorical variable represent in Poisson regression?
R, by default, uses reference cell coding (which I explain here: regression-based-for-example-on-days-of-week). Note that this is called using "treatment contrasts" in R. (Many types of coding schem
What do p-values for levels of a categorical variable represent in Poisson regression? R, by default, uses reference cell coding (which I explain here: regression-based-for-example-on-days-of-week). Note that this is called using "treatment contrasts" in R. (Many types of coding schemes are described at UCLA's stats ...
What do p-values for levels of a categorical variable represent in Poisson regression? R, by default, uses reference cell coding (which I explain here: regression-based-for-example-on-days-of-week). Note that this is called using "treatment contrasts" in R. (Many types of coding schem
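Because the per-level p-values depend on the chosen reference category, the question "does the factor matter at all?" is better answered by a single test of the whole factor. A sketch with toy data (variable names made up):

set.seed(2)
d <- data.frame(density = factor(rep(1:4, each = 25)))
d$events <- rpois(100, lambda = c(2, 3, 3, 6)[d$density])
fit <- glm(events ~ density, data = d, family = poisson)
drop1(fit, test = "LRT")   # one likelihood-ratio p-value for the factor as a whole
anova(glm(events ~ 1, data = d, family = poisson), fit, test = "Chisq")   # same idea, explicit null model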
45,849
How to interpret and compare models in Cox regression?
Disclaimer: As in the comments, these are not ways to ensure best prediction, but rather the musings of an epidemiologist on model building for survival models trying to elucidate the relationship between an outcome O and an exposure E with a number of covariates: The goal of these is not actually to make the best pred...
How to interpret and compare models in Cox regression?
Disclaimer: As in the comments, these are not ways to ensure best prediction, but rather the musings of an epidemiologist on model building for survival models trying to elucidate the relationship bet
How to interpret and compare models in Cox regression? Disclaimer: As in the comments, these are not ways to ensure best prediction, but rather the musings of an epidemiologist on model building for survival models trying to elucidate the relationship between an outcome O and an exposure E with a number of covariates: ...
How to interpret and compare models in Cox regression? Disclaimer: As in the comments, these are not ways to ensure best prediction, but rather the musings of an epidemiologist on model building for survival models trying to elucidate the relationship bet
45,850
What distribution to use for this QQ plot?
I'll turn my comments into an answer; I can delete this or add more if necessary. Based on your original qq-plot, it appears to me that the tails of your distribution may be too short--at least relative to the normal distribution. (This is based on my interpretation that the data values are on the Y axis "Ordered Val...
What distribution to use for this QQ plot?
I'll turn my comments into an answer; I can delete this or add more if necessary. Based on your original qq-plot, it appears to me that the tails of your distribution may be too short--at least relat
What distribution to use for this QQ plot? I'll turn my comments into an answer; I can delete this or add more if necessary. Based on your original qq-plot, it appears to me that the tails of your distribution may be too short--at least relative to the normal distribution. (This is based on my interpretation that the...
What distribution to use for this QQ plot? I'll turn my comments into an answer; I can delete this or add more if necessary. Based on your original qq-plot, it appears to me that the tails of your distribution may be too short--at least relat
45,851
How to interpret parameter estimates correlated with the intercept parameter estimate?
In simple terms, imagine you fix one parameter, say the intercept, and estimate the slope. The question is: as you vary the fixed parameter, will your slope estimate change? It will to some degree, and the strength/direction of the effect is the correlation of the parameters. Suppose you have a simple linear regressio...
How to interpret parameter estimates correlated with the intercept parameter estimate?
In simple terms, imagine you fix one parameter, say the intercept, and estimate the slope. The question is: as you vary the fixed parameter, will your slope estimate change? It will to some degree, an
How to interpret parameter estimates correlated with the intercept parameter estimate? In simple terms, imagine you fix one parameter, say the intercept, and estimate the slope. The question is: as you vary the fixed parameter, will your slope estimate change? It will to some degree, and the strength/direction of the e...
How to interpret parameter estimates correlated with the intercept parameter estimate? In simple terms, imagine you fix one parameter, say the intercept, and estimate the slope. The question is: as you vary the fixed parameter, will your slope estimate change? It will to some degree, an
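A small sketch of how to see this in practice (simulated data): the correlation comes straight out of the coefficient covariance matrix, and centering the predictor removes it.

set.seed(3)
x <- runif(50, 10, 20)               # x far from 0, so intercept and slope trade off strongly
y <- 2 + 0.5 * x + rnorm(50)
fit <- lm(y ~ x)
cov2cor(vcov(fit))                   # large negative correlation between (Intercept) and x
fit_c <- lm(y ~ I(x - mean(x)))      # centered predictor
cov2cor(vcov(fit_c))                 # off-diagonal is (numerically) zero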
45,852
How to interpret parameter estimates correlated with the intercept parameter estimate?
Just in case, the correlation refers to the estimated parameters, and springs from the fact that they are derived using the same data. It does not imply a correlation between the unknown parameters being estimated, which, being constants (in the frequentist approach), cannot have "correlation". In that sense, it does no...
How to interpret parameter estimates correlated with the intercept parameter estimate?
Just in case, the correlation refers to the estimated parameters, and springs from the fact that they are derived using the same data. It does not imply a correlation between the unknown parameters be
How to interpret parameter estimates correlated with the intercept parameter estimate? Just in case, the correlation refers to the estimated parameters, and springs from the fact that they are derived using the same data. It does not imply a correlation between the unknown parameters being estimated, which, being consta...
How to interpret parameter estimates correlated with the intercept parameter estimate? Just in case, the correlation refers to the estimated parameters, and springs from the fact that they are derived using the same data. It does not imply a correlation between the unknown parameters be
45,853
Difference between fixed effects models in R (plm) and Stata (xtreg)
Welcome to the site, @gwatson! You are right that effect = "twoways" sets up both "individual" and "year" effects. I tested with the Produc data from the R package plm and found that the main results are the same (see the code and output below). The only apparent difference I found is the year effect, which is caused by contrast ...
Difference between fixed effects models in R (plm) and Stata (xtreg)
Welcome to the site, @gwatson! You are right that effect = "twoways" sets up both "individual" and "year" effects. I tested with the Produc data from the R package plm and found that the main results are the same
Difference between fixed effects models in R (plm) and Stata (xtreg) Welcome to the site, @gwatson! You are right that effect = "twoways" sets up both "individual" and "year" effects. I tested with the Produc data from the R package plm and found that the main results are the same (see the code and output below). The only apparen...
Difference between fixed effects models in R (plm) and Stata (xtreg) Welcome to the site, @gwatson! You are right that effect = "twoways" sets up both "individual" and "year" effects. I tested with the Produc data from the R package plm and found that the main results are the same
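A sketch of the comparison described above, using the Produc data shipped with plm (for a balanced panel such as this one, the two fits below should give the same slope estimates):

library(plm)
data("Produc", package = "plm")
fe2 <- plm(log(gsp) ~ log(pc) + log(emp) + unemp, data = Produc,
           index = c("state", "year"), model = "within", effect = "twoways")
fe1 <- plm(log(gsp) ~ log(pc) + log(emp) + unemp + factor(year), data = Produc,
           index = c("state", "year"), model = "within", effect = "individual")
coef(fe2)
coef(fe1)   # same slopes; the explicit year dummies absorb the time effects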
45,854
Why does my ROC curve look like this (is it correct?)
ROC curve 101 An ROC curve visualizes the predictive performance of a classifier for various levels of conservatism (measured by confidence scores). In simple terms, it illustrates the price you pay in terms of false positive rate to increase the true positive rate. The conservatism is controlled via thresholds on conf...
Why does my ROC curve look like this (is it correct?)
ROC curve 101 An ROC curve visualizes the predictive performance of a classifier for various levels of conservatism (measured by confidence scores). In simple terms, it illustrates the price you pay i
Why does my ROC curve look like this (is it correct?) ROC curve 101 An ROC curve visualizes the predictive performance of a classifier for various levels of conservatism (measured by confidence scores). In simple terms, it illustrates the price you pay in terms of false positive rate to increase the true positive rate....
Why does my ROC curve look like this (is it correct?) ROC curve 101 An ROC curve visualizes the predictive performance of a classifier for various levels of conservatism (measured by confidence scores). In simple terms, it illustrates the price you pay i
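A bare-bones sketch of how the curve is traced (simulated scores): sweep a threshold over the confidence scores and record the (FPR, TPR) pair at each level of conservatism.

set.seed(4)
score <- c(rnorm(100, mean = 1), rnorm(100, mean = 0))   # scores for positives, then negatives
label <- rep(c(1, 0), each = 100)
thresholds <- sort(unique(score), decreasing = TRUE)
tpr <- sapply(thresholds, function(t) mean(score[label == 1] >= t))
fpr <- sapply(thresholds, function(t) mean(score[label == 0] >= t))
plot(fpr, tpr, type = "s", xlab = "False positive rate", ylab = "True positive rate")
abline(0, 1, lty = 2)   # the no-skill diagonal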
45,855
Covariates considered moderator or control variables?
A control variable (confounder, potential omitted variable) is a variable you include in the model because you suspect it is confounding the main relationship you are interested in (so it is suspected to be related to both the main independent variable (explanatory variable, predictor, treatment) of interest and to the...
Covariates considered moderator or control variables?
A control variable (confounder, potential omitted variable) is a variable you include in the model because you suspect it is confounding the main relationship you are interested in (so it is suspected
Covariates considered moderator or control variables? A control variable (confounder, potential omitted variable) is a variable you include in the model because you suspect it is confounding the main relationship you are interested in (so it is suspected to be related to both the main independent variable (explanatory ...
Covariates considered moderator or control variables? A control variable (confounder, potential omitted variable) is a variable you include in the model because you suspect it is confounding the main relationship you are interested in (so it is suspected
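In model-formula terms, the distinction is simply additive versus interactive (a schematic sketch with made-up variables x, z and m):

set.seed(5)
z <- rnorm(200); m <- rnorm(200); x <- 0.5 * z + rnorm(200)
y <- 1 + x + 0.5 * z + 0.8 * x * m + rnorm(200)
ctrl <- lm(y ~ x + z)        # z as a control: the x effect is adjusted for z
mod  <- lm(y ~ x * m + z)    # m as a moderator: the x effect varies with m
summary(mod)$coefficients    # the x:m row is the moderation (interaction) term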
45,856
Derive percentiles from binned data
It is quite easy actually. Let's say the sum of the counts is N, and that you want the bottom 0.3 (30%) percentile. This means the threshold value will occur after 0.3*N counts. Now you look at the cumulative distribution, and when it reaches 0.3*N, you have the value. It is very easy to implement. For example, you...
Derive percentiles from binned data
It is quite easy actually. Let's say the sum of the counts is N, and that you want the bottom 0.3 (30%) percentile. This means the threshold value will occur after 0.3*N counts. Now you look at th
Derive percentiles from binned data It is quite easy actually. Let's say the sum of the counts is N, and that you want the bottom 0.3 (30%) percentile. This means the threshold value will occur after 0.3*N counts. Now you look at the cumulative distribution, and when it reaches 0.3*N, you have the value. It is very...
Derive percentiles from binned data It is quite easy actually. Let's say the sum of the counts is N, and that you want the bottom 0.3 (30%) percentile. This means the threshold value will occur after 0.3*N counts. Now you look at th
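A sketch of the cumulative-count method just described, in R, with linear interpolation inside the bin that crosses p*N (bin edges and counts are made up):

binned_percentile <- function(breaks, counts, p) {
  # breaks: sorted bin edges, length = length(counts) + 1
  stopifnot(length(breaks) == length(counts) + 1, p >= 0, p <= 1)
  target <- p * sum(counts)
  cum <- cumsum(counts)
  i <- which(cum >= target)[1]                 # first bin whose cumulative count reaches the target
  below <- if (i > 1) cum[i - 1] else 0
  frac <- (target - below) / counts[i]         # fractional position inside bin i
  breaks[i] + frac * (breaks[i + 1] - breaks[i])
}
binned_percentile(breaks = 0:10, counts = c(5, 9, 12, 20, 18, 14, 10, 6, 4, 2), p = 0.3)   # 3.2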
45,857
Derive percentiles from binned data
PercentileBinnedData is an implementation of the algorithm described above that I have developed; QuickSort ensures that your binned data is sorted in increasing order. Function PercentileBinnedData(rng As Range, percentile As Double) As Double Dim v As Variant v = rng.Value QuickSortArray v, , , 1 ' sor...
Derive percentiles from binned data
PercentileBinnedData is an implementation of the algorithm described above that I have developed; QuickSort ensures that your binned data is sorted in increasing order. Function PercentileBinnedDat
Derive percentiles from binned data PercentileBinnedData is an implementation of the algorithm described above that I have developed; QuickSort ensures that your binned data is sorted in increasing order. Function PercentileBinnedData(rng As Range, percentile As Double) As Double Dim v As Variant v = rng.Val...
Derive percentiles from binned data PercentileBinnedData is an implementation of the algorithm described above that I have developed; QuickSort ensures that your binned data is sorted in increasing order. Function PercentileBinnedDat
45,858
Using a gamm4 model to predict estimates in new data
I'm not sure what you want here. Have you looked at ?predict.gam # Load the gamm4 package library(gamm4) # Using gamm4's built-in data simulation capabilities to give us some data: set.seed(100) dat <- gamSim(6, n=100, scale=2) # Fitting a model and plotting it: mod <- gamm4(y~s(x0)+s(x1)+s(x2), data=dat, random = ~...
Using a gamm4 model to predict estimates in new data
I'm not sure what you want here. Have you looked at ?predict.gam # Load the gamm4 package library(gamm4) # Using gamm4's built-in data simulation capabilities to give us some data: set.seed(100) dat
Using a gamm4 model to predict estimates in new data I'm not sure what you want here. Have you looked at ?predict.gam # Load the gamm4 package library(gamm4) # Using gamm4's built-in data simulation capabilities to give us some data: set.seed(100) dat <- gamSim(6, n=100, scale=2) # Fitting a model and plotting it: m...
Using a gamm4 model to predict estimates in new data I'm not sure what you want here. Have you looked at ?predict.gam # Load the gamm4 package library(gamm4) # Using gamm4's built-in data simulation capabilities to give us some data: set.seed(100) dat
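To then predict on new data, a self-contained sketch along the same lines (assuming the grouping factor fac produced by gamSim(6); gamm4 returns a list whose $gam part carries the smooths, while the random effects live in $mer and are not used for population-level prediction):

library(gamm4)
set.seed(100)
dat <- gamSim(6, n = 100, scale = 2)
mod <- gamm4(y ~ s(x0) + s(x1) + s(x2), data = dat, random = ~ (1 | fac))
newdat <- data.frame(x0 = seq(0, 1, length.out = 20), x1 = 0.5, x2 = 0.5)
pr <- predict(mod$gam, newdata = newdat, se.fit = TRUE)   # see ?predict.gam
cbind(fit = pr$fit, se = pr$se.fit)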
45,859
Is there any intuitive meaning to the quantity P(A|B)P(B|C)?
There are several good visual and physical metaphors to help the intuition. I offer one of each. Conditional probabilities of the form $\Pr(A|C)$ can be represented on graphs where the events $C$ and $A$ are nodes, a directed edge connects $C$ to $A$, and the edge is weighted by this probability. This graphical metap...
Is there any intuitive meaning to the quantity P(A|B)P(B|C)?
There are several good visual and physical metaphors to help the intuition. I offer one of each. Conditional probabilities of the form $\Pr(A|C)$ can be represented on graphs where the events $C$ and
Is there any intuitive meaning to the quantity P(A|B)P(B|C)? There are several good visual and physical metaphors to help the intuition. I offer one of each. Conditional probabilities of the form $\Pr(A|C)$ can be represented on graphs where the events $C$ and $A$ are nodes, a directed edge connects $C$ to $A$, and th...
Is there any intuitive meaning to the quantity P(A|B)P(B|C)? There are several good visual and physical metaphors to help the intuition. I offer one of each. Conditional probabilities of the form $\Pr(A|C)$ can be represented on graphs where the events $C$ and
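One concrete setting where this product has a standing meaning (a sketch, assuming the Markov property $\Pr(A \mid B_i, C) = \Pr(A \mid B_i)$ over a partition $\{B_i\}$, as in the chain $C \to B \to A$): the law of total probability then reads $$\Pr(A \mid C) = \sum_i \Pr(A \mid B_i)\,\Pr(B_i \mid C),$$ so each term $\Pr(A \mid B_i)\Pr(B_i \mid C)$ is the probability of reaching $A$ from $C$ along the particular route through $B_i$.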
45,860
F test and t test in linear regression model
The misunderstanding is your first premise "F test and $t$-test are performed between two populations"; this is incorrect, or at least incomplete. The $t$-test that is next to a coefficient tests the null hypothesis that that coefficient equals 0. If the corresponding variable is binary, for example 0 = male, 1 = female...
F test and t test in linear regression model
The misunderstanding is your first premise "F test and $t$-test are performed between two populations"; this is incorrect, or at least incomplete. The $t$-test that is next to a coefficient tests the n
F test and t test in linear regression model The misunderstanding is your first premise "F test and $t$-test are performed between two populations"; this is incorrect, or at least incomplete. The $t$-test that is next to a coefficient tests the null hypothesis that that coefficient equals 0. If the corresponding variabl...
F test and t test in linear regression model The misunderstanding is your first premise "F test and $t$-test are performed between two populations"; this is incorrect, or at least incomplete. The $t$-test that is next to a coefficient tests the n
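A sketch of the binary case mentioned above (simulated data): the coefficient $t$-test from lm reproduces the classical pooled two-sample t-test.

set.seed(6)
g <- rep(0:1, each = 40)                  # e.g. 0 = male, 1 = female
y <- 5 + 1.2 * g + rnorm(80)
summary(lm(y ~ g))$coefficients["g", ]    # t value and p-value for the slope
t.test(y ~ g, var.equal = TRUE)           # same t (up to sign) and the same p-value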
45,861
F test and t test in linear regression model
Some notation to begin with: I'm using $z \sim N(0,1)$, $u \sim \chi^2_p$, $v \sim \chi^2_q$, with $z$, $u$ and $v$ mutually independent (an important condition), and $t = z/\sqrt{u/p}$. For each coefficient $\beta_j$, if you test $H_0: \beta_j = 0$, then the standardized $\hat\beta_j$ is basically $z$, and the sample variance satisfies $(n-2)S^2 \sim \chi^2_{n-2}$, so you also have your bottom par...
F test and t test in linear regression model
Some notation to begin with: I'm using $z \sim N(0,1)$, $u \sim \chi^2_p$, $v \sim \chi^2_q$, with $z$, $u$ and $v$ mutually independent (an important condition), and $t = z/\sqrt{u/p}$. For each coefficient $\beta_j$, if you test
F test and t test in linear regression model Some notation to begin with: I'm using $z \sim N(0,1)$, $u \sim \chi^2_p$, $v \sim \chi^2_q$, with $z$, $u$ and $v$ mutually independent (an important condition), and $t = z/\sqrt{u/p}$. For each coefficient $\beta_j$, if you test $H_0: \beta_j = 0$, then the standardized $\hat\beta_j$ is basically $z$, and the sample variance satisfies $(n-2)S^...
F test and t test in linear regression model Some notation to begin with: I'm using $z \sim N(0,1)$, $u \sim \chi^2_p$, $v \sim \chi^2_q$, with $z$, $u$ and $v$ mutually independent (an important condition), and $t = z/\sqrt{u/p}$. For each coefficient $\beta_j$, if you test
45,862
Are latent variable models modelling causality?
Not necessarily. Or, perhaps a better answer is, "it depends on what you mean by causality". It is presumed, in the latent factor model, that the individual items measure the latent factor; that's really the point. So, on the MMPI, e.g., the different questions are supposed to be measuring aspects of personality, and ...
Are latent variable models modelling causality?
Not necessarily. Or, perhaps a better answer is, "it depends on what you mean by causality". It is presumed, in the latent factor model, that the individual items measure the latent factor; that's re
Are latent variable models modelling causality? Not necessarily. Or, perhaps a better answer is, "it depends on what you mean by causality". It is presumed, in the latent factor model, that the individual items measure the latent factor; that's really the point. So, on the MMPI, e.g., the different questions are suppo...
Are latent variable models modelling causality? Not necessarily. Or, perhaps a better answer is, "it depends on what you mean by causality". It is presumed, in the latent factor model, that the individual items measure the latent factor; that's re
45,863
What is a "posterior median"?
Well, you have posterior means, posterior modes, posterior standard deviations... any functional you can calculate for a probability density can be calculated for a posterior density.
What is a "posterior median"?
Well, you have posterior means, posterior modes, posterior standard deviations... any functional you can calculate for a probability density can be calculated for a posterior density.
What is a "posterior median"? Well, you have posterior means, posterior modes, posterior standard deviations... any functional you can calculate for a probability density can be calculated for a posterior density.
What is a "posterior median"? Well, you have posterior means, posterior modes, posterior standard deviations... any functional you can calculate for a probability density can be calculated for a posterior density.
45,864
Probability that a random number will be the largest from sets
When you pick randomly and independently from three random variables $A$, $B$, and $C$, having (cumulative) distribution functions $F_A$, $F_B$, and $F_C$ and corresponding density functions $f_A$, $f_B$, and $f_C$, then by definition of independence the chance that all three numbers are less than some value $x$ equals $$\...
Probability that a random number will be the largest from sets
When you pick randomly and independently from three random variables $A$, $B$, and $C$, having (cumulative) distribution functions $F_A$, $F_B$, and $F_C$ and corresponding density functions $f_A$, $f_B$,
Probability that a random number will be the largest from sets When you pick randomly and independently from three random variables $A$, $B$, and $C$, having (cumulative) distribution functions $F_A$, $F_B$, and $F_C$ and corresponding density functions $f_A$, $f_B$, and $f_C$, then by definition of independence the chance...
Probability that a random number will be the largest from sets When you pick randomly and independently from three random variables $A$, $B$, and $C$, having (cumulative) distribution functions $F_A$, $F_B$, and $F_C$ and corresponding density functions $f_A$, $f_B$,
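A numerical sketch of the formula being set up above: with independent $A$, $B$, $C$, $\Pr(A \text{ is largest}) = \int f_A(x) F_B(x) F_C(x)\,dx$, which can be checked by simulation (the distributions here are arbitrary choices).

set.seed(7)
a <- rnorm(1e5, mean = 1); b <- rnorm(1e5); cc <- rnorm(1e5, sd = 2)
mean(a > pmax(b, cc))                                  # Monte Carlo estimate
integrand <- function(x) dnorm(x, 1) * pnorm(x) * pnorm(x, 0, 2)
integrate(integrand, -Inf, Inf)$value                  # should agree closely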
45,865
Relation between observed power and p-value?
Answers to question 1,2,3,4 ($Z$-test) The decreasing link between the $p$-value and the observed power is intuitively highly expected: the $p$-value $p^{\text{obs}}$ is low when the observed sample mean $\bar y^{\text{obs}}$ is high ($H_1$ favoured), and since $\bar y^{\text{obs}} = \hat\mu$ the observed power is high...
Relation between observed power and p-value?
Answers to question 1,2,3,4 ($Z$-test) The decreasing link between the $p$-value and the observed power is intuitively highly expected: the $p$-value $p^{\text{obs}}$ is low when the observed sample m
Relation between observed power and p-value? Answers to question 1,2,3,4 ($Z$-test) The decreasing link between the $p$-value and the observed power is intuitively highly expected: the $p$-value $p^{\text{obs}}$ is low when the observed sample mean $\bar y^{\text{obs}}$ is high ($H_1$ favoured), and since $\bar y^{\tex...
Relation between observed power and p-value? Answers to question 1,2,3,4 ($Z$-test) The decreasing link between the $p$-value and the observed power is intuitively highly expected: the $p$-value $p^{\text{obs}}$ is low when the observed sample m
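For the one-sided $Z$-test this link can be written in closed form (a sketch; $z_{1-\alpha}$ denotes the critical value): since $z^{\text{obs}} = \Phi^{-1}(1-p^{\text{obs}})$, the observed power is $$\text{power}^{\text{obs}} = \Phi\!\left(z^{\text{obs}} - z_{1-\alpha}\right) = \Phi\!\left(\Phi^{-1}\!\big(1-p^{\text{obs}}\big) - z_{1-\alpha}\right),$$ a strictly decreasing function of $p^{\text{obs}}$, which is why the observed power carries no information beyond the $p$-value.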
45,866
Propagation of uncertainty through a linear system of equations
Let me translate into statistician. So $B$ is a random variable where $B = \beta + \varepsilon$, with $\text{Var}(\varepsilon)$ = $\Sigma_B$, for $\Sigma_B$ known. An observation is taken, and the observed value of $B$ is $b$. Assuming $A$ is invertible, the solution of $Ax = b$ is $A^{-1}b$. Let $C = A^{-1}$ for the m...
Propagation of uncertainty through a linear system of equations
Let me translate into statistician. So $B$ is a random variable where $B = \beta + \varepsilon$, with $\text{Var}(\varepsilon)$ = $\Sigma_B$, for $\Sigma_B$ known. An observation is taken, and the obs
Propagation of uncertainty through a linear system of equations Let me translate into statistician. So $B$ is a random variable where $B = \beta + \varepsilon$, with $\text{Var}(\varepsilon)$ = $\Sigma_B$, for $\Sigma_B$ known. An observation is taken, and the observed value of $B$ is $b$. Assuming $A$ is invertible, t...
Propagation of uncertainty through a linear system of equations Let me translate into statistician. So $B$ is a random variable where $B = \beta + \varepsilon$, with $\text{Var}(\varepsilon)$ = $\Sigma_B$, for $\Sigma_B$ known. An observation is taken, and the obs
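A sketch of that propagation rule in R (numbers are made up; $\Sigma_B$ assumed known): with $x = A^{-1}B$ and $C = A^{-1}$, $\operatorname{Var}(x) = C\,\Sigma_B\,C'$.

A <- matrix(c(2, 1, 0.5, 3), nrow = 2)
Sigma_B <- diag(c(0.04, 0.09))       # assumed measurement covariance of B
b <- c(1, 2)                         # observed value of B
C <- solve(A)
x_hat <- C %*% b                     # point solution of A x = b
Sigma_x <- C %*% Sigma_B %*% t(C)    # propagated covariance of the solution
Sigma_x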
45,867
Bayesian posterior: mean vs highest probability
I think the frequentist analogues are estimating equations for the posterior mean and maximum likelihood for the posterior mode. They are not equivalent by any means, but they have some important similarities. When you estimate a posterior mode, you're doing Bayesian "maximum likelihood". The posterior mode is not often pref...
Bayesian posterior: mean vs highest probability
I think the frequentist analogues are estimating equations for the posterior mean and maximum likelihood for the posterior mode. They are not equivalent by any means, but they have some important similaritie
Bayesian posterior: mean vs highest probability I think the frequentist analogues are estimating equations for the posterior mean and maximum likelihood for the posterior mode. They are not equivalent by any means, but they have some important similarities. When you estimate a posterior mode, you're doing Bayesian "maximum li...
Bayesian posterior: mean vs highest probability I think the frequentist analogues are estimating equations for the posterior mean and maximum likelihood for the posterior mode. They are not equivalent by any means, but they have some important similaritie
45,868
Bayesian posterior: mean vs highest probability
Both are used (along with the median). Which is "best" depends on the context of how you are going to use it. Generally to a Bayesian the whole posterior distribution is interesting, not just one single number from it. Also interesting are credible intervals, but again you have choices: do you want the Highest Po...
Bayesian posterior: mean vs highest probability
Both are used (along with the median). Which is "best" depends on the context of how you are going to use it. Generally to a Bayesian the whole posterior distribution is interesting, not just one si
Bayesian posterior: mean vs highest probability Both are used (along with the median). Which is "best" depends on the context of how you are going to use it. Generally to a Bayesian the whole posterior distribution is interesting, not just one single number from it. Also interesting are credible intervals, but ag...
Bayesian posterior: mean vs highest probability Both are used (along with the median). Which is "best" depends on the context of how you are going to use it. Generally to a Bayesian the whole posterior distribution is interesting, not just one si
45,869
Bayesian posterior: mean vs highest probability
It's not always the case that the mean is more relevant than the mode. That is part of the value of representing the full distribution within the Bayesian approach: if you have the full distribution, you can extract whatever statistical information is required. The average of the distribution will often be useful for ...
Bayesian posterior: mean vs highest probability
It's not always the case that the mean is more relevant than the mode. That is part of the value of representing the full distribution within the Bayesian approach: if you have the full distribution
Bayesian posterior: mean vs highest probability It's not always the case that the mean is more relevant than the mode. That is part of the value of representing the full distribution within the Bayesian approach: if you have the full distribution, you can extract whatever statistical information is required. The averag...
Bayesian posterior: mean vs highest probability It's not always the case that the mean is more relevant than the mode. That is part of the value of representing the full distribution within the Bayesian approach: if you have the full distribution
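A sketch of pulling several such summaries from one set of posterior draws (stand-in draws from a skewed distribution, as an MCMC output might be):

set.seed(8)
draws <- rgamma(1e5, shape = 2, rate = 1)
mean(draws)                                # posterior mean
median(draws)                              # posterior median
d <- density(draws); d$x[which.max(d$y)]   # approximate posterior mode
quantile(draws, c(0.025, 0.975))           # a central 95% credible interval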
45,870
'Stationarity requirement' why?
First let's decide what form of stationarity you are asking about. There are two types: (1) strict stationarity: all aspects of a time series' behavior are not dependent on time, i.e. for every $m$ and $n$ the joint distributions of $\newcommand{\Cov}{\operatorname{Cov}}(Z_t, Z_{t+1}, \dots, Z_{t+n})$ and $(Z_{t+m}, Z_{t+m+1}, \dots, Z_{t+m+n})$ are the same. (2) weak statio...
'Stationarity requirement' why?
First let's decide what form of stationarity you are asking about. There are two types: (1) strict stationarity: all aspects of a time series' behavior are not dependent on time, i.e. for every $m$ and
'Stationarity requirement' why? First let's decide what form of stationarity you are asking about. There are two types: (1) strict stationarity: all aspects of a time series' behavior are not dependent on time, i.e. for every $m$ and $n$ the joint distributions of $\newcommand{\Cov}{\operatorname{Cov}}(Z_t, Z_{t+1}, \dots, Z_{t+n})$ and $(Z_{t+m}, \dots, Z_{t+m+n...
'Stationarity requirement' why? First let's decide what form of stationarity you are asking about. There are two types: (1) strict stationarity: all aspects of a time series' behavior are not dependent on time, i.e. for every $m$ and
45,871
'Stationarity requirement' why?
I am new to time series analysis (TSA), but my basic understanding is that the main aim of TSA is forecasting. A time series is made up of several components: trend, seasonal, cyclical, and irregular. Once the series is transformed or 'stationarized', its statistical properties (mean, variance) are easily forecasted as th...
'Stationarity requirement' why?
I am new to time series analysis (TSA), but my basic understanding is that the main aim of TSA is forecasting. A time series is made up of several components: trend, seasonal, cyclical, and irregular.
'Stationarity requirement' why? I am new to time series analysis (TSA), but my basic understanding is that the main aim of TSA is forecasting. A time series is made up of several components: trend, seasonal, cyclical, and irregular. Once the series is transformed or 'stationarized', its statistical properties (mean, varia...
'Stationarity requirement' why? I am new to time series analysis (TSA), but my basic understanding is that the main aim of TSA is forecasting. A time series is made up of several components: trend, seasonal, cyclical, and irregular.
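A sketch of that 'stationarizing' idea (simulated data): a random walk is not stationary, but its first difference is.

set.seed(9)
x <- cumsum(rnorm(300))           # random walk: level wanders, variance grows with time
op <- par(mfrow = c(2, 2))
plot.ts(x);       acf(x)          # slowly decaying ACF flags nonstationarity
plot.ts(diff(x)); acf(diff(x))    # the differenced series behaves like white noise
par(op)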
45,872
Is a larger beta weight a better predictor than a high t-statistic?
It would help if you could answer @whuber's question. However, in general, this question cannot be answered. The reason is that variables are almost always on incommensurate scales. Consider the typical study that involves human subjects (as in medical research or the social and behavioral sciences), what covariate...
Is a larger beta weight a better predictor than a high t-statistic?
It would help if you could answer @whuber's question. However, in general, this question cannot be answered. The reason is that variables are almost always on incommensurate scales. Consider the t
Is a larger beta weight a better predictor than a high t-statistic? It would help if you could answer @whuber's question. However, in general, this question cannot be answered. The reason is that variables are almost always on incommensurate scales. Consider the typical study that involves human subjects (as in med...
Is a larger beta weight a better predictor than a high t-statistic? It would help if you could answer @whuber's question. However, in general, this question cannot be answered. The reason is that variables are almost always on incommensurate scales. Consider the t
45,873
Is a larger beta weight a better predictor than a high t-statistic?
$\beta$ and the t-stat convey very different pieces of information about a variable's effect in a statistical model, and they are not interchangeable: $\beta$ (the standardized effect size) is far more important. It tells you how strong or meaningful the effect is. B (the unstandardized effect size) is usually easier to interpret and so is p...
Is a larger beta weight a better predictor than a high t-statistic?
$\beta$ and the t-stat convey very different pieces of information about a variable's effect in a statistical model, and they are not interchangeable: $\beta$ (the standardized effect size) is far more important. It
Is a larger beta weight a better predictor than a high t-statistic? $\beta$ and the t-stat convey very different pieces of information about a variable's effect in a statistical model, and they are not interchangeable: $\beta$ (the standardized effect size) is far more important. It tells you how strong or meaningful the effect is. B (the un...
Is a larger beta weight a better predictor than a high t-statistic? $\beta$ and the t-stat convey very different pieces of information about a variable's effect in a statistical model, and they are not interchangeable: $\beta$ (the standardized effect size) is far more important. It
45,874
Is a larger beta weight a better predictor than a high t-statistic?
You're getting at two distinct concepts: Statistical significance Material significance, some notion of practical significance. In economics, people call number (2) economic significance. In biology, I imagine they call it biological significance. Statistical significance can be seen from the p-value (assuming your s...
Is a larger beta weight a better predictor than a high t-statistic?
You're getting at two distinct concepts: Statistical significance Material significance, some notion of practical significance. In economics, people call number (2) economic significance. In biology
Is a larger beta weight a better predictor than a high t-statistic? You're getting at two distinct concepts: Statistical significance Material significance, some notion of practical significance. In economics, people call number (2) economic significance. In biology, I imagine they call it biological significance. St...
Is a larger beta weight a better predictor than a high t-statistic? You're getting at two distinct concepts: Statistical significance Material significance, some notion of practical significance. In economics, people call number (2) economic significance. In biology
45,875
Is a larger beta weight a better predictor than a high t-statistic?
The beta weight by itself does not say much. You should also consider its standard error; therefore the t statistic is better for understanding which variable (a or b) has a greater effect.
Is a larger beta weight a better predictor than a high t-statistic?
The beta weight by itself does not say much. You should also consider its standard error; therefore the t statistic is better for understanding which variable (a or b) has a greater effect.
Is a larger beta weight a better predictor than a high t-statistic? The beta weight by itself does not say much. You should also consider its standard error; therefore the t statistic is better for understanding which variable (a or b) has a greater effect.
Is a larger beta weight a better predictor than a high t-statistic? The beta weight by itself does not say much. You should also consider its standard error; therefore the t statistic is better for understanding which variable (a or b) has a greater effect.
45,876
Using Duan Smear factor on a two-part model
The OLS model is the model of expected cost given that there is a non-zero cost. Therefore, by the conditionality principle, you simply don't use the data that have zero observed cost when fitting that part of the model. Reading between the lines, I'm guessing your real issue is how to calculate the Duan factors. The Duan fa...
Using Duan Smear factor on a two-part model
The OLS model is the model of expected cost given that there is a non-zero cost. Therefore, by the conditionality principle, you simply don't use the data that have zero observed cost when fitting that part
Using Duan Smear factor on a two-part model The OLS model is the model of expected cost given that there is a non-zero cost. Therefore, by the conditionality principle, you simply don't use the data that have zero observed cost when fitting that part of the model. Reading between the lines, I'm guessing your real issue is how...
Using Duan Smear factor on a two-part model The OLS model is the model of expected cost given that there is a non-zero cost. Therefore, by the conditionality principle, you simply don't use the data that have zero observed cost when fitting that part
45,877
Using Duan Smear factor on a two-part model
I have not seen the Duan smearing correction used in this way, but at first blush, it seems sensible. Here's the intuition. The general re-transformation problem we have is that we want to get \begin{equation}E[y_i \vert x_i]=\exp (x_i'\beta) \cdot E[\exp (u_i)].\end{equation} For the first term, you can use the expone...
Using Duan Smear factor on a two-part model
I have not seen the Duan smearing correction used in this way, but at first blush, it seems sensible. Here's the intuition. The general re-transformation problem we have is that we want to get \begin{
Using Duan Smear factor on a two-part model I have not seen the Duan smearing correction used in this way, but at first blush, it seems sensible. Here's the intuition. The general re-transformation problem we have is that we want to get \begin{equation}E[y_i \vert x_i]=\exp (x_i'\beta) \cdot E[\exp (u_i)].\end{equation...
Using Duan Smear factor on a two-part model I have not seen the Duan smearing correction used in this way, but at first blush, it seems sensible. Here's the intuition. The general re-transformation problem we have is that we want to get \begin{
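A sketch of the smearing factor itself for the continuous part (simulated positive costs; in the two-part model the result would then be multiplied by the predicted probability of non-zero cost from the first part):

set.seed(10)
x <- runif(300)
cost <- exp(1 + 2 * x + rnorm(300, sd = 0.5))   # non-zero costs only
fit <- lm(log(cost) ~ x)
smear <- mean(exp(resid(fit)))                  # Duan's estimate of E[exp(u)]
yhat <- exp(predict(fit)) * smear               # re-transformed predictions of E[cost | x]
head(yhat)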
45,878
Methods for evaluating predictive models in two-outcome systems
The following is taken from a related answer I gave here: Suppose your model does indeed predict A has a 40% chance and B has a 60% chance. In some circumstances you might wish to convert this into a classification that B will happen (since it is more likely than A). Once converted into a classification, every predi...
Methods for evaluating predictive models in two-outcome systems
The following is taken from a related answer I gave here: Suppose your model does indeed predict A has a 40% chance and B has a 60% chance. In some circumstances you might wish to convert this into
Methods for evaluating predictive models in two-outcome systems The following is taken from a related answer I gave here: Suppose your model does indeed predict A has a 40% chance and B has a 60% chance. In some circumstances you might wish to convert this into a classification that B will happen (since it is more li...
Methods for evaluating predictive models in two-outcome systems The following is taken from a related answer I gave here: Suppose your model does indeed predict A has a 40% chance and B has a 60% chance. In some circumstances you might wish to convert this into
45,879
Methods for evaluating predictive models in two-outcome systems
Predictive accuracy and AUC are quite limited in certain aspects. Try the Bayesian Information Reward (BIR), which addresses all your bullet points. The intuition of BIR is as follows: a bettor is rewarded not just for identifying the ultimate winners and losers (0's and 1's), but more importantly for identifying the a...
Methods for evaluating predictive models in two-outcome systems
Predictive accuracy and AUC are quite limited in certain aspects. Try the Bayesian Information Reward (BIR), which addresses all your bullet points. The intuition of BIR is as follows: a bettor is rew
Methods for evaluating predictive models in two-outcome systems Predictive accuracy and AUC are quite limited in certain aspects. Try the Bayesian Information Reward (BIR), which addresses all your bullet points. The intuition of BIR is as follows: a bettor is rewarded not just for identifying the ultimate winners and ...
Methods for evaluating predictive models in two-outcome systems Predictive accuracy and AUC are quite limited in certain aspects. Try the Bayesian Information Reward (BIR), which addresses all your bullet points. The intuition of BIR is as follows: a bettor is rew
45,880
Methods for evaluating predictive models in two-outcome systems
The idea of concordance probability ($c$-index; ROC area) is a special case of a $U$-statistic. A $U$-statistic is available for testing what you want by asking the following question: How much more concordant with the outcome is model 1 than model 2? "More concordant" can be taken to mean that in a pair of observati...
Methods for evaluating predictive models in two-outcome systems
The idea of concordance probability ($c$-index; ROC area) is a special case of a $U$-statistic. A $U$-statistic is available for testing what you want by asking the following question: How much more
Methods for evaluating predictive models in two-outcome systems The idea of concordance probability ($c$-index; ROC area) is a special case of a $U$-statistic. A $U$-statistic is available for testing what you want by asking the following question: How much more concordant with the outcome is model 1 than model 2? "M...
Methods for evaluating predictive models in two-outcome systems The idea of concordance probability ($c$-index; ROC area) is a special case of a $U$-statistic. A $U$-statistic is available for testing what you want by asking the following question: How much more
45,881
For "Was this page helpful" data, should I take response rate into account?
In answer to your question, you should take response rate into account, as it gives you extra information. The point is that if there are pages with low response rates and low ratings, then it is possible that many people were looking for different information, rather than judging the quality of the page, and that ...
For "Was this page helpful" data, should I take response rate into account?
In answer to your question, you should take response rate into account, as it gives you extra information. The point is that if there are pages with low response rates and low ratings, then it is
For "Was this page helpful" data, should I take response rate into account? In answer to your question, you should take response rate into account, as it gives you extra information. The point is that if there are pages with low response rates and low ratings, then it is possible that many people were looking for d...
For "Was this page helpful" data, should I take response rate into account? In answer to your question, you should take response rate into account, as it gives you extra information. The point is that if there are pages with low response rates and low ratings, then it is
45,882
For "Was this page helpful" data, should I take response rate into account?
Requests of this type often get answers from two groups: The extremely pleased and extremely displeased, while people who are sort of generally satisfied don't bother. Although I don't know of research into page ratings specifically, I've seen this with response cards in other situations (e.g. rating of hotel service).
For "Was this page helpful" data, should I take response rate into account?
Requests of this type often get answers from two groups: The extremely pleased and extremely displeased, while people who are sort of generally satisfied don't bother. Although I don't know of researc
For "Was this page helpful" data, should I take response rate into account? Requests of this type often get answers from two groups: The extremely pleased and extremely displeased, while people who are sort of generally satisfied don't bother. Although I don't know of research into page ratings specifically, I've seen ...
For "Was this page helpful" data, should I take response rate into account? Requests of this type often get answers from two groups: The extremely pleased and extremely displeased, while people who are sort of generally satisfied don't bother. Although I don't know of researc
45,883
Required conditions for using a t-test
First, you have to understand why there are two tests for the same quantity. Let's say you have a sample $x_1, \dots, x_n$, drawn from an unknown distribution, and you want to test if the mean of the distribution is zero or not. So you compute the sample mean $\overline x = {1\over n} \sum_{i=1}^n x_i$. And you compute t...
Required conditions for using a t-test
First, you have to understand why there are two tests for the same quantity. Let's say you have a sample $x_1, \dots, x_n$, drawn from an unknown distribution, and you want to test if the mean of the di
Required conditions for using a t-test First, you have to understand why there are two tests for the same quantity. Let's say you have a sample $x_1, \dots, x_n$, drawn from an unknown distribution, and you want to test if the mean of the distribution is zero or not. So you compute the sample mean $\overline x = {1\over ...
Required conditions for using a t-test First, you have to understand why there are two tests for the same quantity. Let's say you have a sample $x_1, \dots, x_n$, drawn from an unknown distribution, and you want to test if the mean of the di
45,884
Required conditions for using a t-test
You can actually use the t-test if you like -- it's just more conservative. As your sample size grows larger, the Central Limit Theorem says that the distribution of your mean approaches a normal distribution, regardless of the underlying population distribution. Therefore, you can use the Z-test, since that compares...
Required conditions for using a t-test
You can actually use the t-test if you like -- it's just more conservative. As your sample size grows larger, the Central Limit Theorem says that the distribution of your mean approaches a normal dis
Required conditions for using a t-test You can actually use the t-test if you like -- it's just more conservative. As your sample size grows larger, the Central Limit Theorem says that the distribution of your mean approaches a normal distribution, regardless of the underlying population distribution. Therefore, you ...
Required conditions for using a t-test You can actually use the t-test if you like -- it's just more conservative. As your sample size grows larger, the Central Limit Theorem says that the distribution of your mean approaches a normal dis
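A quick sketch of the "more conservative" point: the t critical values exceed the normal one but converge to it as n grows.

n <- c(5, 10, 30, 100, 1000)
cbind(n, t_crit = qt(0.975, df = n - 1), z_crit = qnorm(0.975))
# t_crit shrinks toward 1.96, so for large n the two tests virtually agree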
45,885
Required conditions for using a t-test
I believe the reason for the third rule lies in the need to adhere to the CLT, and therefore to be nearly normal. The CLT states that a sampling distribution is approximately normal for a large enough sample, regardless of the distribution of the population, as long as the sampled individuals are independent. This 10% rule is t...
Required conditions for using a t-test
I believe the reason for the third rule lies in the need to adhere to the CLT, and therefore to be nearly normal. The CLT states that a sampling distribution is approximately normal for a large enough sample,
Required conditions for using a t-test I believe the reason for the third rule lies in the need to adhere to the CLT, and therefore to be nearly normal. The CLT states that a sampling distribution is approximately normal for a large enough sample, regardless of the distribution of the population, as long as the sampled individu...
Required conditions for using a t-test I believe the reason for the third rule lies in the need to adhere to the CLT, and therefore to be nearly normal. The CLT states that a sampling distribution is approximately normal for a large enough sample,
45,886
Required conditions for using a t-test
I don't see the necessity for the comparison. The t-test and Z-test, I believe, operate under different conditions: both are parametric, but the Z-test assumes a known population variance, while the t-test estimates it from the sample. Please correct me if my assumption is wrong.
Required conditions for using a t-test
I don't see the necessity for the comparison. The t-test and Z-test, I believe, operate under different conditions: both are parametric, but the Z-test assumes a known population variance, while the t-test estimates it from the sample. Ple
Required conditions for using a t-test I don't see the necessity for the comparison. The t-test and Z-test, I believe, operate under different conditions: both are parametric, but the Z-test assumes a known population variance, while the t-test estimates it from the sample. Please correct me if my assumption is wrong.
Required conditions for using a t-test I don't see the necessity for the comparison. The t-test and Z-test, I believe, operate under different conditions: both are parametric, but the Z-test assumes a known population variance, while the t-test estimates it from the sample. Ple
45,887
Expected value of a transformed random variable
In general, the expectation of $g(X)$ can often be approximated using a Taylor expansion around the mean; let $a=E(X)$: $$g(X) = g(a) + g'(a) (X-a) + \frac{1}{2!}g''(a) (X-a)^2 +\cdots$$ $$E[g(X)] = g(a) + \frac{1}{2!}g^{(2)}(a) \, m_2 + \frac{1}{3!} g^{(3)}(a) \, m_3 + \cdots $$ where $g^{(n)}(a)$ is the $n$-th deri...
Expected value of a transformed random variable
In general, the expectation of $g(X)$ can often be approximated using a Taylor expansion around the mean; let $a=E(X)$ $$g(X) = g(a) + g'(a) (X-a) + \frac{1}{2!}g''(a) (X-a)^2 +\cdots$$ $$E[g(X)] = g(
Expected value of a transformed random variable In general, the expectation of $g(X)$ can often be approximated using a Taylor expansion around the mean; let $a=E(X)$ $$g(X) = g(a) + g'(a) (X-a) + \frac{1}{2!}g''(a) (X-a)^2 +\cdots$$ $$E[g(X)] = g(a) + \frac{1}{2!}g^{(2)}(a) \, m_2 + \frac{1}{3!} g^{(3)}(a) \, m_3 +...
Expected value of a transformed random variable In general, the expectation of $g(X)$ can often be approximated using a Taylor expansion around the mean; let $a=E(X)$ $$g(X) = g(a) + g'(a) (X-a) + \frac{1}{2!}g''(a) (X-a)^2 +\cdots$$ $$E[g(X)] = g(
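A numerical sketch of the second-order version, $E[g(X)] \approx g(a) + \tfrac{1}{2} g''(a)\operatorname{Var}(X)$, checked against simulation for $g = \exp$ and a normal $X$ (so the exact lognormal mean is available for reference):

set.seed(11)
mu <- 1; sigma <- 0.3
x <- rnorm(1e6, mu, sigma)
mean(exp(x))                   # Monte Carlo value of E[exp(X)]
exp(mu) * (1 + sigma^2 / 2)    # second-order Taylor approximation
exp(mu + sigma^2 / 2)          # exact lognormal mean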
45,888
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms
I like to use the caret package for predictive modeling, as it provides a unified interface to a variety of algorithms, including nnet. It's very easy to get predicted probabilities for any model that supports them, neural networks included: set.seed(42) require(caret) model <- train(Species~., data=iris, method='nnet...
Methods & CRAN packages to predict probability using neural networks or others machine learning algo
I like to use the caret package for predictive modeling, as it provides a unified interface to a variety of algorithms, including nnet. It's very easy to get predicted probabilities for any model tha
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms I like to use the caret package for predictive modeling, as it provides a unified interface to a variety of algorithms, including nnet. It's very easy to get predicted probabilities for any model that supports th...
Methods & CRAN packages to predict probability using neural networks or others machine learning algo I like to use the caret package for predictive modeling, as it provides a unified interface to a variety of algorithms, including nnet. It's very easy to get predicted probabilities for any model tha
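Once trained, class probabilities come from predict with type = "prob" (a self-contained sketch restating the example above; trace = FALSE just silences nnet's fitting output):

library(caret)
set.seed(42)
model <- train(Species ~ ., data = iris, method = "nnet", trace = FALSE)
probs <- predict(model, newdata = iris, type = "prob")
head(probs)   # one column of predicted probabilities per class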
45,889
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms
I would second the recommendation of an "ensemble" method such as Random Forests. There is a variant of RF, "randomSurvivalForest", that is specific to survival analysis. Here is a link to the R manual for the package randomSurvivalForest.
Methods & CRAN packages to predict probability using neural networks or others machine learning algo
I would second the recommendation of an "ensemble" method such as Random Forests. There is a variant of RF, "randomSurvivalForest", that is specific to survival analysis. Here is a link to the R ma
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms I would second the recommendation of an "ensemble" method such as Random Forests. There is a variant of RF, "randomSurvivalForest", that is specific to survival analysis. Here is a link to the R manual for the ...
Methods & CRAN packages to predict probability using neural networks or others machine learning algo I would second the recommendation of an "ensemble" method such as Random Forests. There is a variant of RF, "randomSurvivalForest", that is specific to survival analysis. Here is a link to the R ma
45,890
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms
You can try, for example, the glm function with family=binomial and the logit link (or the probit), doing a logistic regression, whose outputs are probabilities. Here you have a link: logistic regression R. You can also try trees, or "ensembles" of trees: boosting, bagging or random forest, with packages like r...
Methods & CRAN packages to predict probability using neural networks or others machine learning algo
You can try, for example, the glm function with family=binomial and the logit link (or the probit), doing a logistic regression, whose outputs are probabilities. Here you have a link: logistic regress
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms You can try, for example, the glm function with family=binomial and the logit link (or the probit), doing a logistic regression, whose outputs are probabilities. Here you have a link: logistic regression R. You als...
Methods & CRAN packages to predict probability using neural networks or others machine learning algo You can try, for example, the glm function with family=binomial and the logit link (or the probit), doing a logistic regression, whose outputs are probabilities. Here you have a link: logistic regress
45,891
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms
You might also want to look at package RSNNS, which provides an R interface to the package SNNS, with ample functionality for general neural networks.
Methods & CRAN packages to predict probability using neural networks or others machine learning algo
You might also want to look at package RSNNS, which provides an R interface to the package SNNS, with ample functionality for general neural networks.
Methods & CRAN packages to predict probability using neural networks or others machine learning algorithms You might also want to look at package RSNNS, which provides an R interface to the package SNNS, with ample functionality for general neural networks.
Methods & CRAN packages to predict probability using neural networks or others machine learning algo You might also want to look at package RSNNS, which provides an R interface to the package SNNS, with ample functionality for general neural networks.
45,892
R fitting Poisson distribution with weighting
Note that fitdistr does nothing but maximum likelihood estimation. That is to say, you can do it yourself by writing down the likelihood. Below is an example for the Poisson distribution in R. It can be adapted to upweight/downweight the contribution of each data point to the likelihood. density (as in R) $$f(x; \lambda...
R fitting Poisson distribution with weighting
Note that fitdistr does nothing but maximum likelihood estimation. That is to say, you can do it yourself by writing down the likelihood. Below is an example for the Poisson distribution in R. It
R fitting Poisson distribution with weighting Note that fitdistr does nothing but maximum likelihood estimation. That is to say, you can do it yourself by writing down the likelihood. Below is an example for the Poisson distribution in R. It can be adapted to upweight/downweight the contribution of each data point to the likelih...
R fitting Poisson distribution with weighting Note that fitdistr does nothing but maximum likelihood estimation. That is to say, you can do it yourself by writing down the likelihood. Below is an example for the Poisson distribution in R. It
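A sketch of that weighted-likelihood adaptation (made-up data and weights): each observation's Poisson log-likelihood contribution is multiplied by its weight before maximizing, and the closed-form answer is simply the weighted mean.

set.seed(12)
x <- rpois(15, lambda = 3)
w <- runif(15)                                        # hypothetical observation weights
negloglik <- function(lambda) -sum(w * dpois(x, lambda, log = TRUE))
optimize(negloglik, interval = c(1e-6, 50))$minimum   # numerical weighted MLE
weighted.mean(x, w)                                   # same value, in closed form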
45,893
R fitting Poisson distribution with weighting
If you're looking to do this just for the poisson distribution (where you're just estimating the point estimate of the one parameter lambda), you could use the glm function: fit2 <- glm(randoms ~ 1, family = poisson(link = "log"), weights = weighting[1:15]) As a check, fit1 <- glm(randoms ~ 1, family = poisson(link ...
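Since randoms and weighting are not defined in the snippet, here is a self-contained version with simulated stand-ins:

# 'randoms' and 'weighting' are simulated here; substitute your own.
set.seed(1)
randoms   <- rpois(15, lambda = 3)
weighting <- runif(15)

fit2 <- glm(randoms ~ 1, family = poisson(link = "log"), weights = weighting)
fit1 <- glm(randoms ~ 1, family = poisson(link = "log"))  # unweighted check
# (non-integer weights trigger a harmless warning from the Poisson family)

exp(coef(fit2))  # weighted estimate of lambda
exp(coef(fit1))  # equals mean(randoms)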
45,894
R fitting Poisson distribution with weighting
If you look at the likelihood, the only bit of it that depends on both lambda and the x's is the product $\prod_i \lambda^{x_i} = \lambda^{\sum_i x_i}$. Thus the sum of the x's, or equivalently the sample mean, is a sufficient statistic for lambda. What that means is that you "fit a Poisson" by estimating the mean, using the sample mean.
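In code, that observation reduces the weighted fit to a one-liner (the data and weights below are made up for illustration):

# With weights, the ML estimate of lambda is the weighted sample mean.
set.seed(1)
x <- rpois(15, lambda = 3)
w <- runif(15)
weighted.mean(x, w)  # same as sum(w * x) / sum(w)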
45,895
Interpret regression coefficients after WLS
There is no change in the interpretation of the parameters, since the parameters being estimated are algebraically identical between the linear regression model with heteroskedasticity and the transformed model (OLS on which gives the WLS estimator). Let us take this at a leisurely pace. Linear regression model: The linear...
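A small simulation (my own, not from the answer) illustrating the point: OLS and WLS estimate the same coefficients, and weighting changes only efficiency, not interpretation.

# Heteroskedastic errors: the error variance grows with x.
set.seed(1)
n <- 200
x <- runif(n)
y <- 1 + 2 * x + rnorm(n, sd = 0.5 + 2 * x)

ols <- lm(y ~ x)
wls <- lm(y ~ x, weights = 1 / (0.5 + 2 * x)^2)  # inverse-variance weights

coef(ols)  # both estimate the same beta (true intercept 1, slope 2);
coef(wls)  # WLS is just the more efficient estimator here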
45,896
What is the interpretation of "generalized" partial correlations?
The partial correlation coefficient inhabits the domain of linear relationships/regression. You admitted this yourself when giving the definition of partial r in your question. Partial r is just another way of standardizing the linear regression coefficient, the other way being the standardized coefficient beta. So, partial...
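A minimal sketch of the residual-based definition of partial correlation (simulated data, my own illustration):

# The partial correlation of y and x1 given x2 is the plain correlation
# of the residuals from regressing each of them on x2.
set.seed(1)
n  <- 100
x2 <- rnorm(n)
x1 <- 0.5 * x2 + rnorm(n)
y  <- 1 + x1 + x2 + rnorm(n)

r_y  <- resid(lm(y  ~ x2))
r_x1 <- resid(lm(x1 ~ x2))
cor(r_y, r_x1)  # partial correlation of y and x1, controlling for x2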
45,897
Claiming validity of a study's negative finding
"Are these enough to have confidence in the negative finding of our study" - it depends on what you mean by "have confidence". Can you walk away and say "The association is negative, we're done here". No. You can be confident that, having looked at it several ways, you're not detecting an association in your data. But ...
45,898
Claiming validity of a study's negative finding
This is a really great question! Negative studies need to be published more often in the literature to reduce or eliminate publication bias. I like that you have done so many thoughtful analyses. EpiGrad provides some good caveats in his answer. However, I think there is a glass-half-full way to look at this that ...
45,899
Robust correlation in R? [closed]
MASS::cov.rob (link to man page) has two methods for robust covariances, which you can standardize to correlations with cov2cor. @whuber is right that the "best" method will depend on what you want to do with it, though.
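A minimal sketch of the suggestion (simulated data with a few planted outliers; the details are my own):

# Robust covariance via MASS::cov.rob, converted to a correlation
# matrix with cov2cor(); compare with the classical estimate.
library(MASS)

set.seed(1)
X <- cbind(rnorm(100), rnorm(100))
X[1:5, ] <- 10  # a few gross outliers

rob <- cov.rob(X, method = "mcd")  # "mve" is the other robust method
cov2cor(rob$cov)  # robust correlation matrix, barely affected
cor(X)            # classical estimate, pulled upward by the outliers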
45,900
Robust correlation in R? [closed]
I implemented these correlation measures in R; it is straightforward with the robustbase package: http://www.stat.tugraz.at/AJS/ausg111+2/111+2Shevlyakov.pdf The assessment of performance in the contaminated-sample case is provided at the end of the article (for n=20 and n=1000). You may concentrate on the $Q_n$ correlation; it works...
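The answerer's own code is not shown here, so the following is a hedged re-implementation of the $Q_n$-based correlation along the lines of the formula in the linked paper:

# Standardize x and y by their Qn scale, then compare the scales of
# their sum and difference (Gnanadesikan-Kettenring construction).
library(robustbase)

qn_cor <- function(x, y) {
  u <- x / Qn(x) + y / Qn(y)
  v <- x / Qn(x) - y / Qn(y)
  (Qn(u)^2 - Qn(v)^2) / (Qn(u)^2 + Qn(v)^2)
}

set.seed(1)
x <- rnorm(100)
y <- 0.8 * x + rnorm(100, sd = 0.6)
qn_cor(x, y)  # close to cor(x, y) for clean Gaussian data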