Dataset schema (column, type, min length, max length):

  idx               int64          1    56k
  question          stringlengths  15   155
  answer            stringlengths  2    29.2k  (may be null)
  question_cut      stringlengths  15   100
  answer_cut        stringlengths  2    200    (may be null)
  conversation      stringlengths  47   29.3k
  conversation_cut  stringlengths  47   301
7,901
Line graph has too many lines, is there a better solution?
Sure. First, sort by average number of actions. Then make (say) 4 graphs, each with 25 lines, one for each quartile. That means you can shrink the y-axes (but make the y-axis label clear). And with 25 lines, you can vary them by line type, color, and perhaps plotting symbol and get some clarity. Then stack the graphs ...
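The recipe above (sort by average, then split into quartile panels of 25 lines each) can be sketched with matplotlib. The data here are simulated, and the count of 100 series is an assumption for illustration; the original answer gives no code.

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend, so this runs without a display
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
# 100 hypothetical user series of "actions" over 12 time points
series = rng.poisson(lam=rng.uniform(1, 20, size=100)[:, None], size=(100, 12))

# Sort users by their average number of actions, then split into quartiles
order = np.argsort(series.mean(axis=1))
quartiles = np.array_split(order, 4)

# One stacked panel per quartile, 25 lines each, with a clear y-axis label
fig, axes = plt.subplots(4, 1, sharex=True, figsize=(6, 10))
for ax, idx in zip(axes, quartiles):
    for i in idx:
        ax.plot(series[i])
    ax.set_ylabel("actions")
axes[-1].set_xlabel("time")
fig.savefig("quartile_panels.png")
```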
7,902
Line graph has too many lines, is there a better solution?
I find that when you're running out of options regarding type of graph and graph settings, introducing time through animation is the best way to display the data, because it gives you an extra dimension to work with and allows you to display more information in an easy-to-follow way. Your primary focus must be on the end user e...
7,903
Line graph has too many lines, is there a better solution?
If you're most interested in the change for individual users, maybe this is a good situation for a collection of Sparklines (like this example from The Pudding): These are pretty detailed, but you could show a lot more charts at once by removing axis labels and units. Many data tools have them built in (Microsoft Exce...
7,904
Is p-value a point estimate?
Point estimates and confidence intervals are for parameters that describe the distribution, e.g. the mean or standard deviation. But unlike other sample statistics, like the sample mean and the sample standard deviation, the p-value is not a useful estimator of an interesting distribution parameter. Look at the answer by @...
7,905
Is p-value a point estimate?
Yes, it could be (and has been) argued that a p-value is a point estimate. In order to identify whatever property of a distribution a p-value might estimate, we would have to assume it is asymptotically unbiased. But, asymptotically, the mean p-value for the null hypothesis is $1/2$ (ideally; for some tests it might be...
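The claim that the mean p-value under the null is $1/2$ follows from the p-value being Uniform(0, 1) under the null (for a continuous test statistic), which is easy to check by simulation. A sketch with scipy, using simulated data (not from the answer):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# 5000 replications of a two-sided one-sample t-test under the null (mu = 0)
pvals = np.array([
    stats.ttest_1samp(rng.normal(0.0, 1.0, size=30), 0.0).pvalue
    for _ in range(5000)
])

# Under H0 the p-value is Uniform(0, 1): mean ~ 1/2, variance ~ 1/12
print(pvals.mean())
print(pvals.var())
```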
7,906
Is p-value a point estimate?
$p$-values are not used for estimating any parameter of interest, but for hypothesis testing. For example, you could be interested in estimating the population $\mu$ based on the sample you have, or you could be interested in an interval estimate of it, but in a hypothesis-testing scenario you would rather compare the sample me...
7,907
Likelihood ratio test in R
Basically, yes, provided you use the correct difference in log-likelihood:
> library(epicalc)
> model0 <- glm(case ~ induced + spontaneous, family=binomial, data=infert)
> model1 <- glm(case ~ induced, family=binomial, data=infert)
> lrtest(model0, model1)
Likelihood ratio test for MLE method
Chi-squared 1 d.f. = 36...
7,908
Likelihood ratio test in R
An alternative is the lmtest package, which has an lrtest() function which accepts a single model. Here is the example from ?lrtest in the lmtest package, which is for an LM, but there are methods that work with GLMs:
> require(lmtest)
Loading required package: lmtest
Loading required package: zoo
> ## with data from Gr...
7,909
Definition of Conditional Probability with multiple conditions
You can do a little trick. Let $(B \cap \theta) = C$. Now you can write $$P(A|B, \theta) = P(A|C).$$ The problem reduces to that of a conditional probability with only one condition: $$P(A|C) = \frac{P(A \cap C)}{P(C)}$$ Now fill in $(B \cap \theta)$ for $C$ again and you have it: $$\frac{P(A \cap C)}{P(C)} = \frac{P(A...
7,910
Definition of Conditional Probability with multiple conditions
I think you probably want this: $$\rm{P}(A|B,\theta) = \frac{\rm{P}(A\cap B|\theta)}{\rm{P}(B|\theta)}$$ I often find it confusing thinking about how to manipulate probabilities. With multiple conditions, I find it easiest to think about it this way: temporarily remove the condition(s) that you want to remain as condi...
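The identity $P(A|B,\theta) = P(A\cap B|\theta)/P(B|\theta)$ can be checked numerically on a tiny discrete sample space. The three-coin-flip space below is a hypothetical example, not from the answer:

```python
from itertools import product

# Sample space: three fair coin flips, each outcome with probability 1/8
omega = list(product([0, 1], repeat=3))
prob = {w: 1 / 8 for w in omega}

A = {w for w in omega if w[0] == 1}      # first flip is heads
B = {w for w in omega if sum(w) >= 2}    # at least two heads
T = {w for w in omega if w[2] == 1}      # third flip is heads (plays "theta")

def P(event, given=None):
    """Probability of event, optionally conditioned on another event."""
    if given is None:
        return sum(prob[w] for w in event)
    return P(event & given) / P(given)

lhs = P(A, given=B & T)                   # P(A | B, theta)
rhs = P(A & B, given=T) / P(B, given=T)   # P(A n B | theta) / P(B | theta)
print(lhs, rhs)  # both equal 2/3 here
```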
7,911
Asymptotic distribution of sample variance of non-normal sample
To side-step dependencies arising when we consider the sample variance, we write $$(n-1)s^2 = \sum_{i=1}^n\Big((X_i-\mu) -(\bar x-\mu)\Big)^2$$ $$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2-2\sum_{i=1}^n\Big((X_i-\mu)(\bar x-\mu)\Big)+\sum_{i=1}^n\Big(\bar x-\mu\Big)^2$$ and after a little manipulation, $$=\sum_{i=1}^n\Big(X_i-\...
7,912
Asymptotic distribution of sample variance of non-normal sample
You already have a detailed answer to your question but let me offer another one to go with it. Actually, a shorter proof is possible based on the fact that the distribution of $$S^2 = \frac{1}{n-1} \sum_{i=1}^n \left(X_i - \bar{X} \right)^2 $$ does not depend on $E(X) = \xi$, say. Asymptotically, it also does not ma...
7,913
Asymptotic distribution of sample variance of non-normal sample
The excellent answers by Alecos and JohnK already derive the result you are after, but I would like to note something else about the asymptotic distribution of the sample variance. It is common to see asymptotic results presented using the normal distribution, and this is useful for stating the theorems. However, prac...
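The asymptotic result discussed above, $\sqrt{n}\,(S^2 - \sigma^2) \to N(0, \mu_4 - \sigma^4)$, can be illustrated by simulation. For an Exp(1) sample, $\sigma^2 = 1$ and the central fourth moment is $\mu_4 = 9$, so the limiting variance is $8$. A sketch (simulated data, not from the answers):

```python
import numpy as np

rng = np.random.default_rng(1)
n, reps = 2000, 4000

# Exp(1): sigma^2 = 1, mu_4 = 9, so sqrt(n)(S^2 - sigma^2) ~ N(0, 8) for large n
x = rng.exponential(1.0, size=(reps, n))
s2 = x.var(axis=1, ddof=1)        # sample variance of each replication
z = np.sqrt(n) * (s2 - 1.0)

print(z.mean())  # near 0
print(z.var())   # near 8
```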
7,914
What's in a name: Precision (inverse of variance)
Precision is often used in Bayesian software by convention. It gained popularity because the gamma distribution can be used as a conjugate prior for precision. Some say that precision is more "intuitive" than variance because it says how concentrated the values are around the mean rather than how spread out they are. It is sai...
7,915
What's in a name: Precision (inverse of variance)
Precision is one of the two natural parameters of the normal distribution. That means that if you want to combine two independent predictive distributions (as in a Generalized Linear Model), you add the precisions. Variance does not have this property. On the other hand, when you're accumulating observations, you ave...
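The "precisions add" property for combining independent Gaussian beliefs about the same quantity can be written out directly; the combined mean is the precision-weighted average. The numbers below are hypothetical, chosen only to illustrate the algebra:

```python
def combine(mu1, var1, mu2, var2):
    """Combine two independent Gaussian estimates of the same quantity."""
    tau1, tau2 = 1.0 / var1, 1.0 / var2   # precision = 1 / variance
    tau = tau1 + tau2                     # precisions add
    mu = (tau1 * mu1 + tau2 * mu2) / tau  # precision-weighted mean
    return mu, 1.0 / tau

# Equal precisions: the combined mean is halfway, and variance is halved
mu, var = combine(10.0, 4.0, 14.0, 4.0)
print(mu, var)  # 12.0 2.0
```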
7,916
What's in a name: Precision (inverse of variance)
Here is my attempt at an explanation: A) An intuition for precision can be found in the context of measurement error. Suppose you are measuring some quantity of interest with some measurement instrument (e.g., measuring a distance with measuring tape). If you were to take several measurements of the quantity of interes...
7,917
What do confidence intervals say about precision (if anything)?
In the paper, we actually demonstrate the precision fallacy in multiple ways. The one you're asking about, the first in the paper, is meant to demonstrate that a simplistic "CI = precision" is wrong. This is not to say that any competent frequentist, Bayesian, or likelihoodist would be confused by th...
7,918
What do confidence intervals say about precision (if anything)?
First of all, let's limit ourselves to CI procedures that only produce intervals with strictly positive, finite widths (to avoid pathological cases). In this case, the relationship between precision and CI width can be theoretically demonstrated. Take an estimate for the mean (when it exists). If your CI for the mean is...
7,919
What do confidence intervals say about precision (if anything)?
I think the precision fallacy is a true fallacy, but not necessarily one we should care about. It isn't even that hard to show it's a fallacy. Take an extreme example like the following: we have a sample $\{x_1, x_2, \ldots , x_n \}$ from a normal$(\mu, \sigma^2)$ distribution and wish to construct a confidence inter...
7,920
What do confidence intervals say about precision (if anything)?
I think the demonstrable distinction between "confidence intervals" and "precision" (see answer from @dsaxton) is important because that distinction points out problems in common usage of both terms. Quoting from Wikipedia: The precision of a measurement system, related to reproducibility and repeatability, is the deg...
7,921
What do confidence intervals say about precision (if anything)?
@Bey has it. There is no necessary connection between scores and performance nor price and quality nor smell and taste. Yet the one usually informs about the other. One can prove by induction that one cannot give a pop quiz. On close examination this means one cannot guarantee the quiz is a surprise. Yet most of the t...
7,922
How do I calculate confidence intervals for a non-normal distribution?
Yes, the bootstrap is an alternative for obtaining confidence intervals for the mean (and you have to make a bit of effort if you want to understand the method). The idea is as follows:
- Resample with replacement B times.
- For each of these samples calculate the sample mean.
- Calculate an appropriate bootstrap confidence int...
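The resampling steps above can be sketched in Python with a simple percentile bootstrap. The skewed sample is simulated for illustration, and the percentile interval is one of several possible bootstrap intervals (the answer leaves the choice open):

```python
import numpy as np

rng = np.random.default_rng(7)
data = rng.exponential(2.0, size=200)  # hypothetical skewed sample

# Percentile bootstrap CI for the mean
B = 5000
boot_means = np.array([
    rng.choice(data, size=data.size, replace=True).mean() for _ in range(B)
])
lo, hi = np.percentile(boot_means, [2.5, 97.5])
print(lo, hi)  # 95% percentile-bootstrap interval for the mean
```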
7,923
How do I calculate confidence intervals for a non-normal distribution?
Another standard alternative is to calculate the CI with the Wilcoxon test. In R:
wilcox.test(your_data, conf.int = TRUE, conf.level = 0.95)
Unfortunately, it gives you the CI around the (pseudo)median, not the mean, but then if the data is heavily non-normal maybe the median is a more informative measure.
7,924
How do I calculate confidence intervals for a non-normal distribution?
You can just use a standard confidence interval for the mean: Bear in mind that when we calculate confidence intervals for the mean, we can appeal to the central limit theorem and use the standard interval (using the critical points of the T-distribution), even if the underlying data is non-normal. In fact, so long as...
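The CLT-based approach above is the usual t-interval applied to a non-normal sample; a sketch with scipy (the exponential sample is simulated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
data = rng.exponential(2.0, size=200)  # hypothetical non-normal sample

# Standard t-based interval for the mean, justified by the CLT for large n
n = data.size
m = data.mean()
se = data.std(ddof=1) / np.sqrt(n)
lo, hi = stats.t.interval(0.95, n - 1, loc=m, scale=se)
print(lo, hi)
```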
7,925
How do I calculate confidence intervals for a non-normal distribution?
For log-normal data, Olsson (2005) suggests a 'modified Cox method'. If $X$ is log-normally distributed and $\rm{E}(X) = \theta$, then a confidence interval for $\log(\theta)$ is: $$ \bar{Y} + \frac{S^2}{2} \pm t_{df}\sqrt{\frac{S^2}{n} + \frac{S^4}{2(n-1)} } $$ Where $Y = \log(X)$, the sample mean of $Y$ is $\bar{Y}$ ...
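A sketch of the modified Cox interval in Python, assuming the interval is $\bar{Y} + S^2/2 \pm t_{df}\sqrt{S^2/n + S^4/(2(n-1))}$ with $Y = \log X$ (the truncated answer does not show the full formula, so this reading is an assumption); the log-normal sample is simulated:

```python
import numpy as np
from scipy import stats

def modified_cox_ci(x, conf=0.95):
    """CI for log E(X) of log-normal data; formula as assumed in the lead-in."""
    y = np.log(x)
    n = y.size
    ybar, s2 = y.mean(), y.var(ddof=1)
    t = stats.t.ppf(1 - (1 - conf) / 2, df=n - 1)
    half = t * np.sqrt(s2 / n + s2**2 / (2 * (n - 1)))
    center = ybar + s2 / 2
    return center - half, center + half

rng = np.random.default_rng(5)
x = rng.lognormal(mean=1.0, sigma=0.5, size=500)
# true log E(X) = mu + sigma^2 / 2 = 1.125 for this simulated sample
print(modified_cox_ci(x))
```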
7,926
What distribution does my data follow?
The thing is that real data doesn't necessarily follow any particular distribution you can name ... and indeed it would be surprising if it did. So while I could name a dozen possibilities, the actual process generating these observations probably won't be anything that I could suggest either. As sample size increases,...
7,927
What distribution does my data follow?
The descdist function has an option to bootstrap your distribution to get a sense of the precision associated with the estimate plotted. You might try that:
descdist(time_to_repair, boot=1000)
My guess is that your data are consistent with more than just the beta distribution. In general, the beta distribution is...
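The answer above is about R's fitdistrplus. For readers working in Python, a rough analogue of fitting a beta distribution by maximum likelihood can be sketched with scipy. The data below are a simulated stand-in for the question's time_to_repair variable (rescaled to (0, 1)); all names and numbers here are illustrative, not from the original thread:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# simulated stand-in for time_to_repair, already rescaled into (0, 1)
data = rng.beta(2.0, 5.0, size=500)

# fix the support to [0, 1] so only the two shape parameters are estimated
a, b, loc, scale = stats.beta.fit(data, floc=0, fscale=1)
print(round(a, 1), round(b, 1))  # close to the generating parameters 2 and 5
```

A bootstrap analogue of descdist's boot option would repeat the fit on resampled data to gauge the precision of the estimates.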
7,928
What distribution does my data follow?
For what it is worth, using Mathematica's FindDistribution routine, the logarithms are very approximately a mixture of two normal distributions. That is, with $x=\ln(\text{data})$, $$f(x)=0.0585522 e^{-0.33781 (x-11.7025)^2}+0.229776 e^{-0.245814 (x-6.66864)^2}$$ Using 3 distributions to make a mixture distribution th...
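As a quick sanity check on the fitted density (a Python sketch rather than Mathematica; the coefficients are copied verbatim from the formula above), one can verify numerically that the two components' weights sum to one:

```python
import numpy as np

def f(x):
    # density reported by FindDistribution; the mixture weights are folded into the coefficients
    return (0.0585522 * np.exp(-0.33781 * (x - 11.7025) ** 2)
            + 0.229776 * np.exp(-0.245814 * (x - 6.66864) ** 2))

# simple Riemann sum over a range wide enough that the tails are negligible
x = np.linspace(-50.0, 50.0, 200001)
total = float(np.sum(f(x)) * (x[1] - x[0]))
print(round(total, 2))  # ~ 1.0, so f integrates to one and is a valid density
```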
7,929
How to get started with neural networks
Neural networks have been around for a while, and they've changed dramatically over the years. If you only poke around on the web, you might end up with the impression that "neural network" means multi-layer feedforward network trained with back-propagation. Or, you might come across any of the dozens of rarely used, b...
7,930
How to get started with neural networks
I highly recommend watching these lectures and using this as reading material. These lectures on machine learning in general are by Andrew Ng, who talks at length about neural networks and tries hard to make them accessible for beginners.
7,931
How to get started with neural networks
These are, in my opinion, very good books.
R. Rojas: Neural Networks
C. M. Bishop: Neural Networks for Pattern Recognition
The books have some similarities: they are both around 500 pages long, and they are fairly old, from 1995. Nevertheless, they remain very useful. Both books start from scratch, by explaining wha...
7,932
How to get started with neural networks
As other people have pointed out, there are a lot of (good) resources online and I have personally done some of them:
Ng's Intro to ML class on Coursera
Hinton's Neural Networks class on Coursera
Ng's deep learning tutorial
reading the relevant chapters in the original Parallel Distributed Processing
I want to draw ...
7,933
How to get started with neural networks
If you want a treatment from a more statistical viewpoint, have a look at Brian Ripley's "Pattern Recognition and Neural Networks". This book isn't introductory and presupposes some statistical background. http://www.stats.ox.ac.uk/~ripley/PRbook/
7,934
How to get started with neural networks
I have created a web application that supports your learning process in the field of neural networks. https://blueneurons.ch/nn You can play around with the settings (architecture, activation functions, training settings) and observe how the settings affect the predictions. All datasets have preconfigured values that c...
7,935
How to get started with neural networks
I'll throw my hat into the ring.
Read / listen to multiple explanations from different people.
Master the Perceptron before you attempt to learn Multilayer Perceptrons (i.e. neural networks).
As you learn concepts, try to implement them in code, from scratch.
Keep a few toy datasets and problems in your pocket for testin...
7,936
Encoding Angle Data for Neural Network
Introduction I find this question really interesting; I assume someone has put out a paper on it, but it's my day off, so I don't want to go chasing references. So we could consider it as a representation/encoding of the output, which I do in this answer. I still think that there is a better way, where you can j...
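The (cos θ, sin θ) output encoding discussed in this thread can be sketched in a few lines of numpy. This illustrates the representation itself, not any particular network from the answers:

```python
import numpy as np

def encode(theta):
    # represent an angle as a point on the unit circle: no wrap-around discontinuity
    return np.array([np.cos(theta), np.sin(theta)])

def decode(y):
    # recover the angle from a (possibly unnormalised) two-component output
    return np.arctan2(y[1], y[0])

# the round trip is exact (up to floating point) for angles in (-pi, pi]
for theta in [-3.0, -0.5, 0.0, 1.0, 3.1]:
    assert abs(decode(encode(theta)) - theta) < 1e-9
print("round-trip ok")
```

Because arctan2 uses both components, the decoder also tolerates outputs that are merely close to the unit circle, which is what a trained network actually produces.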
7,937
Encoding Angle Data for Neural Network
Here's another Python implementation comparing Lyndon White's proposed encoding to a binned approach. The code below produced the following output:
Training Size: 100
Training Epochs: 100
Encoding: cos_sin
Test Error: 0.017772154610047136
Encoding: binned
Test Error: 0.043398792553251526
Training Size: 100
Training Ep...
7,938
Encoding Angle Data for Neural Network
Here is my Python version of your experiment. I kept many of the details of your implementation the same, in particular I use the same image size, network layer sizes, learning rate, momentum, and success metrics. Each network tested has one hidden layer (size = 500) with logistic neurons. The output neurons are eith...
7,939
Encoding Angle Data for Neural Network
Another way to encode the angle is as a set of two values:
y1 = max(0, theta)
y2 = max(0, -theta)
theta_out = y1 - y2
This would have a similar problem to arctan2 in that the gradient is undefined at theta = 0. I don't have the time to train a network and compare to the other encodings but in this paper the technique...
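A minimal sketch of this two-value encoding and its decoder (plain Python, just to show the round trip is exact):

```python
# theta is split into its positive and negative parts; the decoder subtracts them
def encode(theta):
    return max(0.0, theta), max(0.0, -theta)

def decode(y1, y2):
    return y1 - y2

# exact for any real theta, including zero
for theta in [-2.5, -0.1, 0.0, 0.7, 3.0]:
    assert decode(*encode(theta)) == theta
print("ok")
```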
7,940
"Absolutely continuous random variable" vs. "Continuous random variable"?
The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continuous function. One definition (usually the first one people encounter in their education) is that for each $x$ and for a...
"Absolutely continuous random variable" vs. "Continuous random variable"?
The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continu
"Absolutely continuous random variable" vs. "Continuous random variable"? The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continuous function. One definition (usually the fir...
"Absolutely continuous random variable" vs. "Continuous random variable"? The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continu
7,941
PCA in numpy and sklearn produces different results [closed]
The difference is because decomposition.PCA does not standardize your variables before doing PCA, whereas in your manual computation you call StandardScaler to do the standardization. Hence, you are observing this difference: PCA on correlation or covariance? If you replace pca.fit_transform(x) with x_std = StandardSc...
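The point can be illustrated with a small numpy sketch (synthetic data, no sklearn): PCA on raw data diagonalises the covariance matrix, PCA on standardised data diagonalises the correlation matrix, and when the variables have very different scales the two give very different eigenvalue spectra:

```python
import numpy as np

rng = np.random.default_rng(0)
# three independent columns with wildly different scales
x = rng.normal(size=(200, 3)) * np.array([1.0, 10.0, 100.0])

cov = np.cov(x, rowvar=False)        # what PCA on the raw data diagonalises
corr = np.corrcoef(x, rowvar=False)  # what PCA on the standardised data diagonalises

ev_cov = np.linalg.eigvalsh(cov)[::-1]    # descending eigenvalues
ev_corr = np.linalg.eigvalsh(corr)[::-1]
print(ev_cov.round(1))   # dominated by the large-scale column
print(ev_corr.round(2))  # all near 1: no variable dominates after standardising
```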
7,942
PCA in numpy and sklearn produces different results [closed]
Here is a nice implementation with discussion and explanation of PCA in python. This implementation leads to the same result as the scikit PCA. This is another indicator that your PCA is wrong.
import numpy as np
from scipy import linalg as LA
x = np.array([
    [0.387, 4878, 5.42],
    [0.723, 12104, 5.25],
    ...
7,943
How to interpret the dendrogram of a hierarchical cluster analysis
1) The y-axis is a measure of closeness of either individual data points or clusters.
2) California and Arizona are equally distant from Florida because CA and AZ are in a cluster before either joins FL.
3) Hawaii does join rather late; at about 50. This means that the cluster it joins is closer together before HI joi...
7,944
How to interpret the dendrogram of a hierarchical cluster analysis
I had the same questions when I tried learning hierarchical clustering and I found the following pdf to be very, very useful. http://www.econ.upf.edu/~michael/stanford/maeb7.pdf Even if Richard is already clear about the procedure, others who browse through the question can probably use the pdf; it's very simple and cle...
7,945
How to interpret the dendrogram of a hierarchical cluster analysis
The horizontal axis represents the clusters. The vertical scale on the dendrogram represents the distance or dissimilarity. Each joining (fusion) of two clusters is represented on the diagram by the splitting of a vertical line into two vertical lines. The vertical position of the split, shown by a short bar, gives the d...
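A tiny scipy example (illustrative data, not the states dataset from the question) shows how fusion heights encode dissimilarity:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage

# four 1-D points: two tight pairs that are far apart
pts = np.array([[0.0], [1.0], [10.0], [11.0]])
Z = linkage(pts, method="single")

# each row of Z is one fusion: (cluster_a, cluster_b, merge_height, new_size)
print(Z[:, 2])  # heights 1.0, 1.0, 9.0: the tall, late fusion joins the two distant pairs
```

The two short bars at height 1 correspond to the tight pairs; the bar at height 9 is the late fusion of two dissimilar clusters, exactly the pattern described above.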
7,946
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
What I love most about the CLT are the cases where it is not applicable -- this gives me hope that life is a bit more interesting than the Gauss curve suggests. So show him the Cauchy distribution.
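A quick simulation (a Python sketch, assuming numpy) makes the point: the mean of Cauchy samples is itself Cauchy, so its spread does not shrink as the sample size grows:

```python
import numpy as np

rng = np.random.default_rng(42)
iqrs = {}
for m in (10, 1000):
    # 5000 replicated sample means, each averaging m standard-Cauchy draws
    means = rng.standard_cauchy((5000, m)).mean(axis=1)
    q1, q3 = np.percentile(means, [25, 75])
    iqrs[m] = q3 - q1
    print(m, round(iqrs[m], 2))  # stays near 2, the IQR of a single Cauchy: no averaging-out
```

Compare this with any finite-variance distribution, where the same experiment shows the IQR of the mean shrinking like $1/\sqrt{m}$.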
7,947
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
To fully appreciate the CLT, it should be seen. Hence the notion of the bean machine and plenty of youtube videos for illustration.
7,948
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
Often when mathematicians talk about probability, they start with a known probability distribution and then talk about the probability of events. The true value of the central limit theorem is that it allows us to use the normal distribution as an approximation in cases where we do not know the true distribution. You coul...
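This can be illustrated with a short simulation (a Python sketch with a fixed seed): means of samples from a skewed exponential distribution concentrate around the true mean with spread close to the CLT's $\sigma/\sqrt{n}$, even though the underlying distribution is far from normal:

```python
import numpy as np

rng = np.random.default_rng(7)
# 20000 sample means, each averaging 50 draws from Exp(1) (mean 1, sd 1)
means = rng.exponential(1.0, size=(20000, 50)).mean(axis=1)
print(round(means.mean(), 2), round(means.std(), 2))  # about 1.0 and 1/sqrt(50) ~ 0.14
```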
7,949
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
I like to demonstrate sampling variation and essentially the Central Limit Theorem through an "in-class" exercise. Everybody in the class of say 100 students writes their age on a piece of paper. All pieces of paper are the same size and folded in the same fashion after I've calculated the average. This is the populati...
7,950
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
Playing around with the following code, varying the value of M and choosing distributions other than the uniform, can be a fun illustration.
N <- 10000
M <- 5
meanvals <- replicate(N, expr = {mean(runif(M, min = 0, max = 1))})
hist(meanvals, breaks = 50, prob = TRUE)
7,951
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
If you use Stata, you can use the -clt- command that creates graphs of sampling distributions, see http://www.ats.ucla.edu/stat/stata/ado/teach/clt.htm
7,952
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
In my experience the CLT is less useful than it appears. One never knows in the middle of a project whether n is large enough for the approximation to be adequate to the task. And for statistical testing, the CLT helps you protect the type I error but does little to keep the type II error at bay. For example, the t-...
7,953
How is finding the centroid different from finding the mean?
As far as I know, the "mean" of a cluster and the centroid of a single cluster are the same thing, though the term "centroid" might be a little more precise than "mean" when dealing with multivariate data. To find the centroid, one computes the (arithmetic) mean of the points' positions separately for each dimension. ...
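A minimal numpy illustration of computing a centroid as the per-dimension arithmetic mean (toy data):

```python
import numpy as np

cluster = np.array([[1.0, 2.0],
                    [3.0, 4.0],
                    [5.0, 0.0]])
centroid = cluster.mean(axis=0)  # arithmetic mean, computed dimension by dimension
print(centroid)  # [3. 2.]
```

Note that the centroid (3, 2) is not one of the three cluster points, which is exactly why the centroid need not belong to the data set.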
7,954
How is finding the centroid different from finding the mean?
The above answer may be incorrect; see this video: https://www.youtube.com/watch?v=VMyXc3SiEqs It seems that the average method adds up all the combinations of distances between the elements of cluster 1 and cluster 2 - that is, n^2 distances added together - and then divides by n^2 to get the average. The centroid method first computes the ...
7,955
How is finding the centroid different from finding the mean?
In general, the mean (to be precise, the average) distance (between all pairs of points) is larger than the distance between the centroids of the clusters. So usually, they are different. Here is a mathematical proof: Let $x_1,\dots ,x_n\in \mathbb{R}^d$ and $\{C_1,C_2\}$ a partition of $\{1,\dots,n\}$. Let $d$ be a me...
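A quick numerical check of this claim (a Python sketch with two made-up clusters; not part of the proof itself):

```python
import numpy as np

# Two small made-up clusters in the plane.
c1 = np.array([[0.0, 0.0], [0.0, 2.0], [2.0, 0.0]])
c2 = np.array([[5.0, 5.0], [7.0, 5.0], [5.0, 7.0]])

# Average of all pairwise distances between the clusters (|C1| * |C2| pairs).
pairwise = np.linalg.norm(c1[:, None, :] - c2[None, :, :], axis=-1)
avg_dist = pairwise.mean()

# Distance between the two centroids.
centroid_dist = np.linalg.norm(c1.mean(axis=0) - c2.mean(axis=0))

print(avg_dist >= centroid_dist)  # True, as the inequality guarantees
```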
7,956
How is finding the centroid different from finding the mean?
The centroid is the average of the data points in a cluster; the centroid need not be present in the data set. A medoid, by contrast, is the data point closest to the centroid; the medoid has to be present in the original data.
7,957
What are 'aliased coefficients'?
I suspect this is not an error of lm, but rather vif (from package car). If so, I believe you have run into perfect multicollinearity. For instance x1 <- rnorm( 100 ) x2 <- 2 * x1 y <- rnorm( 100 ) vif( lm( y ~ x1 + x2 ) ) produces your error. In this context, ''alias'' refers to the variables that are linearly depend...
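The same failure mode can be reproduced outside R. Here is a Python/NumPy sketch (made-up data) of why the model matrix becomes singular when x2 = 2 * x1:

```python
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = 2 * x1                      # perfectly collinear with x1

# Model matrix with intercept: three columns, but only rank 2.
X = np.column_stack([np.ones(100), x1, x2])
print(np.linalg.matrix_rank(X))  # 2, not 3: x2 is "aliased" with x1

# X'X is singular, so the OLS normal equations have no unique solution,
# and VIFs (which need that inverse) are undefined.
```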
7,958
What are 'aliased coefficients'?
This often comes up when you have singularities in your regression X'X matrix (NA values in the summary of the regression output). Base R lm() allows for singular values/perfect multicollinearity as the default is singular.ok = TRUE. Other packages/functions are more conservative. For example, for the linearHypothesi...
7,959
What are 'aliased coefficients'?
Maybe good to know for some: I got this error as well when I added dummies to a regression. R automatically omits one dummy, but this causes an error in the vif test. So a solution, for some, might be removing one dummy manually.
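The underlying dummy-variable trap is easy to demonstrate; a Python/NumPy sketch with a made-up three-level factor:

```python
import numpy as np

# A made-up factor with three levels, one-hot encoded.
levels = np.array([0, 1, 2, 0, 1, 2, 0, 1])
full_dummies = np.eye(3)[levels]          # one column per level

# With an intercept, the three dummy columns sum to the intercept column,
# so the model matrix is rank deficient (the "aliased" situation).
X_full = np.column_stack([np.ones(len(levels)), full_dummies])
print(np.linalg.matrix_rank(X_full))      # 3, although there are 4 columns

# Dropping one dummy (what R does automatically) restores full rank.
X_drop = np.column_stack([np.ones(len(levels)), full_dummies[:, 1:]])
print(np.linalg.matrix_rank(X_drop))      # 3, with exactly 3 columns
```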
7,960
What are the assumptions of negative binomial regression?
"I'm working with a large data set (confidential, so I can't share too much)" - It might be possible to create a small data set that has some of the general characteristics of the real data without either the variable names or any of the actual values. "and came to the conclusion a negative binomial regression would be ...
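On the "small data set with the same general characteristics" point: overdispersed synthetic counts are easy to generate. A Python sketch (the parameters are made up):

```python
import numpy as np

rng = np.random.default_rng(42)

# Negative binomial counts; n (successes) and p are made-up parameters.
y = rng.negative_binomial(n=5, p=0.3, size=10_000)

# The hallmark that motivates NB over Poisson regression: variance > mean.
print(y.mean(), y.var())
```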
7,961
What are the assumptions of negative binomial regression?
Some references I have found to be helpful in analyzing data with the negative binomial distribution specifically (including listing assumptions) and GLM/GLMMs generally are: Bates, D.M., M. Mächler, B. Bolker, and S. Walker. 2015. Fitting linear mixed-effects models using lme4. J. Stat. Software 67: 1-48. Bolker, B.M....
7,962
Best factor extraction methods in factor analysis
To make it short. The two last methods are each very special and different from numbers 2-5. They are all called common factor analysis and are indeed seen as alternatives. Most of the time, they give rather similar results. They are "common" because they represent classical factor model, the common factors + unique fa...
7,963
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
The documentation of R on random number generation has a few sentences at its end, that confirm your expectation of 32-bit integers being used and might explain what you are observing: Do not rely on randomness of low-order bits from RNGs. Most of the supplied uniform generators return 32-bit integer values that are c...
7,964
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
Just to emphasise the arithmetic of the $2^{32}$ point in terms of the number of potential distinct values: if you sample $10^5$ times from $2^{32}$ values with replacement, you would expect an average of $2^{32}\left(1-\left(1-\frac{1}{2^{32}}\right)^{10^5}\right) \approx 10^5 - 1.1634$ distinct values, noting that $\...
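That arithmetic can be checked directly; a Python sketch (log1p/expm1 keep the computation numerically stable so close to 1):

```python
import math

N = 2 ** 32       # number of distinct representable values
n = 10 ** 5       # number of draws with replacement

# Expected number of distinct values: N * (1 - (1 - 1/N)^n),
# computed stably via log1p/expm1.
distinct = N * -math.expm1(n * math.log1p(-1.0 / N))
duplicates = n - distinct
print(duplicates)  # ~1.16, close to the figure quoted above
```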
7,965
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
Even though it is counterintuitive, there are good reasons that explain this phenomenon, essentially because a computer uses finite precision. A preprint has just been posted (March 2020) on arXiv (as already mentioned in the discussion, by the way) and treats this question thoroughly. It has been written by an exper...
7,966
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
There are two problems here. The first has been well-covered in the other answers, to wit: why do duplicates show up for certain configurations of the input arguments. The other is very important: There is a big difference between "random with replacement" and "random permutation of a known set." Mathematically, ...
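The distinction is easy to see in code; a Python sketch:

```python
import numpy as np

rng = np.random.default_rng(1)
pool = np.arange(10)

# Sampling WITH replacement: duplicates are possible (and expected
# once the sample size approaches the pool size).
with_rep = rng.choice(pool, size=10, replace=True)

# A random PERMUTATION of a known set: every value appears exactly once.
perm = rng.permutation(pool)

print(len(set(with_rep.tolist())), len(set(perm.tolist())))  # permutation always gives 10 distinct
```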
7,967
ROC vs Precision-recall curves on imbalanced dataset
First, the claim on the Kaggle post is bogus. The paper they reference, "The Relationship Between Precision-Recall and ROC Curves", never claims that PR AUC is better than ROC AUC. They simply compare their properties, without judging their value. ROC curves can sometimes be misleading in some very imbalanced applicati...
7,968
ROC vs Precision-recall curves on imbalanced dataset
Your example is definitely correct. However, I think in the context of a Kaggle competition / real-life application, a skewed dataset usually means a dataset with far fewer positive samples than negative samples. Only in this case is PR AUC more "meaningful" than ROC AUC. Consider a detector with TP=9, FN=1, TN=900, FP...
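Working through counts of that kind (a Python sketch of the arithmetic; the FP count here is an assumed value for illustration, since the original number is truncated above):

```python
# Confusion counts for a detector of this kind. TP, FN, TN come from the
# answer above; FP = 90 is an assumed value, since the original is truncated.
TP, FN, TN, FP = 9, 1, 900, 90

precision = TP / (TP + FP)   # 9 / 99  ~ 0.09: poor, and the PR curve shows it
recall = TP / (TP + FN)      # 9 / 10  = 0.9
fpr = FP / (FP + TN)         # 90 / 990 ~ 0.09: looks fine on a ROC curve

print(precision, recall, fpr)
```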
7,969
ROC vs Precision-recall curves on imbalanced dataset
You're halfway there. Usually when I do imbalanced models, heck, even balanced models, I look at PR for ALL my classes. In your example, yes, your positive class has P = 0.9 and R = 1.0. But what you should look at are ALL your classes. So for your negative class, your P = 0 and your R = 0. And you usually don't jus...
7,970
Convolutional Layers: To pad or not to pad?
There are a couple of reasons padding is important: It's easier to design networks if we preserve the height and width and don't have to worry too much about tensor dimensions when going from one layer to another, because dimensions will just "work". It allows us to design deeper networks. Without padding, reduction in v...
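The "dimensions just work" point follows from the standard output-size formula; a small Python sketch:

```python
def conv_out_size(n, k, p, s):
    """Spatial output size of a convolution: input n, kernel k, padding p, stride s."""
    return (n + 2 * p - k) // s + 1

# "Same" padding for a 3x3 kernel at stride 1: p = 1 keeps 32 -> 32.
print(conv_out_size(32, 3, 1, 1))  # 32

# Without padding the map shrinks by k - 1 per layer: 32 -> 30 -> 28 -> ...
print(conv_out_size(32, 3, 0, 1))  # 30
```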
7,971
Convolutional Layers: To pad or not to pad?
It seems to me the most important reason is to preserve the spatial size. As you said, we can trade-off the decrease in spatial size by removing pooling layers. However many recent network structures (like residual nets, inception nets, fractal nets) operate on the outputs of different layers, which requires a consiste...
7,972
Convolutional Layers: To pad or not to pad?
There are already some very good answers here. I want to add some more details about the image border effects (which were already mentioned) which depend on the padding type used. There are 3 relevant padding types in deep learning: valid (no padding at all) same (keep image size by adding zeros around the image - tha...
7,973
Convolutional Layers: To pad or not to pad?
Great question. Drag0 explained nicely but I agree, something is amiss. It's like looking at a photograph and having to deal with the border. In real life, you can move your eyes to look further; No real borders exist. So it is a limitation of the medium. Besides preserving size, does it matter? I am not aware of a sat...
7,974
Convolutional Layers: To pad or not to pad?
Elaborating on keeping information at the border: the pixel at the corner (shaded green), when convolved upon, would be used just once, whereas the one in the middle, like the one shaded red, would contribute to the resulting feature map multiple times. Thus, we pad the image; see figure 2.
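That uneven usage of border pixels can be counted directly; a Python sketch sliding a 3x3 window over a 5x5 image with no padding:

```python
import numpy as np

n, k = 5, 3                      # image size and kernel size, no padding
counts = np.zeros((n, n), dtype=int)

# Each valid kernel position covers a k x k patch; tally every pixel it touches.
for i in range(n - k + 1):
    for j in range(n - k + 1):
        counts[i:i + k, j:j + k] += 1

print(counts[0, 0], counts[n // 2, n // 2])  # 1 vs 9: corner vs centre usage
```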
7,975
Convolutional Layers: To pad or not to pad?
This is my thinking: zero padding is important in the early layers for keeping the size of the output feature vector, and as someone above said, zero padding gives better performance. But what about in the last layers? There the image feature map resolution is very small, and each pixel value represents a kind of vector of some global extent. I think i...
7,976
Convolutional Layers: To pad or not to pad?
I'll try to explain, from the standpoint of information, when it is okay to pad and when it is not. As a base case, take the example of TensorFlow's padding functionality. It provides two scenarios, either "valid" or "same". "Same" will preserve the size of the output and will keep it the same as that of the input by addi...
7,977
What does an Added Variable Plot (Partial Regression Plot) explain in a multiple regression?
For illustration I will take a less complex regression model $Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \epsilon$ where the predictor variables $X_2$ and $X_3$ may be correlated. Let's say the slopes $\beta_2$ and $\beta_3$ are both positive so we can say that (i) $Y$ increases as $X_2$ increases, if $X_3$ is held cons...
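The construction behind an added-variable plot can be sketched numerically (Python, simulated data): the slope of the residual-on-residual regression reproduces the coefficient from the full multiple regression (the Frisch-Waugh-Lovell result):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x3 = rng.normal(size=n)
x2 = 0.6 * x3 + rng.normal(size=n)          # correlated predictors
y = 1.0 + 2.0 * x2 + 3.0 * x3 + rng.normal(size=n)

def resid(v, w):
    """Residuals of v after regressing on w (with intercept)."""
    W = np.column_stack([np.ones(len(w)), w])
    beta, *_ = np.linalg.lstsq(W, v, rcond=None)
    return v - W @ beta

# Added-variable plot for x2: residuals of y on x3 vs residuals of x2 on x3.
ry, rx = resid(y, x3), resid(x2, x3)
slope_avp = (rx @ ry) / (rx @ rx)

# Coefficient of x2 in the full multiple regression.
X = np.column_stack([np.ones(n), x2, x3])
beta_full, *_ = np.linalg.lstsq(X, y, rcond=None)

print(slope_avp, beta_full[1])   # the two agree
```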
7,978
What does an Added Variable Plot (Partial Regression Plot) explain in a multiple regression?
"Is there anything that can really be said about the trends seen in the plots?" Sure: their slopes are the regression coefficients from the original model (partial regression coefficients, all other predictors held constant).
7,979
Student t as mixture of gaussian
The PDF of a Normal distribution is $$f_{\mu, \sigma}(x) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\mu )^2}{2 \sigma ^2}}dx$$ but in terms of $\tau = 1/\sigma^2$ it is $$g_{\mu, \tau}(x) = \frac{\sqrt{\tau}}{\sqrt{2 \pi}} e^{-\frac{\tau(x-\mu )^2}{2 }}dx.$$ The PDF of a Gamma distribution is $$h_{\alpha, \beta}(\ta...
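The resulting mixture representation can be verified by simulation; a Python sketch that draws the precision from the Gamma prior and checks the variance against the known Student-$t$ value $\nu/(\nu-2)$:

```python
import numpy as np

rng = np.random.default_rng(7)
nu, m = 10, 500_000

# Draw precision tau ~ Gamma(shape = nu/2, rate = nu/2), i.e. scale = 2/nu.
tau = rng.gamma(shape=nu / 2, scale=2 / nu, size=m)

# Conditional on tau, draw x ~ Normal(0, 1/sqrt(tau)).
x = rng.normal(0.0, 1.0 / np.sqrt(tau))

# Marginally x should be Student t with nu df: variance nu / (nu - 2) = 1.25.
print(x.var())   # close to 1.25
```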
7,980
Student t as mixture of gaussian
I don't know the steps of the calculation, but I do know the results from some book (cannot remember which one...). I usually keep it in mind directly... :-) The Student $t$ distribution with $k$ degrees of freedom can be regarded as a Normal distribution with variance mixture $Y$, where $Y$ follows an inverse gamma distribu...
7,981
Student t as mixture of gaussian
To simplify, we assume mean $0$. Using a representation argument, we show the result for integer degrees of freedom. $$ \sqrt{1/\tau} X = Y $$ is equivalent to a Gaussian mixture with that prior: conditioned on $\tau$, $Y$ is Gaussian with precision $\tau$, and the prior on $\tau$ is as desired. Then it remains to show that $\sqrt{1/\...
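The mixture claim in the two answers above can be verified directly. This is the standard calculation (not taken verbatim from either answer): place a $\mathrm{Gamma}(k/2, k/2)$ prior (shape–rate) on the precision $\tau$ of a zero-mean Gaussian and integrate it out:

```latex
\begin{aligned}
p(x) &= \int_0^\infty \sqrt{\frac{\tau}{2\pi}}\, e^{-\tau x^2/2}
        \cdot \frac{(k/2)^{k/2}}{\Gamma(k/2)}\, \tau^{k/2-1} e^{-k\tau/2}\, d\tau \\
     &= \frac{(k/2)^{k/2}}{\sqrt{2\pi}\,\Gamma(k/2)}
        \int_0^\infty \tau^{\frac{k+1}{2}-1} e^{-\tau (x^2+k)/2}\, d\tau \\
     &= \frac{(k/2)^{k/2}}{\sqrt{2\pi}\,\Gamma(k/2)}
        \,\Gamma\!\left(\tfrac{k+1}{2}\right)
        \left(\frac{x^2+k}{2}\right)^{-\frac{k+1}{2}} \\
     &= \frac{\Gamma\!\left(\tfrac{k+1}{2}\right)}{\sqrt{k\pi}\,\Gamma\!\left(\tfrac{k}{2}\right)}
        \left(1+\frac{x^2}{k}\right)^{-\frac{k+1}{2}},
\end{aligned}
```

which is exactly the Student $t$ density with $k$ degrees of freedom. Equivalently, the variance $1/\tau$ follows the inverse gamma distribution mentioned in the previous answer.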
7,982
Propensity score matching after multiple imputation
The first thing to say is that, for me, method 1 (sampling) seems to be without much merit: it discards the benefits of multiple imputation and reduces to single imputation for each observation, as mentioned by Stas. I can't see any advantage in using it. There is an excellent discussion of the issues surroundin...
7,983
Propensity score matching after multiple imputation
There might be a clash of two paradigms. Multiple imputation is a heavily model-based Bayesian solution: the concept of the proper imputation essentially states that you need to sample from the well-defined posterior distribution of the data, otherwise you are screwed. Propensity score matching, on the other hand, is a...
7,984
Propensity score matching after multiple imputation
I can't really speak to the theoretical aspects of the question, but I'll give my experience using PS/IPTW models and multiple imputation. I've never heard of someone using multiply imputed data sets and random sampling to build a single data set. That doesn't necessarily mean it's wrong but it's a strange approach to...
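For the "estimate within each imputed data set, then combine" strategy discussed in these answers, the combination step is Rubin's rules. A minimal sketch in plain Python; the treatment-effect estimates and variances below are made-up illustrative numbers, not from any real analysis:

```python
def pool_rubin(estimates, variances):
    """Combine per-imputation results via Rubin's rules.

    estimates: one effect estimate per imputed data set
    variances: the squared standard error from each imputed data set
    Returns (pooled estimate, total variance).
    """
    m = len(estimates)
    q_bar = sum(estimates) / m                               # pooled point estimate
    w = sum(variances) / m                                   # within-imputation variance
    b = sum((q - q_bar) ** 2 for q in estimates) / (m - 1)   # between-imputation variance
    t = w + (1 + 1 / m) * b                                  # total variance
    return q_bar, t

# hypothetical effect estimates from m = 3 imputed data sets
est, var = pool_rubin([1.0, 1.2, 0.8], [0.04, 0.05, 0.06])
```

Whether the propensity-score matching itself is redone within each imputed data set (the "within" approach) or run on averaged scores (the "across" approach) is exactly the question debated in these answers; this sketch covers only the pooling arithmetic.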
7,985
How to measure smoothness of a time series in R?
The standard deviation of the differences will give you a rough smoothness estimate: x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0) y <- c(-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0) sd(diff(x)) sd(diff(y)) Update: As Cyan points out, that gives you a scale-dependent measure. A similar scal...
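The same check, translated from the R snippet above into Python (numpy assumed): the perfectly smooth ramp scores near zero, the zig-zag scores high.

```python
import numpy as np

x = np.array([-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0])  # smooth ramp
y = np.array([-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0])  # zig-zag

# np.diff() matches R's diff(); ddof=1 matches R's sd()
sd_x = np.std(np.diff(x), ddof=1)   # ~0: consecutive steps are all identical
sd_y = np.std(np.diff(y), ddof=1)   # ~1.19: step sizes vary wildly
```

As the answer's update notes, this score is scale-dependent: multiplying the series by 10 multiplies the score by 10. One common fix is to normalise by the series' own spread before differencing.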
7,986
How to measure smoothness of a time series in R?
The lag-one autocorrelation will serve as a score and has a reasonably straightforward statistical interpretation too. cor(x[-length(x)],x[-1]) Score interpretation: scores near 1 imply a smoothly varying series scores near 0 imply that there's no overall linear relationship between a data point and the following on...
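The R one-liner cor(x[-length(x)], x[-1]) translates to Python (numpy assumed) as:

```python
import numpy as np

def lag1_autocorr(series):
    """Correlation between the series and itself shifted by one step."""
    a = np.asarray(series, dtype=float)
    return np.corrcoef(a[:-1], a[1:])[0, 1]

x = [-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0]  # smooth
y = [-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0]  # jagged

r_x = lag1_autocorr(x)   # 1.0: each value is a perfect linear function of the last
r_y = lag1_autocorr(y)   # about -0.94: consecutive values point in opposite directions
```

The two example series land at the extremes of the score's interpretation given in the answer: near $1$ for smooth, near $-1$ for values that "tend to alternate" around the mean.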
7,987
How to measure smoothness of a time series in R?
To estimate the roughness of an array, take the squared difference of the normalized differences, and divide by 4. This gives you scale-independence (because of the normalization), and ignores trends (because of using the second difference). firstD = diff(x) normFirstD = (firstD - mean(firstD)) / sd(firstD) roughness =...
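The final line of this answer's snippet is cut off in this dump. The sketch below (numpy assumed) follows the prose description; the last line of the function is my guessed completion, not the original code:

```python
import numpy as np

def roughness(series):
    """Scale-free roughness per the description above: normalise the first
    differences, then average the squared differences of those, divided by 4.
    (Dividing by 4 caps a maximally alternating series near roughness 1.)
    The return line is a guessed completion of the truncated snippet.
    """
    first_d = np.diff(np.asarray(series, dtype=float))
    norm_first_d = (first_d - first_d.mean()) / first_d.std(ddof=1)  # ddof=1 = R's sd()
    return np.mean(np.diff(norm_first_d) ** 2) / 4

y = [-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0]
r = roughness(y)   # 0.75 for this zig-zag
```

Caveat: a series with (near-)constant steps has first differences with (near-)zero standard deviation, so the normalisation blows up; a guard for that case would be needed in practice.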
7,988
How to measure smoothness of a time series in R?
You could just check the correlation against the timestep number. That would be equivalent to taking the R² of a simple linear regression on the timeseries. Note, though, that those are two very different timeseries, so I don't know how well that works as a comparison.
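Spelled out in Python (numpy assumed): for equally spaced observations this is just the squared correlation with the time index — which, as the answer cautions, measures linear trend rather than local smoothness.

```python
import numpy as np

def r2_vs_time(series):
    """R^2 of a simple linear regression of the series on its time index."""
    a = np.asarray(series, dtype=float)
    t = np.arange(len(a))
    return np.corrcoef(a, t)[0, 1] ** 2

x = [-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0]
y = [-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0]

r2_x = r2_vs_time(x)   # 1.0: a perfect linear trend
r2_y = r2_vs_time(y)   # about 0.074: almost no linear trend, despite being jagged
```

Note a smooth but trendless series (e.g. a full sine cycle) would also score near zero, which is why this works better as a trend measure than a smoothness measure.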
7,989
What are the properties of a half Cauchy distribution?
A half-Cauchy is one of the symmetric halves of the Cauchy distribution (if unspecified, it is the right half that's intended): Since the area of the right half of a Cauchy is $\frac12$ the density must then be doubled. Hence the 2 in your pdf (though it's missing a $\frac{1}{\pi}$ as whuber noted in comments). The ha...
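Putting the answer's corrections together (the doubled density and the $\frac{1}{\pi}$ noted by whuber), the location-$0$ half-Cauchy with scale $\gamma$ has these standard properties:

```latex
f(x) = \frac{2}{\pi\gamma\left[1+\left(x/\gamma\right)^{2}\right]}, \quad x \ge 0,
\qquad
F(x) = \frac{2}{\pi}\arctan\!\left(\frac{x}{\gamma}\right),
\qquad
F^{-1}(p) = \gamma\tan\!\left(\frac{\pi p}{2}\right).
```

In particular the median is $\gamma$ (set $p=\tfrac12$), and, like the full Cauchy, it has no finite mean or variance.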
7,990
In boosting, why are the learners "weak"?
So, boosting is a learning algorithm which can generate high-accuracy predictions using as a subroutine another algorithm, which in turn can efficiently generate hypotheses just slightly better (by an inverse polynomial) than random guessing. Its main advantage is speed. When Schapire presented it in 1990 it was a b...
7,991
In boosting, why are the learners "weak"?
I will address overfitting, which hasn't been mentioned yet, with a more intuitive explanation. Your first question was: What are the benefits of using weak as opposed to strong learners? (e.g. why not boost with "strong" learning methods - are we more prone to overfitting?) The main reasons, in my understanding, are...
7,992
In boosting, why are the learners "weak"?
In boosting we use weak learners mostly because they train faster than strong learners. Think about it: if I use a multi-layer neural network as the learner, then I need to train lots of them. On the other hand, a decision tree can be a lot faster, so I can train lots of them. Let's say I use 100 learners. I...
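To make the "many fast weak learners" point concrete, here is a toy AdaBoost with decision stumps in plain Python — a standard textbook construction, not code from any of these answers. On the interval-shaped labels below, no single stump beats 70% training accuracy, but three boosted stumps reach 100%.

```python
import math

def stump_predict(theta, polarity, x):
    """A decision stump: +1 on one side of the threshold, -1 on the other."""
    return polarity * (1 if x <= theta else -1)

def best_stump(X, y, w):
    """Exhaustively pick the stump with the lowest weighted training error."""
    xs = sorted(X)
    thresholds = [xs[0] - 0.5] + [(a + b) / 2 for a, b in zip(xs, xs[1:])] + [xs[-1] + 0.5]
    best = None
    for theta in thresholds:
        for polarity in (1, -1):
            err = sum(wi for xi, yi, wi in zip(X, y, w)
                      if stump_predict(theta, polarity, xi) != yi)
            if best is None or err < best[0]:
                best = (err, theta, polarity)
    return best

def adaboost(X, y, rounds=10):
    """AdaBoost.M1 over decision stumps; labels must be +1/-1."""
    n = len(X)
    w = [1.0 / n] * n                 # start with uniform example weights
    ensemble = []                     # list of (alpha, theta, polarity)
    for _ in range(rounds):
        err, theta, polarity = best_stump(X, y, w)
        if err >= 0.5:                # no weak learner better than chance
            break
        err = max(err, 1e-12)         # guard the log against a perfect stump
        alpha = 0.5 * math.log((1 - err) / err)
        ensemble.append((alpha, theta, polarity))
        # upweight mistakes, downweight correct points, then renormalise
        w = [wi * math.exp(-alpha * yi * stump_predict(theta, polarity, xi))
             for xi, yi, wi in zip(X, y, w)]
        z = sum(w)
        w = [wi / z for wi in w]
    return ensemble

def predict(ensemble, x):
    """Sign of the alpha-weighted vote of all stumps."""
    score = sum(a * stump_predict(t, p, x) for a, t, p in ensemble)
    return 1 if score >= 0 else -1

X = list(range(10))
y = [1, 1, 1, -1, -1, -1, -1, 1, 1, 1]   # +1 on the flanks, -1 in the middle
ens = adaboost(X, y, rounds=3)
acc = sum(predict(ens, xi) == yi for xi, yi in zip(X, y)) / len(X)   # 1.0
```

Each round is just a scan over thresholds, which is why trading one slow strong learner for many cheap stumps is attractive — exactly the speed argument made in the answer above.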
7,993
Why is variable selection necessary?
Variable selection (without penalization) only makes things worse. Variable selection has almost no chance of finding the "right" variables, and results in large overstatements of effects of remaining variables and huge understatement of standard errors. It is a mistake to believe that variable selection done in the ...
7,994
Why is variable selection necessary?
First of all, the disadvantages you mentioned are the effects of feature selection done wrong, i.e. overfitted, left unfinished, or overshot. The "ideal" FS has two steps; the first is the removal of all variables unrelated to the DV (the so-called all-relevant problem, a very hard task, unrelated to the model/classifier used), ...
7,995
Why is variable selection necessary?
Variable selection is necessary because most models don't deal well with a large number of irrelevant variables. These variables will only introduce noise into your model, or worse, cause you to over-fit. It's a good idea to exclude these variables from analysis. Furthermore, you can't include all the variables that...
7,996
Modelling longitudinal data where the effect of time varies in functional form between individuals
I would suggest looking at the following three directions: longitudinal clustering: this is unsupervised, but uses a k-means approach relying on the Calinski criterion for assessing the quality of the partitioning (package kml, and references included in the online help); so basically, it won't help identifying specific ...
7,997
Modelling longitudinal data where the effect of time varies in functional form between individuals
I'd recommend taking a look at a couple of papers by Heping Zhang using adaptive splines for modeling longitudinal data: Multivariate adaptive splines for analysis of longitudinal data (Free PDF) Mixed effects multivariate adaptive splines model for the analysis of longitudinal and growth curve data In addition, see ...
7,998
Modelling longitudinal data where the effect of time varies in functional form between individuals
It looks to me like Growth Mixture Models might have potential to allow you to examine your error variance. (PDF here). (I'm not sure what multiplicative heteroscedastic models are, but I will definitely have to check them out). Latent group based trajectory models have become really popular lately in criminology. But ...
7,999
Modelling longitudinal data where the effect of time varies in functional form between individuals
Four years after asking this question, I've learnt a few things, so perhaps I should add a few ideas. I think Bayesian hierarchical modelling provides a flexible approach to this problem. Software: Tools like jags, stan, WinBugs, and so on potentially combined with their respective R interface packages (e.g., rjags, r...
8,000
Modelling longitudinal data where the effect of time varies in functional form between individuals
John Fox has a great appendix available on-line using nlme to look at longitudinal data. It may be useful for you: http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf There's a lot of great stuff there (and Fox's books are generally quite good!).