Line graph has too many lines, is there a better solution?
Sure. First, sort by average number of actions. Then make (say) 4 graphs, each with 25 lines, one for each quartile. That means you can shrink the y-axes (but make the y-axis label clear). And with 25 lines, you can vary them by line type, color, and perhaps plotting symbol and get some clarity. Then stack the graphs vertically with a single time axis. This would be pretty easy in R or SAS (at least if you have v. 9 of SAS).
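The sort-then-split step above can be sketched in a few lines; this is a hypothetical illustration in plain Python (the `user…` names and fake action counts are made up, and the actual plotting in R or SAS is left out):

```python
import random

# Hypothetical data: 100 users, each with a time series of action counts.
random.seed(1)
series = {f"user{i}": [random.randint(0, i + 1) for _ in range(50)]
          for i in range(100)}

def mean(xs):
    return sum(xs) / len(xs)

# Sort users by their average number of actions...
order = sorted(series, key=lambda u: mean(series[u]))

# ...then split into 4 quartile groups of 25 lines each,
# one group per vertically stacked panel.
quartiles = [order[i:i + 25] for i in range(0, 100, 25)]
for q, users in enumerate(quartiles, 1):
    lo, hi = mean(series[users[0]]), mean(series[users[-1]])
    print(f"panel {q}: 25 lines, mean actions {lo:.1f} to {hi:.1f}")
```

Because each panel only spans one quartile of the averages, its y-axis can be much shorter than a single shared axis would be.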
Line graph has too many lines, is there a better solution?
I find that when you're running out of options for graph type and graph settings, introducing time through animation is the best way to display the data, because it gives you an extra dimension to work with and lets you show more information in an easy-to-follow way. Your primary focus must be on the end-user experience.
Line graph has too many lines, is there a better solution?
If you're most interested in the change for individual users, maybe this is a good situation for a collection of sparklines (like this example from The Pudding). These are pretty detailed, but you could show a lot more charts at once by removing axis labels and units. Many data tools have them built in (Microsoft Excel has sparklines), but I'm guessing you'd want to pull in a package to build them in R.
Is p-value a point estimate?
Point estimates and confidence intervals are for parameters that describe the distribution, e.g. the mean or standard deviation. But unlike other sample statistics, such as the sample mean and the sample standard deviation, the p-value is not a useful estimator of an interesting distribution parameter. See the answer by @whuber for technical details. The p-value for a test statistic gives the probability of observing a deviation from the expected value of the test statistic at least as large as the one observed in the sample, calculated under the assumption that the null hypothesis is true. If you have the entire distribution, it is either consistent with the null hypothesis or it is not. This can be described by an indicator variable (again, see the answer by @whuber). But the p-value cannot be used as a useful estimator of the indicator variable, because it is not consistent: the p-value does not converge as the sample size increases if the null hypothesis is true. This is a rather complicated alternative way of stating that a statistical test can either reject or fail to reject the null, but never confirm it.
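The non-convergence under the null can be checked numerically. Below is a quick sketch in plain Python (my own hypothetical setup: a two-sided z-test for the mean of standard normal data, with the variance treated as known); under H0 the p-value stays Uniform(0, 1) no matter how large the sample gets, so its spread does not shrink:

```python
import math
import random

def p_value(sample):
    # Two-sided z-test of H0: mu = 0 for a known-variance-1 normal sample.
    z = sum(sample) / math.sqrt(len(sample))
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

random.seed(0)
spread = {}
for n in (50, 5000):
    ps = [p_value([random.gauss(0, 1) for _ in range(n)]) for _ in range(400)]
    m = sum(ps) / len(ps)
    # Standard deviation of the simulated p-values at sample size n.
    spread[n] = math.sqrt(sum((p - m) ** 2 for p in ps) / len(ps))
print(spread)
```

A consistent estimator would concentrate as n grows; here the standard deviation stays near the Uniform(0, 1) value of about 0.29 at both sample sizes.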
Is p-value a point estimate?
Yes, it could be (and has been) argued that a p-value is a point estimate. In order to identify whatever property of a distribution a p-value might estimate, we would have to assume it is asymptotically unbiased. But, asymptotically, the mean p-value for the null hypothesis is $1/2$ (ideally; for some tests it might be some other nonzero number) and for any other hypothesis it is $0$. Thus, the p-value could be considered an estimator of one-half the indicator function for the null hypothesis. Admittedly it takes some creativity to view a p-value in this way. We could do a little better by viewing the estimator in question as the decision we make by means of the p-value: is the underlying distribution a member of the null hypothesis or of the alternate hypothesis? Let's call this set of possible decisions $D$. Jack Kiefer writes:

"We suppose that there is an experiment whose outcome the statistician can observe. This outcome is described by a random variable or random vector $X$ ... . The probability law of $X$ is unknown to the statistician, but it is known that the distribution function $F$ of $X$ is a member of a specified class $\Omega$ of distribution functions. ... A statistical problem is said to be a problem of point estimation if $D$ is the collection of possible values of some real or vector-valued property of $F$ which depends on $F$ in a reasonably smooth way."

In this case, because $D$ is discrete, "reasonably smooth" is not a restriction at all. Kiefer's terminology reflects this by referring to statistical procedures with discrete decision spaces as "tests" instead of "point estimators." Although it is interesting to explore the limits (and limitations) of such definitions, as this question invites us to do, perhaps we should not insist too strongly that a p-value is a point estimator, because this distinction between estimators and tests is both useful and conventional.
In a comment to this question, Christian Robert brought attention to a 1992 paper in which he and his co-authors took exactly this point of view and analyzed the admissibility of the p-value as an estimator of the indicator function. See the link in the references below. The paper begins:

"Approaches to hypothesis testing have usually treated the problem of testing as one of decision-making rather than estimation. More precisely, a formal hypothesis test will result in a conclusion as to whether a hypothesis is true, and not provide a measure of evidence to associate with that conclusion. In this paper we consider hypothesis testing as an estimation problem within a decision-theoretic framework ... ." [Emphasis added.]

References

Jiunn Tzon Hwang, George Casella, Christian Robert, Martin T. Wells, and Roger H. Farrell, "Estimation of Accuracy in Testing." Ann. Statist. 20 (1) (1992), 490-509. Open access.

Jack Carl Kiefer, Introduction to Statistical Inference. Springer-Verlag, 1987.
Is p-value a point estimate?
$p$-values are not used for estimating any parameter of interest, but for hypothesis testing. For example, you could be interested in estimating the population mean $\mu$ based on the sample you have, or in an interval estimate of it, but in a hypothesis-testing scenario you would rather compare the sample mean $\overline x$ with the population mean $\mu$ to see if they differ. In hypothesis testing you are not really interested in the particular values, but rather in whether they fall below some threshold (e.g. $p < 0.05$). With $p$-values you are not that interested in their point values as such; rather, you want to know whether your data provide enough evidence against the null hypothesis. In a hypothesis-testing scenario you would not compare different $p$-values with each other, but rather use each of them to make a separate decision about your hypotheses. You don't really want to know anything about the null hypothesis beyond whether you can reject it or not. This makes their values inseparable from the decision context, and so they differ from point estimates, because with point estimates we are interested in their values per se.
Likelihood ratio test in R
Basically, yes, provided you use the correct difference in log-likelihood:

> library(epicalc)
> model0 <- glm(case ~ induced + spontaneous, family=binomial, data=infert)
> model1 <- glm(case ~ induced, family=binomial, data=infert)
> lrtest(model0, model1)
Likelihood ratio test for MLE method
Chi-squared 1 d.f. = 36.48675 , P value = 0
> model1$deviance - model0$deviance
[1] 36.48675

and not the deviance for the null model, which is the same in both cases. The number of df is the number of parameters that differ between the two nested models, here df=1. BTW, you can look at the source code for lrtest() by just typing

> lrtest

at the R prompt.
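Note that the "P value = 0" reported above is just display rounding; the actual tail probability is tiny but nonzero. A minimal sketch of the underlying computation in plain Python (not the epicalc implementation; the log-likelihood values here are hypothetical, back-computed from the reported statistic, and the 1-d.f. chi-squared tail uses the identity that a chi-squared(1) variate is a squared standard normal):

```python
import math

def lr_test_df1(loglik_full, loglik_reduced):
    """Likelihood-ratio test for nested models differing by one parameter.
    The statistic 2*(llf - llr) is asymptotically chi-squared with 1 d.f.;
    for 1 d.f. the survival function equals P(|Z| > sqrt(stat))."""
    stat = 2 * (loglik_full - loglik_reduced)
    p = 2 * (1 - 0.5 * (1 + math.erf(math.sqrt(stat) / math.sqrt(2))))
    return stat, p

# Recover the statistic 36.48675 from the R session above and get
# its (very small, but nonzero) p-value.
stat, p = lr_test_df1(0.0, -36.48675 / 2)
print(stat, p)
```

This also makes the mechanics concrete: only the difference in log-likelihoods matters, which is why subtracting the model deviances gives the same statistic.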
Likelihood ratio test in R
An alternative is the lmtest package, whose lrtest() function also accepts a single model. Here is the example from ?lrtest in the lmtest package; it uses an LM, but there are methods that work with GLMs:

> require(lmtest)
Loading required package: lmtest
Loading required package: zoo
> ## with data from Greene (1993):
> ## load data and compute lags
> data("USDistLag")
> usdl <- na.contiguous(cbind(USDistLag, lag(USDistLag, k = -1)))
> colnames(usdl) <- c("con", "gnp", "con1", "gnp1")
> fm1 <- lm(con ~ gnp + gnp1, data = usdl)
> fm2 <- lm(con ~ gnp + con1 + gnp1, data = usdl)
> ## various equivalent specifications of the LR test
>
> ## Compare two nested models
> lrtest(fm2, fm1)
Likelihood ratio test

Model 1: con ~ gnp + con1 + gnp1
Model 2: con ~ gnp + gnp1
  #Df  LogLik Df  Chisq Pr(>Chisq)
1   5 -56.069
2   4 -65.871 -1 19.605  9.524e-06 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

> ## with just one model provided, compare this model to a null one
> lrtest(fm2)
Likelihood ratio test

Model 1: con ~ gnp + con1 + gnp1
Model 2: con ~ 1
  #Df   LogLik Df  Chisq Pr(>Chisq)
1   5  -56.069
2   2 -119.091 -3 126.04  < 2.2e-16 ***
---
Signif. codes: 0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
Definition of Conditional Probability with multiple conditions
You can do a little trick. Let $(B \cap \theta) = C$. Now you can write $$P(A|B, \theta) = P(A|C).$$ The problem reduces to that of a conditional probability with only one condition: $$P(A|C) = \frac{P(A \cap C)}{P(C)}$$ Now fill in $(B \cap \theta)$ for $C$ again and you have it: $$\frac{P(A \cap C)}{P(C)} = \frac{P(A \cap (B \cap \theta))}{P(B \cap \theta)}$$ And this is the result that you wanted to get to. Let's write this in exactly the form you had when you originally stated the question: $$P(A|B , \theta) = \frac{ P(A \cap B \cap \theta) }{ P(B \cap \theta) }$$ As to your second question, why it is that probability freaks you out: it is one of the findings from psychological research that humans are not very good at probabilistic reasoning ;-). It was a bit hard for me to find a reference that I can point you to. But the work of Daniel Kahneman is certainly very important in this regard.
Definition of Conditional Probability with multiple conditions
I think you probably want this: $$\rm{P}(A|B,\theta) = \frac{\rm{P}(A\cap B|\theta)}{\rm{P}(B|\theta)}$$ I often find it confusing thinking about how to manipulate probabilities. With multiple conditions, I find it easiest to think about it this way:

1. Temporarily remove the condition(s) that you want to remain as conditions in your result. In this case, write $\rm{P}(A|B)$, taking out $\theta$.
2. Apply the normal rules. In this case, $\rm{P}(A|B) = \rm{P}(A\cap B)/\rm{P}(B)$.
3. Restore the condition(s) that were removed. In this case, restore $\theta$ to get the result $\rm{P}(A|B,\theta) = \rm{P}(A\cap B|\theta)/\rm{P}(B|\theta)$.
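The identity can be verified exactly by brute-force enumeration on a small probability space. A sketch in plain Python (the two-dice setup and the three events are arbitrary choices of mine, purely for illustration):

```python
from fractions import Fraction
from itertools import product

# Sample space: two fair dice, all 36 outcomes equally likely.
omega = list(product(range(1, 7), repeat=2))
A = {w for w in omega if w[0] + w[1] == 7}   # sum is 7
B = {w for w in omega if w[0] % 2 == 0}      # first die is even
T = {w for w in omega if w[1] >= 3}          # plays the role of theta

def P(E):
    # Exact probability of an event under the uniform measure.
    return Fraction(len(E), len(omega))

# Definition with both conditions lumped together: P(A | B, theta)
lhs = P(A & B & T) / P(B & T)
# Form with theta kept to the right of the bar: P(A n B | theta) / P(B | theta)
rhs = (P(A & B & T) / P(T)) / (P(B & T) / P(T))
print(lhs, rhs)  # both equal 1/6
```

Using Fraction keeps everything exact, so the two sides match as rational numbers rather than merely to floating-point tolerance.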
Asymptotic distribution of sample variance of non-normal sample
To side-step dependencies arising when we consider the sample variance, we write $$(n-1)s^2 = \sum_{i=1}^n\Big((X_i-\mu) -(\bar x-\mu)\Big)^2$$ $$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2-2\sum_{i=1}^n\Big((X_i-\mu)(\bar x-\mu)\Big)+\sum_{i=1}^n\Big(\bar x-\mu\Big)^2$$ and after a little manipulation, $$=\sum_{i=1}^n\Big(X_i-\mu\Big)^2 - n\Big(\bar x-\mu\Big)^2$$ Therefore $$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \sigma^2- \frac {\sqrt n}{n-1}n\Big(\bar x-\mu\Big)^2 $$ Manipulating, $$\sqrt n(s^2 - \sigma^2) = \frac {\sqrt n}{n-1}\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2 $$ $$=\frac {n\sqrt n}{n-1}\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sqrt n \frac {n-1}{n-1}\sigma^2- \frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$ $$=\frac {n}{n-1}\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right] + \frac {\sqrt n}{n-1}\sigma^2 -\frac {n}{n-1}\sqrt n\Big(\bar x-\mu\Big)^2$$ The term $n/(n-1)$ becomes unity asymptotically. The term $\frac {\sqrt n}{n-1}\sigma^2$ is deterministic and goes to zero as $n \rightarrow \infty$. We also have $\sqrt n\Big(\bar x-\mu\Big)^2 = \left[\sqrt n\Big(\bar x-\mu\Big)\right]\cdot \Big(\bar x-\mu\Big)$. The first component converges in distribution to a Normal, the second converges in probability to zero. Then by Slutsky's theorem the product converges in probability to zero, $$\sqrt n\Big(\bar x-\mu\Big)^2\xrightarrow{p} 0$$ We are left with the term $$\left[\sqrt n\left(\frac 1n\sum_{i=1}^n\Big(X_i-\mu\Big)^2 -\sigma^2\right)\right]$$ Alerted by a lethal example offered by @whuber in a comment to this answer, we want to make certain that $(X_i-\mu)^2$ is not constant. @whuber pointed out that if $X_i$ is a Bernoulli $(1/2)$ then this quantity is a constant.
So excluding variables for which this happens (perhaps other dichotomous variables, not just $0/1$ binary ones?), for the rest we have $$\mathrm{E}\Big(X_i-\mu\Big)^2 = \sigma^2,\;\; \operatorname {Var}\left[\Big(X_i-\mu\Big)^2\right] = \mu_4 - \sigma^4$$ and so the term under investigation is the usual subject matter of the classical Central Limit Theorem, and $$\sqrt n(s^2 - \sigma^2) \xrightarrow{d} N\left(0,\mu_4 - \sigma^4\right)$$ Note: the above result of course also holds for normally distributed samples, but in that case we also have available a finite-sample chi-square distributional result.
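The limiting variance $\mu_4 - \sigma^4$ can be checked by simulation. A sketch in plain Python (my own hypothetical setup: $X \sim \mathrm{Exp}(1)$, for which $\sigma^2 = 1$ and the central fourth moment is $\mu_4 = 9$, so the CLT above predicts an asymptotic variance of $9 - 1 = 8$):

```python
import math
import random

random.seed(7)
n, reps = 1500, 1500
stats = []
for _ in range(reps):
    x = [random.expovariate(1.0) for _ in range(n)]
    xbar = sum(x) / n
    s2 = sum((xi - xbar) ** 2 for xi in x) / (n - 1)
    # The centered and scaled statistic sqrt(n) * (s^2 - sigma^2),
    # with sigma^2 = 1 for Exp(1).
    stats.append(math.sqrt(n) * (s2 - 1.0))

m = sum(stats) / reps
v = sum((t - m) ** 2 for t in stats) / reps
print(m, v)  # mean near 0, variance near mu_4 - sigma^4 = 8
```

The simulated variance lands near 8 rather than the value 2 that a normal-theory formula ($2\sigma^4$) would give, which is exactly the point of the non-normal result.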
Asymptotic distribution of sample variance of non-normal sample
You already have a detailed answer to your question, but let me offer another one to go with it. Actually, a shorter proof is possible, based on the fact that the distribution of $$S^2 = \frac{1}{n-1} \sum_{i=1}^n \left(X_i - \bar{X} \right)^2 $$ does not depend on $E(X) = \xi$, say. Asymptotically, it also does not matter whether we change the factor $\frac{1}{n-1}$ to $\frac{1}{n}$, which I will do for convenience. We then have $$\sqrt{n} \left(S^2 - \sigma^2 \right) = \sqrt{n} \left[ \frac{1}{n} \sum_{i=1}^n X_i^2 - \bar{X}^2 - \sigma^2 \right]$$ And now we assume without loss of generality that $\xi = 0$, and we notice that $$ \sqrt{n} \bar{X}^2 = \frac{1}{\sqrt{n}} \left( \sqrt{n} \bar{X} \right)^2$$ has probability limit zero, since the squared factor is bounded in probability (by the CLT and the continuous mapping theorem), i.e. it is $O_p(1)$. The asymptotic result now follows from Slutsky's theorem and the CLT, since $$\sqrt{n} \left[ \frac{1}{n} \sum X_i^2 - \sigma^2 \right] \xrightarrow{D} \mathcal{N} \left(0, \tau^2 \right)$$ where $\tau^2 = \operatorname{Var} \left\{ X^2\right\} = \mathbb{E} \left(X^4 \right) - \left( \mathbb{E} \left(X^2\right) \right)^2$. And that will do it.
7,913
Asymptotic distribution of sample variance of non-normal sample
The excellent answers by Alecos and JohnK already derive the result you are after, but I would like to note something else about the asymptotic distribution of the sample variance.

It is common to see asymptotic results presented using the normal distribution, and this is useful for stating the theorems. However, practically speaking, the purpose of an asymptotic distribution for a sample statistic is that it allows you to obtain an approximate distribution when $n$ is large. There are lots of choices you could make for your large-sample approximation, since many distributions have the same asymptotic form.

In the case of the sample variance, it is my view that an excellent approximating distribution for large $n$ is given by: $$\frac{S_n^2}{\sigma^2} \sim \frac{\text{Chi-Sq}(\text{df} = DF_n)}{DF_n},$$ where $DF_n \equiv 2 / \mathbb{V}(S_n^2 / \sigma^2) = 2n / ( \kappa - (n-3)/(n-1))$ and $\kappa = \mu_4 / \sigma^4$ is the kurtosis parameter. This distribution is asymptotically equivalent to the normal approximation derived from the theorem (the chi-squared distribution converges to normal as the degrees-of-freedom tends to infinity). Despite this equivalence, this approximation has various other properties you would like your approximating distribution to have:

- Unlike the normal approximation derived directly from the theorem, this distribution has the correct support for the statistic of interest. The sample variance is non-negative, and this distribution has non-negative support.
- In the case where the underlying values are normally distributed, this approximation is actually the exact sampling distribution. (In this case we have $\kappa = 3$ which gives $DF_n = n-1$, which is the standard form used in most texts.)

It therefore constitutes a result that is exact in an important special case, while still being a reasonable approximation in more general cases.
Derivation of the above result: Approximate distributional results for the sample mean and variance are discussed at length in O'Neill (2014), and this paper provides derivations of many results, including the present approximating distribution. This derivation starts from the limiting result in the question: $$\sqrt{n} (S_n^2 - \sigma^2) \sim \text{N}(0, \sigma^4 (\kappa - 1)).$$ Re-arranging this result we obtain the approximation: $$\frac{S_n^2}{\sigma^2} \sim \text{N} \Big( 1, \frac{\kappa - 1}{n} \Big).$$ Since the chi-squared distribution is asymptotically normal, as $DF \rightarrow \infty$ we have: $$\frac{\text{Chi-Sq}(DF)}{DF} \rightarrow \frac{1}{DF} \text{N} ( DF, 2DF ) = \text{N} \Big( 1, \frac{2}{DF} \Big).$$ Taking $DF_n \equiv 2 / \mathbb{V}(S_n^2 / \sigma^2)$ (which yields the above formula) gives $DF_n \rightarrow 2n / (\kappa - 1)$ which ensures that the chi-squared distribution is asymptotically equivalent to the normal approximation from the limiting theorem.
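A small sketch of the $DF_n$ formula quoted above (the function name is mine): normal kurtosis $\kappa = 3$ recovers the familiar $n - 1$ degrees of freedom, while heavier tails shrink the effective degrees of freedom, i.e. the sample variance is less precise than under normality.

```python
# Effective degrees of freedom for the scaled chi-squared approximation
# to S_n^2 / sigma^2, following the formula DF_n = 2n / (kappa - (n-3)/(n-1)).
def df_n(n, kappa):
    return 2.0 * n / (kappa - (n - 3.0) / (n - 1.0))

# Normal data (kappa = 3): exactly n - 1 up to floating point.
print(df_n(10, 3.0))
# Heavy tails (e.g. kappa = 9, the Exp(1) kurtosis): fewer effective df.
print(df_n(10, 9.0))
```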
7,914
What's in a name: Precision (inverse of variance)
Precision is often used in Bayesian software by convention. It gained popularity because the gamma distribution can be used as a conjugate prior for precision. Some say that precision is more "intuitive" than variance because it says how concentrated the values are around the mean rather than how spread out they are. It is said that we are more interested in how precise a measurement is rather than how imprecise it is (though honestly I do not see how that is more intuitive). The more spread out the values are around the mean (high variance), the less precise they are (small precision). The smaller the variance, the greater the precision. Precision is simply the inverse of the variance, $\tau = 1/\sigma^2$. There is really nothing more to it than that.
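The conjugacy mentioned above can be sketched in the simplest setting, a normal likelihood with known mean and a gamma prior (shape/rate form) on the precision; the helper name and prior values are mine, but the update rule is the standard conjugate one.

```python
import numpy as np

# Gamma(a, b) prior on the precision tau of a N(mu, 1/tau) likelihood
# with known mean mu. Conjugate posterior:
#   Gamma(a + n/2, b + sum((x - mu)^2) / 2)
def update_precision(x, mu, a=1.0, b=1.0):
    x = np.asarray(x)
    a_post = a + len(x) / 2.0
    b_post = b + np.sum((x - mu) ** 2) / 2.0
    return a_post, b_post

rng = np.random.default_rng(2)
x = rng.normal(loc=0.0, scale=2.0, size=5000)   # true tau = 1/4
a_post, b_post = update_precision(x, mu=0.0)
print(a_post / b_post)   # posterior mean of tau, near 0.25
```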
7,915
What's in a name: Precision (inverse of variance)
Precision is one of the two natural parameters of the normal distribution. That means that if you want to combine two independent predictive distributions (as in a Generalized Linear Model), you add the precisions. Variance does not have this property. On the other hand, when you're accumulating observations, you average expectation parameters. The second moment is an expectation parameter. When taking the convolution of two independent normal distributions, the variances add. Relatedly, if you have a Wiener process (a stochastic process whose increments are Gaussian) you can argue using infinite divisibility that waiting half the time means jumping with half the variance. Finally, when scaling a Gaussian distribution, the standard deviation is scaled. So, many parameterizations are useful depending on what you're doing. If you're combining predictions in a GLM, precision is the most "intuitive" one.
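A minimal sketch of the first point, combining two independent Gaussian predictions by adding precisions (the helper is hypothetical, not from any particular library):

```python
# Combine two independent Gaussian predictions in natural parameters:
# precisions add, and the combined mean is the precision-weighted average.
def combine(mu1, tau1, mu2, tau2):
    tau = tau1 + tau2
    mu = (tau1 * mu1 + tau2 * mu2) / tau
    return mu, tau

mu, tau = combine(0.0, 1.0, 10.0, 4.0)
print(mu, tau)   # 8.0 5.0
```

Note that the combined variance is $1/5 = 0.2$, smaller than either input's variance; the variances themselves do not combine by addition here.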
7,916
What's in a name: Precision (inverse of variance)
Here is my attempt at an explanation:

A) An intuition for precision can be found in the context of measurement error. Suppose you are measuring some quantity of interest with some measurement instrument (e.g., measuring a distance with measuring tape). If you were to take several measurements of the quantity of interest with the same measurement instrument, you will likely end up with variation in the results, i.e. measurement error. These errors are often well approximated by a normal distribution. The precision parameter of a normal distribution tells you how "precise" your measurements are in the sense of having larger or smaller errors. The larger the precision, the more precise your measurement, and thus the smaller your errors (and vice-versa).

B) The reason that precision matrices are sometimes preferred over covariance matrices is due to analytical and computational convenience: they are simpler to work with. This is why normal distributions were classically parameterized via the precision parameter in the Bayesian context before the computer revolution, when calculations were done by hand. The parameterization remains relevant today when working with very small variances as it helps to address underflow in numerical computations. The simplicity of the alternative can also be illustrated by comparing the densities of both parameterizations. Notice below how the use of $\tau = \frac{1}{\sigma^2}$ eliminates the need to divide by a parameter. In a Bayesian context (when parameters are treated as random variables) division by a parameter can make calculating posterior distributions painful. $$p_Y(y; \mu, \sigma) = \frac{1}{\sigma\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{y-\mu}{\sigma})^2}$$ $$p_Y(y; \mu, \tau) = \sqrt{\frac{\tau}{2\pi}}e^{-\frac{1}{2}\tau(y - \mu)^2}$$
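A quick numerical check (my own) that the two parameterizations above describe the same density once $\tau = 1/\sigma^2$:

```python
import math

# Density in the (mu, sigma) parameterization.
def pdf_sigma(y, mu, sigma):
    return (1.0 / (sigma * math.sqrt(2 * math.pi))) * math.exp(-0.5 * ((y - mu) / sigma) ** 2)

# Density in the (mu, tau) parameterization, tau = 1 / sigma^2.
def pdf_tau(y, mu, tau):
    return math.sqrt(tau / (2 * math.pi)) * math.exp(-0.5 * tau * (y - mu) ** 2)

print(pdf_sigma(1.3, 0.0, 2.0))
print(pdf_tau(1.3, 0.0, 1.0 / 2.0 ** 2))   # same value up to floating point
```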
7,917
What do confidence intervals say about precision (if anything)?
In the paper, we actually demonstrate the precision fallacy in multiple ways. The one you're asking about (the first in the paper) is meant to demonstrate that a simplistic "CI = precision" is wrong. This is not to say that any competent frequentist, Bayesian, or likelihoodist would be confused by this.

Here's another way to see what's going on: if we were just told the CIs, we would still not be able to combine the information in the samples; we would need to know $N$, and from that we could decompose the CIs into the $\bar{x}$ and $s^2$, and thus combine the two samples properly. The reason we have to do this is that the information in the CI is marginal over the nuisance parameter. We must take into account that both samples contain information about the same nuisance parameter. This involves computing both $s^2$ values, combining them to get an overall estimate of $\sigma^2$, then computing a new CI.

As for other demonstrations of the precision fallacy, see:

- the multiple CIs in the Welch (1939) section (the submarine), one of which includes the "trivial" CI mentioned by @dsaxton above. In this example, the optimal CI does not track the width of the likelihood, and there are several other examples of CIs that do not either;
- the fact that CIs, even "good" CIs, can be empty, "falsely" indicating infinite precision.

The answer to the conundrum is that "precision", at least in the way CI advocates think about it (a post-experimental assessment of how "close" an estimate is to a parameter), is simply not a characteristic that confidence intervals have in general, and they were not meant to have it. Particular confidence procedures might ... or not. See also the discussion here: http://andrewgelman.com/2011/08/25/why_it_doesnt_m/#comment-61591
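The "decompose, then recombine" recipe described above can be sketched as follows (a sketch of my own, assuming equal-tailed $t$-intervals; the function names are mine, and the pooled sum of squares includes the between-sample term so that it equals the sum of squares of the pooled sample):

```python
import numpy as np
from scipy import stats

# Recover (xbar, s^2) from a t-interval for the mean, given n.
def decompose(ci_lo, ci_hi, n, conf=0.95):
    xbar = 0.5 * (ci_lo + ci_hi)
    half = 0.5 * (ci_hi - ci_lo)
    t = stats.t.ppf(0.5 + conf / 2, df=n - 1)
    s2 = (half * np.sqrt(n) / t) ** 2
    return xbar, s2

# Pool the two samples' sufficient statistics and compute the implied CI.
def pooled_ci(ci1, n1, ci2, n2, conf=0.95):
    xb1, s2_1 = decompose(*ci1, n1, conf)
    xb2, s2_2 = decompose(*ci2, n2, conf)
    n = n1 + n2
    xbar = (n1 * xb1 + n2 * xb2) / n
    # total sum of squares of the pooled sample: within + between parts
    ss = (n1 - 1) * s2_1 + (n2 - 1) * s2_2 + n1 * n2 / n * (xb1 - xb2) ** 2
    s2 = ss / (n - 1)
    half = stats.t.ppf(0.5 + conf / 2, df=n - 1) * np.sqrt(s2 / n)
    return xbar - half, xbar + half

print(pooled_ci((9.0, 11.0), 10, (9.5, 10.7), 10))
```

The pooled interval is centred on the combined mean and narrower than either input interval, because it uses all $N$ observations' worth of information about $\sigma^2$.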
7,918
What do confidence intervals say about precision (if anything)?
First of all, let's limit ourselves to CI procedures that only produce intervals with strictly positive, finite widths (to avoid pathological cases). In this case, the relationship between precision and CI width can be theoretically demonstrated. Take an estimate for the mean (when it exists). If your CI for the mean is very narrow, then you have two interpretations: either you had some bad luck and your sample was too tightly clumped (a priori 5% chance of that happening), or your interval covers the true mean (95% a priori chance). Of course, the observed CI can be either of these two, but we set up our calculation so that the latter is far more likely to have occurred (i.e., 95% chance a priori)...hence, we have a high degree of confidence that our interval covers the mean, because we set things up probabilistically so this is so. Thus, a 95% CI is not a probability interval (like a Bayesian Credible Interval), but more like a "trusted adviser"...someone who, statistically, is right 95% of the time, so we trust their answers even though any particular answer could very well be wrong. In the 95% of cases where it does cover the actual parameter, the width tells you something about the range of plausible values given the data (i.e., how well you can bound the true value), hence it acts like a measure of precision. In the 5% of cases where it doesn't, the CI is misleading (since the sample is misleading). So, does 95% CI width indicate precision...I'd say there's a 95% chance it does (provided your CI width is positive-finite) ;-)

What is a sensible CI?

In response to the original author's post, I've revised my response to (a) take into account that the "split sample" example had a very specific purpose, and (b) provide some more background as requested by the commenter: In an ideal (frequentist) world, all sampling distributions would admit a pivotal statistic that we could use to get exact confidence intervals.
What is so great about pivotal statistics? Their distribution can be derived without knowing the actual value of the parameter being estimated! In these nice cases, we have an exact distribution of our sample statistic relative to the true parameter (although it may not be Gaussian) about this parameter. Put more succinctly: we know the error distribution (or some transformation thereof). It is this quality of some estimators that allows us to form sensible confidence intervals. These intervals don't just satisfy their definitions...they do so by virtue of being derived from the actual distribution of estimation error. The Gaussian distribution and the associated Z statistic is the canonical example of the use of a pivotal quantity to develop an exact CI for the mean. There are more esoteric examples, but this is generally the one that motivates "large sample theory", which is basically an attempt to apply the theory behind Gaussian CIs to distributions that do not admit a true pivotal quantity. In these cases, you'll read about approximately pivotal, or asymptotically pivotal (in the sample size) quantities or "approximate" confidence intervals...these are based on likelihood theory -- specifically, the fact that the error distribution for many MLEs approaches a normal distribution. Another approach for generating sensible CIs is to "invert" a hypothesis test. The idea is that a "good" test (e.g., UMP) will result in a good (read: narrow) CI for a given Type I error rate. These don't tend to give exact coverage, but do provide lower-bound coverage (note: the actual definition of an X%-CI only says it must cover the true parameter at least X% of the time). The use of hypothesis tests does not directly require a pivotal quantity or error distribution -- its sensibility is derived from the sensibility of the underlying test.
For example, if we had a test whose rejection region had length zero 5% of the time and infinite length 95% of the time, we'd be back to where we were with the CIs -- but it's obvious that this test is not conditional on the data, and hence will not provide any information on the underlying parameter being tested. This broader idea (that an estimate of precision should be conditional on the data) goes back to Fisher and the idea of ancillary statistics. You can be sure that if the result of your test or CI procedure is NOT conditioned by the data (i.e., its conditional behavior is the same as its unconditional behavior), then you've got a questionable method on your hands.
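The exactness a pivotal quantity buys can be checked by simulation (a sketch of my own): the $t$-interval's coverage is 95% for any true $(\mu, \sigma)$, because $T = (\bar{x} - \mu)/(s/\sqrt{n}) \sim t_{n-1}$ regardless of the parameters.

```python
import numpy as np
from scipy import stats

# Coverage of the classical t-interval for the mean of normal data.
rng = np.random.default_rng(3)
n, reps, mu, sigma = 15, 20000, 3.0, 2.0
t = stats.t.ppf(0.975, df=n - 1)

x = rng.normal(mu, sigma, size=(reps, n))
xbar = x.mean(axis=1)
s = x.std(axis=1, ddof=1)
half = t * s / np.sqrt(n)
coverage = np.mean((xbar - half <= mu) & (mu <= xbar + half))
print(coverage)   # near 0.95 whatever (mu, sigma) were
```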
7,919
What do confidence intervals say about precision (if anything)?
I think the precision fallacy is a true fallacy, but not necessarily one we should care about. It isn't even that hard to show it's a fallacy. Take an extreme example like the following: we have a sample $\{x_1, x_2, \ldots , x_n \}$ from a normal$(\mu, \sigma^2)$ distribution and wish to construct a confidence interval on $\mu$, but instead of using the actual data we take our confidence interval to be either $(- \infty, \infty)$ or $\{ 0 \}$ based on the flip of a biased coin. By using the right bias we can get any level of confidence we like, but obviously our interval "estimate" has no precision at all even if we end up with an interval that has zero width. The reason why I don't think we should care about this apparent fallacy is that while it is true that there's no necessary connection between the width of a confidence interval and precision, there is an almost universal connection between standard errors and precision, and in most cases the width of a confidence interval is proportional to a standard error. I also don't believe the author's example is a very good one. Whenever we do data analysis we can only estimate precision, so of course the two individuals will reach different conclusions. But if we have some privileged knowledge, such as knowing that both samples are from the same distribution, then we obviously shouldn't ignore it. Clearly we should pool the data and use the resulting estimate of $\sigma$ as our best guess. It seems to me this example is like the one above where we only equate confidence interval width with precision if we've allowed ourselves to stop thinking.
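The coin-flip procedure described above is easy to simulate (a sketch of my own):

```python
import numpy as np

# Ignore the data entirely: report (-inf, inf) with probability 0.95 and
# the single point {0} otherwise. For any true mu != 0 the interval covers
# exactly when the coin comes up "wide", so coverage is 95% by construction,
# while the interval's width carries no information about precision.
rng = np.random.default_rng(4)
reps, mu = 100_000, 1.7            # any nonzero true mean
wide = rng.random(reps) < 0.95
covers = wide | (mu == 0.0)
print(covers.mean())               # near 0.95
```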
7,920
What do confidence intervals say about precision (if anything)?
I think the demonstrable distinction between "confidence intervals" and "precision" (see answer from @dsaxton) is important because that distinction points out problems in common usage of both terms. Quoting from Wikipedia: The precision of a measurement system, related to reproducibility and repeatability, is the degree to which repeated measurements under unchanged conditions show the same results. One thus might argue that frequentist confidence intervals do represent a type of precision of a measurement scheme. If one repeats the same scheme, the 95% CI calculated for each repetition will contain the one true value of the parameter in 95% of the repetitions. This, however, is not what many people want from a practical measure of precision. They want to know how close the measured value is to the true value. Frequentist confidence intervals do not strictly provide that measure of precision. Bayesian credible regions do. Some of the confusion is that, in practical examples, frequentist confidence intervals and Bayesian credible regions "will more-or-less overlap". Sampling from a normal distribution, as in some comments on the OP, is such an example. That may also be the case in practice for some of the broader types of analyses that @Bey had in mind, based on approximations to standard errors in processes that have normal distributions in the limit. If you know that you are in such a situation, then there may be no practical danger in interpreting a particular 95% CI, from a single implementation of a measurement scheme, as having a 95% probability of containing the true value. That interpretation of confidence intervals, however, is not from frequentist statistics, for which the true value either is or is not within that particular interval. 
If confidence intervals and credible regions differ markedly, that Bayesian-like interpretation of frequentist confidence intervals can be misleading or wrong, as the paper linked above and earlier literature referenced therein demonstrate. Yes, "common sense" might help avoid such misinterpretations, but in my experience "common sense" isn't so common. Other CrossValidated pages contain much more information on confidence intervals and the differences between confidence intervals and credible regions. Links from those particular pages are also highly informative.
7,921
What do confidence intervals say about precision (if anything)?
@Bey has it. There is no necessary connection between scores and performance, nor price and quality, nor smell and taste. Yet the one usually informs about the other. One can prove by induction that one cannot give a pop quiz. On close examination this means one cannot guarantee the quiz is a surprise. Yet most of the time it will be. It sounds like Morey et al. show that there exist cases where the width is uninformative. Although that is sufficient to claim "There is no necessary connection between the precision of an estimate and the size of a confidence interval", it is not sufficient to further conclude that CIs generally contain no information about precision. Merely that they are not guaranteed to do so. (Insufficient points to +1 @Bey's answer.)
7,922
How do I calculate confidence intervals for a non-normal distribution?
Yes, the bootstrap is an alternative for obtaining confidence intervals for the mean (and you have to make a bit of effort if you want to understand the method). The idea is as follows:

1. Resample with replacement B times.
2. For each of these samples, calculate the sample mean.
3. Calculate an appropriate bootstrap confidence interval.

Concerning the last step, there are several types of bootstrap confidence interval (BCI). The following references present a discussion on the properties of different types of BCI:

http://staff.ustc.edu.cn/~zwp/teach/Stat-Comp/Efron_Bootstrap_CIs.pdf
http://www.tau.ac.il/~saharon/Boot/10.1.1.133.8405.pdf

It is good practice to calculate several BCI and try to understand possible discrepancies between them. In R, you can easily implement this idea using the package 'boot' as follows:

rm(list = ls())

# Simulated data
set.seed(123)
data0 = rgamma(383, 5, 3)
mean(data0)   # Sample mean
hist(data0)   # Histogram of the data

library(boot)

# Function to obtain the mean
Bmean <- function(data, indices) {
  d <- data[indices]  # allows boot to select sample
  return(mean(d))
}

# Bootstrapping with 1000 replications
results <- boot(data = data0, statistic = Bmean, R = 1000)

# View results
results
plot(results)

# Get 95% confidence interval
boot.ci(results, type = c("norm", "basic", "perc", "bca"))
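The same three steps can also be sketched without the boot package. Here is a stdlib-only Python version of the percentile interval; the function name and the simulated gamma sample are illustrative, mirroring the R example:

```python
import random
import statistics

def percentile_bootstrap_ci(data, n_boot=2000, alpha=0.05):
    """Percentile bootstrap CI for the mean: resample with replacement,
    record each resample's mean, take the alpha/2 and 1-alpha/2 quantiles."""
    n = len(data)
    means = sorted(
        statistics.fmean(random.choices(data, k=n)) for _ in range(n_boot)
    )
    lo = means[int((alpha / 2) * n_boot)]
    hi = means[int((1 - alpha / 2) * n_boot) - 1]
    return lo, hi

random.seed(123)
# illustrative sample of n = 383 gamma draws with mean 5 * (1/3) = 5/3
sample = [random.gammavariate(5, 1 / 3) for _ in range(383)]
lo, hi = percentile_bootstrap_ci(sample)
```

Comparing this with the other boot.ci types on the same data is a quick way to see the discrepancies mentioned above.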
7,923
How do I calculate confidence intervals for a non-normal distribution?
Another standard alternative is to calculate the CI with the Wilcoxon test. In R:

wilcox.test(your_data, conf.int = TRUE, conf.level = 0.95)

Unfortunately, it gives you the CI around the (pseudo)median, not the mean, but then if the data are heavily non-normal maybe the median is a more informative measure.
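For intuition about what that interval is centred on: the pseudomedian is the Hodges-Lehmann estimate, the median of all pairwise (Walsh) averages of the data. A small stdlib-Python sketch (the sample values are made up to show its robustness against an outlier):

```python
import itertools
import statistics

def pseudomedian(x):
    # Hodges-Lehmann estimate: median of all Walsh averages (i <= j pairs),
    # the location the Wilcoxon signed-rank CI is centred on
    walsh = [(a + b) / 2
             for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(walsh)

data = [1, 2, 3, 100]
print(pseudomedian(data))       # 2.75, barely moved by the outlier
print(statistics.fmean(data))   # 26.5, dragged far away by it
```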
7,924
How do I calculate confidence intervals for a non-normal distribution?
You can just use a standard confidence interval for the mean: bear in mind that when we calculate confidence intervals for the mean, we can appeal to the central limit theorem and use the standard interval (using the critical points of the t-distribution), even if the underlying data are non-normal. In fact, so long as the data are IID (independent and identically distributed) and the distribution of the data has finite variance, the distribution of the sample mean with $n=383$ observations should be virtually indistinguishable from a normal distribution. This will be the case even if the underlying distribution of the data is extremely different from a normal distribution.
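A minimal stdlib-Python sketch of this standard interval (at n = 383 the t critical value is effectively the normal one, which NormalDist supplies; the exponential sample is just an illustrative non-normal data set):

```python
import math
import random
import statistics

def mean_ci(data, conf=0.95):
    """Large-sample CI for the mean: xbar +/- z * s / sqrt(n).
    At n = 383 the t and z critical values agree to two decimals."""
    n = len(data)
    m = statistics.fmean(data)
    se = statistics.stdev(data) / math.sqrt(n)
    z = statistics.NormalDist().inv_cdf(1 - (1 - conf) / 2)  # ~1.96 for 95%
    return m - z * se, m + z * se

random.seed(1)
skewed = [random.expovariate(1.0) for _ in range(383)]  # strongly non-normal
lo, hi = mean_ci(skewed)
```

Despite the heavy skew of the raw data, the interval behaves well because the sampling distribution of the mean is close to normal at this sample size.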
7,925
How do I calculate confidence intervals for a non-normal distribution?
For log-normal data, Olsson (2005) suggests a 'modified Cox method'. If $X$ is log-normally distributed and $\rm{E}(X) = \theta$, a confidence interval for $\log(\theta)$ is: $$ \bar{Y} + \frac{S^2}{2} \pm t_{df}\sqrt{\frac{S^2}{n} + \frac{S^4}{2(n-1)} } $$ where $Y = \log(X)$, the sample mean of $Y$ is $\bar{Y}$ and the sample variance of $Y$ is $S^2$. For df, use $n-1$. An R function is below:

ModifiedCox <- function(x){
  n <- length(x)
  y <- log(x)
  y.m <- mean(y)
  y.var <- var(y)
  my.t <- qt(0.975, df = n - 1)
  my.mean <- mean(x)
  upper <- y.m + y.var/2 + my.t*sqrt(y.var/n + y.var^2/(2*(n - 1)))
  lower <- y.m + y.var/2 - my.t*sqrt(y.var/n + y.var^2/(2*(n - 1)))
  return(list(upper = exp(upper), mean = my.mean, lower = exp(lower)))
}

Repeating the example from Olsson's paper:

CO.level <- c(12.5, 20, 4, 20, 25, 170, 15, 20, 15)
ModifiedCox(CO.level)
$upper
[1] 78.72254
$mean
[1] 33.5
$lower
[1] 12.30929
7,926
What distribution does my data follow?
The thing is that real data doesn't necessarily follow any particular distribution you can name ... and indeed it would be surprising if it did. So while I could name a dozen possibilities, the actual process generating these observations probably won't be anything that I could suggest either. As sample size increases, you will likely be able to reject any well-known distribution. Parametric distributions are often a useful fiction, not a perfect description. Let's at least look at the log-data, first in a normal qqplot and then as a kernel density estimate to see how it appears: Note that in a Q-Q plot done this way around, the flattest sections of slope are where you tend to see peaks. This has a clear suggestion of a peak near 6 and another about 12.3. The kernel density estimate of the log shows the same thing: In both cases, the indication is that the distribution of the log time is right skew, but it's not clearly unimodal. Clearly the main peak is somewhere around the 5 minute mark. It may be that there's a second small peak in the log-time density, that appears to be somewhere in the region of perhaps 60 hours. Perhaps there are two very qualitatively different "types" of repair, and your distribution is reflecting a mix of two types. Or just maybe once a repair hits a full day of work, it tends to just take a longer time (that is, rather than reflecting a peak at just over a week, it may reflect an anti-peak at just over a day - once you get longer than just under a day to repair, jobs tend to 'slow down'). Even the log of the log of the time is somewhat right skew. Let's look at a stronger transformation, where the second peak is quite clear - minus the inverse of the fourth root of time: The marked lines are at 5 minutes (blue) and 60 hours (dashed green); as you see, there's a peak just below 5 minutes and another somewhere above 60 hours. 
Note that the upper "peak" is out at about the 95th percentile and won't necessarily be close to a peak in the untransformed distribution. There's also a suggestion of another dip around 7.5 minutes with a broad peak between 10 and 20 minutes, which might suggest a very slight tendency to 'round up' in that region (not that there's necessarily anything untoward going on; even if there's no dip/peak in inherent job time there, it could even be something as simple as a function of human ability to focus in one unbroken period for more than a few minutes). It looks to me like a two-component (two-peak) or maybe three-component mixture of right-skew distributions would describe the process reasonably well, but it would not be a perfect description. The logspline package seems to pick four peaks in log(time), with peaks near 30, 270, 900 and 270K seconds (30s, 4.5m, 15m and 75h). Using logspline with other transforms generally finds 4 peaks, but with slightly different centers (when translated to the original units); this is to be expected with transformations.
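Both transforms used in this answer are monotone increasing, so they only stretch and compress the axis without reordering any observations. A small sketch of the two (assuming time is measured in seconds; the two marked reference times, 5 minutes and 60 hours, are the ones quoted above):

```python
import math

def log_time(t):
    # first transform discussed: log of repair time
    return math.log(t)

def strong_transform(t):
    # stronger transform: minus the inverse of the fourth root of time
    return -1.0 / t ** 0.25

five_min, sixty_h = 5 * 60, 60 * 3600
marks = [strong_transform(five_min), strong_transform(sixty_h)]
# both transforms are increasing, so peaks keep their order across scales
```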
7,927
What distribution does my data follow?
The descdist function has an option to bootstrap your distribution to get a sense of the precision associated with the estimate plotted. You might try that:

descdist(time_to_repair, boot = 1000)

My guess is that your data are consistent with more than just the beta distribution. In general, the beta distribution is the distribution of continuous proportions or probabilities. For example, the distribution of p-values from a t-test would be some specific case of a beta distribution, depending on whether the null hypothesis is true and the amount of power your analysis has. I find it extremely unlikely that the distribution of your times to repair would actually be beta. Note that that graph is only comparing the skew and kurtosis of your data to the specified distribution. The beta is bounded by 0 and 1; I'll bet your data aren't, but that graph isn't checking that fact. On the other hand, the Weibull distribution is common for lag times. From eyeballing the figure (without the boot samples plotted to gauge the uncertainty), I suspect your data are consistent with a Weibull. You could also check whether your data are Weibull, I believe, using qqPlot from the car package to make a qq-plot.
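The Weibull qq-plot check can also be done by hand: compute theoretical quantiles from the Weibull inverse CDF at plotting positions and pair them with the sorted data; a roughly straight line supports the Weibull. A stdlib-Python sketch (the shape/scale values you would plug in are placeholders, e.g. from a prior fit):

```python
import math

def weibull_quantile(p, shape, scale=1.0):
    # inverse CDF of the Weibull: scale * (-log(1 - p))^(1/shape)
    return scale * (-math.log(1.0 - p)) ** (1.0 / shape)

def qq_points(data, shape, scale=1.0):
    """Pairs (theoretical, observed); roughly linear if the data are Weibull."""
    xs = sorted(data)
    n = len(xs)
    return [(weibull_quantile((i + 0.5) / n, shape, scale), x)
            for i, x in enumerate(xs)]
```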
7,928
What distribution does my data follow?
For what it is worth, using Mathematica's FindDistribution routine, the logarithms are very approximately a mixture of two normal distributions. That is, with $x=\ln(\text{data})$, $$f(x)=0.0585522 e^{-0.33781 (x-11.7025)^2}+0.229776 e^{-0.245814 (x-6.66864)^2}$$ Using 3 distributions to make a mixture distribution, this can be $$f(x)=0.560456\text{ Laplace}(5.85532,0.59296)+0.312384\text{ LogNormal}(2.08338,0.122309)+0.12716\text{ Normal}(11.6327,1.02011) \,,$$ which numerically is $$\begin{array}{cc} \Bigg\{ & \begin{array}{ll} 0.472592 e^{-1.68646 (5.85532\, -x)}\, +0.0497292 e^{-0.480476 (x-11.6327)^2} & x\leq 0 \\ 0.472592 e^{-1.68646 (5.85532\, -x)}+0.0497292 e^{-0.480476 (x-11.6327)^2}+\frac{1.01893 }{x}e^{-33.4238 (\ln (x)-2.08338)^2} & 0<x<5.85532 \\ 0.472592 e^{-1.68646 (x-5.85532)}+0.0497292 e^{-0.480476 (x-11.6327)^2}+\frac{1.01893 }{x}e^{-33.4238 (\ln (x)-2.08338)^2} & \text{Otherwise} \\ \end{array} \\ \end{array}$$ There are many other possibilities. For example, fitting three normal distributions to the 1/10$^\text{th}$ power of the data. For Mathematica code and further methods, see this link.
7,929
How to get started with neural networks
Neural networks have been around for a while, and they've changed dramatically over the years. If you only poke around on the web, you might end up with the impression that "neural network" means a multi-layer feedforward network trained with back-propagation. Or, you might come across any of the dozens of rarely used, bizarrely named models and conclude that neural networks are more of a zoo than a research project. Or that they're a novelty. Or... I could go on.

If you want a clear explanation, I'd listen to Geoffrey Hinton. He has been around forever and (therefore?) does a great job weaving all the disparate models he's worked on into one cohesive, intuitive (and sometimes theoretical) historical narrative. On his homepage, there are links to Google Tech Talks and Videolectures.net lectures he has done (on RBMs and Deep Learning, among others).

The way I see it, here's a historical and pedagogical road map to understanding neural networks, from their inception to the state of the art:

Perceptrons
- Easy to understand
- Severely limited

Multi-layer networks, trained by back-propagation
- Many resources to learn these
- Don't generally do as well as SVMs

Boltzmann machines
- Interesting way of thinking about the stability of a recurrent network in terms of "energy"
- Look at Hopfield networks if you want an easy-to-understand (but not very practical) example of recurrent networks with "energy"
- Theoretically interesting, useless in practice (training about the same speed as continental drift)

Restricted Boltzmann machines
- Useful!
- Build off of the theory of Boltzmann machines
- Some good introductions on the web

Deep belief networks
- So far as I can tell, this is a class of multi-layer RBMs for doing semi-supervised learning
- Some resources
How to get started with neural networks
I highly recommend watching these lectures and using this as reading material. The lectures are on machine learning in general; Andrew Ng talks at length about neural networks and tries hard to make them accessible for beginners.
How to get started with neural networks
These are, in my opinion, very good books:

- R. Rojas: Neural Networks
- C. M. Bishop: Neural Networks for Pattern Recognition

The books have some similarities: they are both around 500 pages long, and they are fairly old, from 1995. Nevertheless, they remain very useful. Both books start from scratch, by explaining what neural networks are. They provide clear explanations, good examples and good graphs to aid understanding. They explain in great detail the issues of training neural networks, in their many shapes and forms, and what they can and cannot do. The two books supplement each other very nicely: what one cannot figure out with one book, one tends to find in the other.

Rojas has a section, which I particularly like, about implementing back-propagation over many layers in matrix form. It also has a nice section about fuzzy logic, and one about complexity theory. But then Bishop has lots of other nice sections. Rojas is, I would say, the more accessible; Bishop is more mathematical and perhaps more sophisticated. In both books, the maths is mostly linear algebra and calculus of functions of multiple variables (partial derivatives and so on). Without any knowledge of these subjects, you probably would not find either of these books very illuminating. I would recommend reading Rojas first.

Both books, obviously, have a lot to say about algorithms, but neither says much about specific implementations in code. To me, these books provide the background that makes an online course (such as Hinton's, on Coursera) understandable. The books also cover much more ground, and in far greater detail, than can be done online.

I hope this helps, and I am happy to answer any questions about the books.
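To give a flavour of the matrix-form back-propagation that the Rojas section covers, here is a minimal one-hidden-layer sketch (my own illustration, not code from either book; the layer sizes, learning rate, and random data are arbitrary):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))          # 8 samples, 3 inputs
T = rng.normal(size=(8, 2))          # 2 real-valued targets per sample
W1 = 0.5 * rng.normal(size=(3, 4))   # input -> hidden weights
W2 = 0.5 * rng.normal(size=(4, 2))   # hidden -> (linear) output weights

def loss():
    return 0.5 * np.mean((sigmoid(X @ W1) @ W2 - T) ** 2)

loss0 = loss()
for step in range(500):
    # Forward pass: all samples at once, as matrix products.
    H = sigmoid(X @ W1)              # hidden activations, shape (8, 4)
    E = H @ W2 - T                   # output error, shape (8, 2)
    # Backward pass: the error propagates through the transposed weights.
    dW2 = H.T @ E / len(X)
    dH = (E @ W2.T) * H * (1 - H)    # chain rule through the sigmoid
    dW1 = X.T @ dH / len(X)
    W1 -= 0.1 * dW1
    W2 -= 0.1 * dW2
loss1 = loss()
print(loss0, "->", loss1)            # the squared error shrinks over training
```

The point of the matrix formulation is that the whole backward pass is just a handful of matrix products and element-wise multiplications, which generalizes directly to more layers.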
How to get started with neural networks
As other people have pointed out, there are a lot of (good) resources online, and I have personally done some of them:

- Ng's Intro to ML class on Coursera
- Hinton's Neural Networks class on Coursera
- Ng's deep learning tutorial
- reading the relevant chapters in the original Parallel Distributed Processing

I want to draw attention to the fact that these expositions mostly follow the classical treatment where layers (summation and non-linearity together) are the basic units. The more popular and more flexible treatment, implemented in most libraries such as torch-nn and tensorflow, now uses a computation graph with auto-differentiation to achieve high modularity. Conceptually it is simpler and more liberating. I would highly recommend the excellent Stanford CS231n open course for this treatment. For a rigorous, learning-theoretic treatment, you may want to consult Neural Networks by Anthony and Bartlett.
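To illustrate what "computation graph with auto-differentiation" means, here is a toy reverse-mode sketch (my own illustration; real libraries such as torch-nn and tensorflow are far more sophisticated, but the idea is the same: operations record a graph, and gradients flow backwards through it):

```python
import math

class Node:
    """A value in the graph, remembering its parents and local gradients."""
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents   # list of (parent_node, local_gradient)
        self.grad = 0.0

    def __mul__(self, other):
        return Node(self.value * other.value,
                    [(self, other.value), (other, self.value)])

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1.0), (other, 1.0)])

def sin(x):
    return Node(math.sin(x.value), [(x, math.cos(x.value))])

def backward(out):
    # Reverse pass: push each path's gradient contribution down to the leaves.
    stack = [(out, 1.0)]
    while stack:
        node, upstream = stack.pop()
        node.grad += upstream
        for parent, local in node.parents:
            stack.append((parent, upstream * local))

x = Node(2.0)
w = Node(3.0)
y = w * x + sin(x)   # building y implicitly builds the graph
backward(y)
print(x.grad)        # dy/dx = w + cos(x) = 3 + cos(2)
print(w.grad)        # dy/dw = x = 2
```

The modularity CS231n emphasizes comes from this structure: each operation only needs to know its own local gradient, and arbitrary architectures compose for free.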
How to get started with neural networks
If you want a treatment from a more statistical viewpoint, have a look at Brian Ripley's "Pattern Recognition and Neural Networks". This book isn't introductory and presupposes some statistical background. http://www.stats.ox.ac.uk/~ripley/PRbook/
How to get started with neural networks
I have created a web application that supports your learning process in the field of neural networks: https://blueneurons.ch/nn

You can play around with the settings (architecture, activation functions, training settings) and observe how the settings affect the predictions. All datasets have preconfigured values that can be adopted. It is also possible to create your own datasets. Instructions and explanations of the implemented elements: User Guide
How to get started with neural networks
I'll throw my hat into the ring.

1. Read/listen to multiple explanations from different people.
2. Master the Perceptron before you attempt to learn Multilayer Perceptrons (i.e. neural networks).
3. As you learn concepts, try to implement them in code, from scratch.
4. Keep a few toy datasets and problems in your pocket for testing your understanding and your code.
5. Attempt to explain your knowledge to other people (for example, by answering questions on Cross Validated).

In regards to 5, when I learned neural networks, I created a video lecture series about them.
Encoding Angle Data for Neural Network
Introduction

I find this question really interesting. I assume someone has put out a paper on it, but it's my day off, so I don't want to go chasing references.

We could consider it as a representation/encoding of the output, which is what I do in this answer. I remain thinking that there is a better way, where you can just use a slightly different loss function (perhaps the sum of squared differences, using subtraction modulo $2\pi$). But onwards with the actual answer.

Method

I propose that an angle $\theta$ be represented as a pair of values, its sine and its cosine.

So the encoding function is: $\theta \mapsto (\sin(\theta), \cos(\theta))$
and the decoding function is: $(y_1,y_2) \mapsto \arctan\!2(y_1,y_2)$
(where arctan2 is the two-argument inverse tangent, preserving direction in all quadrants).

You could, in theory, equivalently work directly with the angles if your tool supported atan2 as a layer function (taking exactly 2 inputs and producing 1 output). TensorFlow does this now, and supports gradient descent on it, though it is not intended for this use. I investigated using out = atan2(sigmoid(ylogit), sigmoid(xlogit)) with a loss function min((pred - out)^2, (pred - out - 2pi)^2). I found that it trained far worse than using outs = tanh(ylogit), outc = tanh(xlogit) with a loss function 0.5((sin(pred) - outs)^2 + (cos(pred) - outc)^2), which I think can be attributed to the gradient being discontinuous for atan2. My testing here runs it as a preprocessing function.

To evaluate this I defined a task: given a black and white image representing a single line on a blank background, output what angle that line is at to the "positive x-axis".

I implemented a function to randomly generate these images, with lines at random angles (NB: earlier versions of this post used random slopes, rather than random angles. Thanks to @Ari Herman for pointing it out. It is now fixed).

I constructed several neural networks to evaluate their performance on the task.
The full details of the implementation are in this Jupyter notebook. The code is all in Julia, and I make use of the Mocha neural network library.

For comparison, I present it against the alternative methods of scaling to 0-1, and of putting the angle into 500 bins and using soft-label softmax. I am not particularly happy with the last, and feel I need to tweak it. Which is why, unlike the others, I only trial it for 1,000 iterations, vs the other two, which were run for 1,000 and for 10,000 iterations.

Experimental setup

Images were $101\times101$ pixels, with the line commencing at the center and going to the edge. There was no noise etc. in the image, just a "black" line on a white background.

For each trial, 1,000 training and 1,000 test images were generated randomly.

The evaluation network had a single hidden layer of width 500. Sigmoid neurons were used in the hidden layer. It was trained by stochastic gradient descent, with a fixed learning rate of 0.01 and a fixed momentum of 0.9. No regularization or dropout was used, nor was any kind of convolution etc. It is a simple network, which I hope suggests that these results will generalize.

It is very easy to tweak these parameters in the test code, and I encourage people to do so (and to look for bugs in the test).
Results

My results are as follows:

|                       | 500 bins     | scaled to 0-1  | Sin/Cos      | scaled to 0-1  | Sin/Cos      |
|                       | 1,000 iter   | 1,000 iter     | 1,000 iter   | 10,000 iter    | 10,000 iter  |
|-----------------------|--------------|----------------|--------------|----------------|--------------|
| mean_error            | 0.4711263342 | 0.2225284486   | 2.099914718  | 0.1085846429   | 2.1036656318 |
| std(errors)           | 1.1881991421 | 0.4878383767   | 1.485967909  | 0.2807570442   | 1.4891605068 |
| minimum(errors)       | 1.83E-006    | 1.82E-005      | 9.66E-007    | 1.92E-006      | 5.82E-006    |
| median(errors)        | 0.0512168533 | 0.1291033982   | 1.8440767072 | 0.0562908143   | 1.8491085947 |
| maximum(errors)       | 6.0749693965 | 4.9283551248   | 6.2593307366 | 3.735884823    | 6.2704853962 |
| accuracy              | 0.00%        | 0.00%          | 0.00%        | 0.00%          | 0.00%        |
| accuracy_to_point001  | 2.10%        | 0.30%          | 3.70%        | 0.80%          | 12.80%       |
| accuracy_to_point01   | 21.90%       | 4.20%          | 37.10%       | 8.20%          | 74.60%       |
| accuracy_to_point1    | 59.60%       | 35.90%         | 98.90%       | 72.50%         | 99.90%       |

Where I refer to error, this is the absolute value of the difference between the angle output by the neural network and the true angle. So the mean error (for example) is the average over the 1,000 test cases of this difference, etc. (I am not sure that I should not be rescaling it by making an error of, say, $\frac{7\pi}{4}$ equal to an error of $\frac{\pi}{4}$.)

I also present the accuracy at various levels of granularity. The accuracy is the portion of test cases it got correct. So accuracy_to_point01 means that a case was counted as correct if the output was within 0.01 of the true angle. None of the representations got any perfect results, but that is not at all surprising given how floating-point math works.

If you take a look at the history of this post you will see the results do have a bit of noise to them, slightly different each time I rerun it. But the general order and scale of values remains the same, thus allowing us to draw some conclusions.
Discussion

Binning with softmax performs by far the worst; as I said, I am not sure I didn't screw up something in the implementation. It does perform marginally above the guess rate, though: if it were just guessing, we would be getting a mean error of $\pi$.

The sin/cos encoding performs significantly better than the scaled 0-1 encoding. The improvement is to the extent that at 1,000 training iterations sin/cos is performing about 3 times better on most metrics than scaling is at 10,000 iterations. I think, in part, this is related to improving generalization, as both were getting fairly similar mean squared error on the training set, at least once 10,000 iterations were run.

There is certainly an upper limit on the best possible performance at this task, given that the angle could be more or less any real number, but not all such angles produce different lines at the resolution of $101\times101$ pixels. So since, for example, the angles 45.0 and 45.0000001 are both tied to the same image at that resolution, no method will ever get both perfectly correct. It also seems likely that to move beyond this performance on an absolute scale, a better neural network is needed, rather than the very simple one outlined above in the experimental setup.

Conclusion

It seems that the sin/cos representation is by far the best of the representations I investigated here. This does make sense, in that it has a smooth value as you move around the circle. I also like that the inverse can be done with arctan2, which is elegant.

I believe the task presented is sufficient in its ability to present a reasonable challenge for the network. Though I guess really it is just learning to do curve fitting to $f(x)=\frac{y_1}{y_2} x$, so perhaps it is too easy. And perhaps worse, it may be favouring the paired representation. I don't think it is, but it is getting late here, so I might have missed something.

I invite you again to look over my code. Suggest improvements, or alternative tasks.
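The encoding, decoding, and the wrap-around error question raised above can be sketched as follows (a minimal illustration in Python/numpy, not the notebook's Julia code):

```python
import numpy as np

def encode(theta):
    """Angle -> (sin, cos) pair: a smooth, unambiguous representation."""
    return np.array([np.sin(theta), np.cos(theta)])

def decode(y):
    """(sin, cos) pair -> angle in [0, 2*pi), via the quadrant-aware arctan2."""
    theta = np.arctan2(y[0], y[1])
    return theta if theta >= 0 else theta + 2 * np.pi

def angular_error(pred, true):
    """Wrap-around absolute error: being 7*pi/4 off counts as pi/4 off."""
    d = abs(pred - true) % (2 * np.pi)
    return min(d, 2 * np.pi - d)

theta = 5.5                                # some angle in [0, 2*pi)
assert abs(decode(encode(theta)) - theta) < 1e-12
print(angular_error(7 * np.pi / 4, 0.0))   # pi/4, not 7*pi/4
```

Using `angular_error` instead of the plain absolute difference is the rescaling the Results section wonders about; it would shrink the reported errors for predictions that land on the wrong side of the 0/2π seam.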
Encoding Angle Data for Neural Network
Here's another Python implementation comparing Lyndon White's proposed encoding to a binned approach. The code below produced the following output:

Training Size: 100
Training Epochs: 100
Encoding: cos_sin
Test Error: 0.017772154610047136
Encoding: binned
Test Error: 0.043398792553251526

Training Size: 100
Training Epochs: 500
Encoding: cos_sin
Test Error: 0.015376604917819397
Encoding: binned
Test Error: 0.032942592915322394

Training Size: 1000
Training Epochs: 100
Encoding: cos_sin
Test Error: 0.007544091937411164
Encoding: binned
Test Error: 0.012796594492198667

Training Size: 1000
Training Epochs: 500
Encoding: cos_sin
Test Error: 0.0038051515079569097
Encoding: binned
Test Error: 0.006180633805557207

As you can see, while the binned approach performs admirably in this toy task, the $(\sin(\theta), \cos(\theta))$ encoding performs better in all training configurations, sometimes by a considerable margin. I suspect that as the specific task became more complex, the benefits of using Lyndon White's $(\sin(\theta), \cos(\theta))$ representation would become more pronounced.
```python
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_out):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_size, num_out)

    def forward(self, x):
        out = self.fc1(x)
        out = self.sigmoid(out)
        out = self.fc2(out)
        return out


def gen_train_image(angle, side, thickness):
    # Draw a single line from the image center out to the edge, at `angle`.
    image = np.zeros((side, side))
    (x_0, y_0) = (side / 2, side / 2)
    (c, s) = (np.cos(angle), np.sin(angle))
    for y in range(side):
        for x in range(side):
            if (abs((x - x_0) * c + (y - y_0) * s) < thickness / 2) and (
                    -(x - x_0) * s + (y - y_0) * c > 0):
                image[x, y] = 1
    return image.flatten()


def gen_data(num_samples, side, num_bins, thickness):
    # Random angles, with both target encodings generated for each image.
    angles = 2 * np.pi * np.random.uniform(size=num_samples)
    X = [gen_train_image(angle, side, thickness) for angle in angles]
    X = np.stack(X)
    y = {"cos_sin": [], "binned": []}
    bin_size = 2 * np.pi / num_bins
    for angle in angles:
        idx = int(angle / bin_size)
        y["binned"].append(idx)
        y["cos_sin"].append(np.array([np.cos(angle), np.sin(angle)]))
    for enc in y:
        y[enc] = np.stack(y[enc])
    return (X, y, angles)


def get_model_stuff(train_y, input_size, hidden_size, output_sizes,
                    learning_rate, momentum):
    nets = {}
    optimizers = {}
    for enc in train_y:
        net = Net(input_size, hidden_size, output_sizes[enc])
        nets[enc] = net.to(device)
        optimizers[enc] = torch.optim.SGD(net.parameters(), lr=learning_rate,
                                          momentum=momentum)
    # Cross-entropy for the binned targets, MSE for the (cos, sin) pairs.
    criterions = {"binned": nn.CrossEntropyLoss(), "cos_sin": nn.MSELoss()}
    return (nets, optimizers, criterions)


def get_train_loaders(train_X, train_y, batch_size):
    train_X_tensor = torch.Tensor(train_X)
    train_loaders = {}
    for enc in train_y:
        if enc == "binned":
            train_y_tensor = torch.tensor(train_y[enc], dtype=torch.long)
        else:
            train_y_tensor = torch.tensor(train_y[enc], dtype=torch.float)
        dataset = torch.utils.data.TensorDataset(train_X_tensor, train_y_tensor)
        train_loader = torch.utils.data.DataLoader(dataset=dataset,
                                                   batch_size=batch_size,
                                                   shuffle=True)
        train_loaders[enc] = train_loader
    return train_loaders


def show_image(image, side):
    img = plt.imshow(np.reshape(image, (side, side)),
                     interpolation="nearest", cmap="Greys")
    plt.show()


def main():
    side = 101
    input_size = side ** 2
    thickness = 5.0
    hidden_size = 500
    learning_rate = 0.01
    momentum = 0.9
    num_bins = 500
    bin_size = 2 * np.pi / num_bins
    half_bin_size = bin_size / 2
    batch_size = 50
    output_sizes = {"binned": num_bins, "cos_sin": 2}
    num_test = 1000

    (test_X, test_y, test_angles) = gen_data(num_test, side, num_bins, thickness)
    for num_train in [100, 1000]:
        (train_X, train_y, train_angles) = gen_data(num_train, side, num_bins,
                                                    thickness)
        train_loaders = get_train_loaders(train_X, train_y, batch_size)
        for epochs in [100, 500]:
            (nets, optimizers, criterions) = get_model_stuff(
                train_y, input_size, hidden_size, output_sizes,
                learning_rate, momentum)
            for enc in train_y:
                optimizer = optimizers[enc]
                net = nets[enc]
                criterion = criterions[enc]
                for epoch in range(epochs):
                    for (i, (images, ys)) in enumerate(train_loaders[enc]):
                        optimizer.zero_grad()
                        outputs = net(images.to(device))
                        loss = criterion(outputs, ys.to(device))
                        loss.backward()
                        optimizer.step()

            print("Training Size: {0}".format(num_train))
            print("Training Epochs: {0}".format(epochs))
            for enc in train_y:
                net = nets[enc]
                preds = net(torch.tensor(test_X, dtype=torch.float).to(device))
                if enc == "binned":
                    # Decode a bin index back to the angle at the bin's center.
                    pred_bins = np.array(
                        preds.argmax(dim=1).detach().cpu().numpy(), dtype=float)
                    pred_angles = bin_size * pred_bins + half_bin_size
                else:
                    # Decode (cos, sin) back to an angle in [0, 2*pi).
                    pred_angles = torch.atan2(
                        preds[:, 1], preds[:, 0]).detach().cpu().numpy()
                    pred_angles[pred_angles < 0] = (
                        pred_angles[pred_angles < 0] + 2 * np.pi)
                print("Encoding: {0}".format(enc))
                print("Test Error: {0}".format(
                    np.abs(pred_angles - test_angles).mean()))
                print()


if __name__ == "__main__":
    main()
```
Encoding Angle Data for Neural Network
Here's another Python implementation comparing Lyndon White's proposed encoding to a binned approach. The code below produced the following output:

Training Size: 100
Training Epochs: 100
Encoding: cos_sin
Test Error: 0.017772154610047136
Encoding: binned
Test Error: 0.043398792553251526

Training Size: 100
Training Epochs: 500
Encoding: cos_sin
Test Error: 0.015376604917819397
Encoding: binned
Test Error: 0.032942592915322394

Training Size: 1000
Training Epochs: 100
Encoding: cos_sin
Test Error: 0.007544091937411164
Encoding: binned
Test Error: 0.012796594492198667

Training Size: 1000
Training Epochs: 500
Encoding: cos_sin
Test Error: 0.0038051515079569097
Encoding: binned
Test Error: 0.006180633805557207

As you can see, while the binned approach performs admirably in this toy task, the $(\sin(\theta), \cos(\theta))$ encoding performs better in all training configurations, sometimes by a considerable margin. I suspect as the specific task became more complex, the benefits of using Lyndon White's $(\sin(\theta), \cos(\theta))$ representation would become more pronounced.
import matplotlib.pyplot as plt
import numpy as np
import torch
import torch.nn as nn
import torch.utils.data

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")


class Net(nn.Module):
    def __init__(self, input_size, hidden_size, num_out):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(input_size, hidden_size)
        self.sigmoid = nn.Sigmoid()
        self.fc2 = nn.Linear(hidden_size, num_out)

    def forward(self, x):
        out = self.fc1(x)
        out = self.sigmoid(out)
        out = self.fc2(out)
        return out


def gen_train_image(angle, side, thickness):
    image = np.zeros((side, side))
    (x_0, y_0) = (side / 2, side / 2)
    (c, s) = (np.cos(angle), np.sin(angle))
    for y in range(side):
        for x in range(side):
            if (abs((x - x_0) * c + (y - y_0) * s) < thickness / 2) and (
                    -(x - x_0) * s + (y - y_0) * c > 0):
                image[x, y] = 1
    return image.flatten()


def gen_data(num_samples, side, num_bins, thickness):
    angles = 2 * np.pi * np.random.uniform(size=num_samples)
    X = [gen_train_image(angle, side, thickness) for angle in angles]
    X = np.stack(X)

    y = {"cos_sin": [], "binned": []}
    bin_size = 2 * np.pi / num_bins
    for angle in angles:
        idx = int(angle / bin_size)
        y["binned"].append(idx)
        y["cos_sin"].append(np.array([np.cos(angle), np.sin(angle)]))
    for enc in y:
        y[enc] = np.stack(y[enc])
    return (X, y, angles)


def get_model_stuff(train_y, input_size, hidden_size, output_sizes,
                    learning_rate, momentum):
    nets = {}
    optimizers = {}
    for enc in train_y:
        net = Net(input_size, hidden_size, output_sizes[enc])
        nets[enc] = net.to(device)
        optimizers[enc] = torch.optim.SGD(net.parameters(), lr=learning_rate,
                                          momentum=momentum)
    criterions = {"binned": nn.CrossEntropyLoss(), "cos_sin": nn.MSELoss()}
    return (nets, optimizers, criterions)


def get_train_loaders(train_X, train_y, batch_size):
    train_X_tensor = torch.Tensor(train_X)
    train_loaders = {}
    for enc in train_y:
        if enc == "binned":
            train_y_tensor = torch.tensor(train_y[enc], dtype=torch.long)
        else:
            train_y_tensor = torch.tensor(train_y[enc], dtype=torch.float)
        dataset = torch.utils.data.TensorDataset(train_X_tensor, train_y_tensor)
        train_loader = torch.utils.data.DataLoader(dataset=dataset,
                                                   batch_size=batch_size,
                                                   shuffle=True)
        train_loaders[enc] = train_loader
    return train_loaders


def show_image(image, side):
    img = plt.imshow(np.reshape(image, (side, side)),
                     interpolation="nearest", cmap="Greys")
    plt.show()


def main():
    side = 101
    input_size = side ** 2
    thickness = 5.0
    hidden_size = 500
    learning_rate = 0.01
    momentum = 0.9
    num_bins = 500
    bin_size = 2 * np.pi / num_bins
    half_bin_size = bin_size / 2
    batch_size = 50
    output_sizes = {"binned": num_bins, "cos_sin": 2}
    num_test = 1000

    (test_X, test_y, test_angles) = gen_data(num_test, side, num_bins,
                                             thickness)
    for num_train in [100, 1000]:
        (train_X, train_y, train_angles) = gen_data(num_train, side,
                                                    num_bins, thickness)
        train_loaders = get_train_loaders(train_X, train_y, batch_size)
        for epochs in [100, 500]:
            (nets, optimizers, criterions) = get_model_stuff(
                train_y, input_size, hidden_size, output_sizes,
                learning_rate, momentum)
            for enc in train_y:
                optimizer = optimizers[enc]
                net = nets[enc]
                criterion = criterions[enc]
                for epoch in range(epochs):
                    for (i, (images, ys)) in enumerate(train_loaders[enc]):
                        optimizer.zero_grad()
                        outputs = net(images.to(device))
                        loss = criterion(outputs, ys.to(device))
                        loss.backward()
                        optimizer.step()

            print("Training Size: {0}".format(num_train))
            print("Training Epochs: {0}".format(epochs))
            for enc in train_y:
                net = nets[enc]
                preds = net(torch.tensor(test_X, dtype=torch.float).to(device))
                if enc == "binned":
                    # builtin float here: the np.float alias was removed in
                    # NumPy 1.24
                    pred_bins = np.array(
                        preds.argmax(dim=1).detach().cpu().numpy(),
                        dtype=float)
                    pred_angles = bin_size * pred_bins + half_bin_size
                else:
                    pred_angles = torch.atan2(
                        preds[:, 1], preds[:, 0]).detach().cpu().numpy()
                    pred_angles[pred_angles < 0] = (
                        pred_angles[pred_angles < 0] + 2 * np.pi)

                print("Encoding: {0}".format(enc))
                print("Test Error: {0}".format(
                    np.abs(pred_angles - test_angles).mean()))
            print()


if __name__ == "__main__":
    main()
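As a quick sanity check on the (cos, sin) targets and atan2 decoding used above, the round trip recovers angles in [0, 2π) exactly. This standalone snippet is my own illustration (the helper names are not from the answer's code):

```python
import numpy as np

def encode(angle):
    # Encode an angle as the point (cos(angle), sin(angle)) on the unit circle.
    return np.array([np.cos(angle), np.sin(angle)])

def decode(target):
    # arctan2 returns values in (-pi, pi]; shift negatives back into [0, 2*pi).
    angle = np.arctan2(target[1], target[0])
    return angle + 2 * np.pi if angle < 0 else angle

rng = np.random.default_rng(0)
angles = rng.uniform(0.0, 2.0 * np.pi, size=1000)
recovered = np.array([decode(encode(a)) for a in angles])
assert np.allclose(recovered, angles)
```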
7,938
Encoding Angle Data for Neural Network
Here is my Python version of your experiment. I kept many of the details of your implementation the same; in particular, I use the same image size, network layer sizes, learning rate, momentum, and success metrics. Each network tested has one hidden layer (size = 500) with logistic neurons. The output neurons are either linear or softmax as noted. I used 1,000 training images and 1,000 test images which were independently, randomly generated (so there may be repeats). Training consisted of 50 iterations through the training set.

I was able to get quite good accuracy using binning and "gaussian" encoding (a name I made up; similar to binning except that the target output vector has the form exp(-pi*([1,2,3,...,500] - idx)**2), where idx is the index corresponding to the correct angle). The code is below; here are my results:

Test error for (cos,sin) encoding:
1,000 training images, 1,000 test images, 50 iterations, linear output
Mean: 0.0911558142071
Median: 0.0429723541743
Minimum: 2.77769843793e-06
Maximum: 6.2608513539
Accuracy to 0.1: 85.2%
Accuracy to 0.01: 11.6%
Accuracy to 0.001: 1.0%

Test error for [-1,1] encoding:
1,000 training images, 1,000 test images, 50 iterations, linear output
Mean: 0.234181700523
Median: 0.17460197307
Minimum: 0.000473665840258
Maximum: 6.00637777237
Accuracy to 0.1: 29.9%
Accuracy to 0.01: 3.3%
Accuracy to 0.001: 0.1%

Test error for 1-of-500 encoding:
1,000 training images, 1,000 test images, 50 iterations, softmax output
Mean: 0.0298767021922
Median: 0.00388858079174
Minimum: 4.08712407829e-06
Maximum: 6.2784479965
Accuracy to 0.1: 99.6%
Accuracy to 0.01: 88.9%
Accuracy to 0.001: 13.5%

Test error for gaussian encoding:
1,000 training images, 1,000 test images, 50 iterations, softmax output
Mean: 0.0296905377463
Median: 0.00365867335107
Minimum: 4.08712407829e-06
Maximum: 6.2784479965
Accuracy to 0.1: 99.6%
Accuracy to 0.01: 90.8%
Accuracy to 0.001: 14.3%

I cannot figure out why our results seem to be in contradiction with one another, but it seems worth further investigation.

# -*- coding: utf-8 -*-
"""
Created on Mon Jun 13 16:59:53 2016

@author: Ari
"""

from numpy import round, zeros, sin, cos, arctan2, clip, pi, exp, dot, outer, array, shape, zeros_like, reshape, mean, median, max, min
from numpy.random import rand
import matplotlib.pyplot as plt

###########
# Functions
###########

# Returns a B&W image of a line represented as a binary vector of length width*height
def gen_train_image(angle, width, height, thickness):
    image = zeros((height, width))
    x_0, y_0 = width/2, height/2
    c, s = cos(angle), sin(angle)
    for y in range(height):
        for x in range(width):
            if abs((x-x_0)*c + (y-y_0)*s) < thickness/2 and -(x-x_0)*s + (y-y_0)*c > 0:
                image[x, y] = 1
    return image.flatten()

# Display training image
def display_image(image, height, width):
    img = plt.imshow(reshape(image, (height, width)), interpolation='nearest', cmap="Greys")
    plt.show()

# Activation function
def sigmoid(X):
    return 1.0/(1 + exp(-clip(X, -50, 100)))

# Returns encoded angle using specified method ("binned","scaled","cossin","gaussian")
def encode_angle(angle, method):
    if method == "binned":  # 1-of-500 encoding
        X = zeros(500)
        X[int(round(250*(angle/pi + 1))) % 500] = 1
    elif method == "gaussian":  # Leaky binned encoding
        X = array([i for i in range(500)])
        idx = 250*(angle/pi + 1)
        X = exp(-pi*(X - idx)**2)
    elif method == "scaled":  # Scaled to [-1,1] encoding
        X = array([angle/pi])
    elif method == "cossin":  # Oxinabox's (cos,sin) encoding
        X = array([cos(angle), sin(angle)])
    else:
        pass
    return X

# Returns decoded angle using specified method
def decode_angle(X, method):
    if method == "binned" or method == "gaussian":  # 1-of-500 or gaussian encoding
        M = max(X)
        for i in range(len(X)):
            if abs(X[i] - M) < 1e-5:
                angle = pi*i/250 - pi
                break
        # angle = pi*dot(array([i for i in range(500)]),X)/500  # Averaging
    elif method == "scaled":  # Scaled to [-1,1] encoding
        angle = pi*X[0]
    elif method == "cossin":  # Oxinabox's (cos,sin) encoding
        angle = arctan2(X[1], X[0])
    else:
        pass
    return angle

# Train and test neural network with specified angle encoding method
def test_encoding_method(train_images, train_angles, test_images, test_angles,
                         method, num_iters, alpha=0.01, alpha_bias=0.0001,
                         momentum=0.9, hid_layer_size=500):
    num_train, in_layer_size = shape(train_images)
    num_test = len(test_angles)

    if method == "binned":
        out_layer_size = 500
    elif method == "gaussian":
        out_layer_size = 500
    elif method == "scaled":
        out_layer_size = 1
    elif method == "cossin":
        out_layer_size = 2
    else:
        pass

    # Initial weights and biases
    IN_HID = rand(in_layer_size, hid_layer_size) - 0.5    # IN --> HID weights
    HID_OUT = rand(hid_layer_size, out_layer_size) - 0.5  # HID --> OUT weights
    BIAS1 = rand(hid_layer_size) - 0.5                    # Bias for hidden layer
    BIAS2 = rand(out_layer_size) - 0.5                    # Bias for output layer

    # Initial weight and bias updates
    IN_HID_del = zeros_like(IN_HID)
    HID_OUT_del = zeros_like(HID_OUT)
    BIAS1_del = zeros_like(BIAS1)
    BIAS2_del = zeros_like(BIAS2)

    # Train
    for j in range(num_iters):
        for i in range(num_train):
            # Get training example
            IN = train_images[i]
            TARGET = encode_angle(train_angles[i], method)

            # Feed forward and compute error derivatives
            HID = sigmoid(dot(IN, IN_HID) + BIAS1)

            if method == "binned" or method == "gaussian":  # Use softmax
                OUT = exp(clip(dot(HID, HID_OUT) + BIAS2, -100, 100))
                OUT = OUT/sum(OUT)
                dACT2 = OUT - TARGET
            elif method == "cossin" or method == "scaled":  # Linear
                OUT = dot(HID, HID_OUT) + BIAS2
                dACT2 = OUT - TARGET
            else:
                print("Invalid encoding method")

            dHID_OUT = outer(HID, dACT2)
            dACT1 = dot(dACT2, HID_OUT.T)*HID*(1-HID)
            dIN_HID = outer(IN, dACT1)
            dBIAS1 = dACT1
            dBIAS2 = dACT2

            # Update the weight updates
            IN_HID_del = momentum*IN_HID_del + (1-momentum)*dIN_HID
            HID_OUT_del = momentum*HID_OUT_del + (1-momentum)*dHID_OUT
            BIAS1_del = momentum*BIAS1_del + (1-momentum)*dBIAS1
            BIAS2_del = momentum*BIAS2_del + (1-momentum)*dBIAS2

            # Update the weights
            # (note: as in the original, the raw gradients are applied here,
            # so the momentum accumulators above are not actually used)
            HID_OUT -= alpha*dHID_OUT
            IN_HID -= alpha*dIN_HID
            BIAS1 -= alpha_bias*dBIAS1
            BIAS2 -= alpha_bias*dBIAS2

    # Test
    test_errors = zeros(num_test)
    angles = zeros(num_test)
    target_angles = zeros(num_test)
    accuracy_to_point001 = 0
    accuracy_to_point01 = 0
    accuracy_to_point1 = 0

    for i in range(num_test):
        # Get training example
        IN = test_images[i]
        target_angle = test_angles[i]

        # Feed forward
        HID = sigmoid(dot(IN, IN_HID) + BIAS1)

        if method == "binned" or method == "gaussian":
            OUT = exp(clip(dot(HID, HID_OUT) + BIAS2, -100, 100))
            OUT = OUT/sum(OUT)
        elif method == "cossin" or method == "scaled":
            OUT = dot(HID, HID_OUT) + BIAS2

        # Decode output
        angle = decode_angle(OUT, method)

        # Compute errors
        error = abs(angle - target_angle)
        test_errors[i] = error
        angles[i] = angle
        target_angles[i] = target_angle
        if error < 0.1:
            accuracy_to_point1 += 1
        if error < 0.01:
            accuracy_to_point01 += 1
        if error < 0.001:
            accuracy_to_point001 += 1

    # Compute and return results
    accuracy_to_point1 = 100.0*accuracy_to_point1/num_test
    accuracy_to_point01 = 100.0*accuracy_to_point01/num_test
    accuracy_to_point001 = 100.0*accuracy_to_point001/num_test

    return (mean(test_errors), median(test_errors), min(test_errors),
            max(test_errors), accuracy_to_point1, accuracy_to_point01,
            accuracy_to_point001)

# Display results
def display_results(results, method):
    MEAN, MEDIAN, MIN, MAX, ACC1, ACC01, ACC001 = results
    if method == "binned":
        print("Test error for 1-of-500 encoding:")
    elif method == "gaussian":
        print("Test error for gaussian encoding: ")
    elif method == "scaled":
        print("Test error for [-1,1] encoding:")
    elif method == "cossin":
        print("Test error for (cos,sin) encoding:")
    else:
        pass
    print("-----------")
    print("Mean: " + str(MEAN))
    print("Median: " + str(MEDIAN))
    print("Minimum: " + str(MIN))
    print("Maximum: " + str(MAX))
    print("Accuracy to 0.1: " + str(ACC1) + "%")
    print("Accuracy to 0.01: " + str(ACC01) + "%")
    print("Accuracy to 0.001: " + str(ACC001) + "%")
    print("\n\n")

##################
# Image parameters
##################
width = 100      # Image width
height = 100     # Image height
thickness = 5.0  # Line thickness

#################################
# Generate training and test data
#################################
num_train = 1000
num_test = 1000
test_images = []
test_angles = []
train_images = []
train_angles = []
for i in range(num_train):
    angle = pi*(2*rand() - 1)
    train_angles.append(angle)
    image = gen_train_image(angle, width, height, thickness)
    train_images.append(image)
for i in range(num_test):
    angle = pi*(2*rand() - 1)
    test_angles.append(angle)
    image = gen_train_image(angle, width, height, thickness)
    test_images.append(image)
train_angles, train_images, test_angles, test_images = (
    array(train_angles), array(train_images),
    array(test_angles), array(test_images))

###########################
# Evaluate encoding schemes
###########################
num_iters = 50

# Train with cos,sin encoding
method = "cossin"
results1 = test_encoding_method(train_images, train_angles, test_images,
                                test_angles, method, num_iters)
display_results(results1, method)

# Train with scaled encoding
method = "scaled"
results3 = test_encoding_method(train_images, train_angles, test_images,
                                test_angles, method, num_iters)
display_results(results3, method)

# Train with binned encoding
method = "binned"
results2 = test_encoding_method(train_images, train_angles, test_images,
                                test_angles, method, num_iters)
display_results(results2, method)

# Train with gaussian encoding
method = "gaussian"
results4 = test_encoding_method(train_images, train_angles, test_images,
                                test_angles, method, num_iters)
display_results(results4, method)
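The "gaussian" (leaky binned) target described above can be checked in isolation. This round-trip sketch is my own (the helper names mirror, but are not taken from, encode_angle/decode_angle); it confirms that argmax decoding recovers the angle to within half a bin width:

```python
import numpy as np

NUM_BINS = 500

def encode_gaussian(angle):
    # Target vector exp(-pi * (i - idx)**2), peaked at the bin index for an
    # angle in [-pi, pi), as in the "gaussian" branch of encode_angle.
    idx = 250 * (angle / np.pi + 1)
    return np.exp(-np.pi * (np.arange(NUM_BINS) - idx) ** 2)

def decode(target):
    # Argmax decoding, as in decode_angle for the binned/gaussian methods.
    return np.pi * np.argmax(target) / 250 - np.pi

half_bin = np.pi / NUM_BINS  # half of the 2*pi/500 bin width
for angle in (-3.0, -0.4, 0.0, 0.7, 2.9):
    assert abs(decode(encode_gaussian(angle)) - angle) <= half_bin + 1e-12
```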
7,939
Encoding Angle Data for Neural Network
Another way to encode the angle is as a set of two values:

y1 = max(0, theta)
y2 = max(0, -theta)
theta_out = y1 - y2

This would have a similar problem to arctan2 in that the gradient is undefined at theta = 0. I don't have the time to train a network and compare to the other encodings, but in this paper the technique seemed reasonably successful.
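For concreteness, the rectified split above can be written as runnable code (assuming theta is a signed angle in [-pi, pi]; the function names are illustrative):

```python
def encode(theta):
    # Split the signed angle into two non-negative half-wave components.
    y1 = max(0.0, theta)
    y2 = max(0.0, -theta)
    return y1, y2

def decode(y1, y2):
    # Their difference recovers the signed angle exactly.
    return y1 - y2

for theta in (-3.1, -0.5, 0.0, 0.25, 3.0):
    assert decode(*encode(theta)) == theta
```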
7,940
"Absolutely continuous random variable" vs. "Continuous random variable"?
The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why.

Continuous distributions

A "continuous" distribution $F$ is continuous in the usual sense of a continuous function. One definition (usually the first one people encounter in their education) is that for each $x$ and for any number $\epsilon\gt 0$ there exists a $\delta$ (depending on $x$ and $\epsilon$) for which the values of $F$ on the $\delta$-neighborhood of $x$ vary by no more than $\epsilon$ from $F(x)$.

It is a short step from this to demonstrating that when a continuous $F$ is the distribution of a random variable $X$, then $\Pr(X=x)=0$ for any number $x$. After all, the continuity definition implies you can shrink $\delta$ to make $\Pr(X\in (x-\delta, x+\delta))$ as small as any $\epsilon \gt 0$, and since (1) this probability is no less than $\Pr(X=x)$ and (2) $\epsilon$ can be arbitrarily small, it follows that $\Pr(X=x)=0$. The countable additivity of probability extends this result to any finite or countable set $B$.

Absolutely continuous distributions

All distribution functions $F$ define positive, finite measures $\mu_F$ determined by $$\mu_F((a,b]) = F(b) - F(a).$$

Absolute continuity is a concept of measure theory. One measure $\mu_F$ is absolutely continuous with respect to another measure $\lambda$ (both defined on the same sigma algebra) when, for every measurable set $E$, $\lambda(E)=0$ implies $\mu_F(E)=0$. In other words, relative to $\lambda$, there are no "small" (measure zero) sets to which $\mu_F$ assigns "large" (nonzero) probability. We will be taking $\lambda$ to be the usual Lebesgue measure, for which $\lambda((a,b]) = b-a$ is the length of an interval. The second half of $(*)$ states that the probability measure $\mu_F(B)=\Pr(X\in B)$ is absolutely continuous with respect to Lebesgue measure.

Absolute continuity is related to differentiability. The derivative of one measure with respect to another (at some point $x$) is an intuitive concept: take a set of measurable neighborhoods of $x$ that shrink down to $x$ and compare the two measures in those neighborhoods. If they always approach the same limit, no matter what sequence of neighborhoods is chosen, then that limit is the derivative. (There's a technical issue: you need to constrain those neighborhoods so they don't have "pathological" shapes. That can be done by requiring each neighborhood to occupy a non-negligible portion of the region in which it lies.) Differentiation in this sense is precisely what the question at What is the definition of probability on a continuous distribution? is addressing.

Let's write $D_\lambda(\mu_F)$ for the derivative of $\mu_F$ with respect to $\lambda$. The relevant theorem--it's a measure-theoretic version of the Fundamental Theorem of Calculus--asserts that $\mu_F$ is absolutely continuous with respect to $\lambda$ if and only if $$\mu_F(E) = \int_E \left(D_\lambda \mu_F\right)(x)\,\mathrm{d}\lambda$$ for every measurable set $E$ [Rudin, Theorem 8.6]. In other words, absolute continuity (of $\mu_F$ with respect to $\lambda$) is equivalent to the existence of a density function $D_\lambda(\mu_F)$.

Summary

A distribution $F$ is continuous when $F$ is continuous as a function: intuitively, it has no "jumps."

A distribution $F$ is absolutely continuous when it has a density function (with respect to Lebesgue measure).

That the two kinds of continuity are not equivalent is demonstrated by examples, such as the one recounted at https://stats.stackexchange.com/a/229561/919. This is the famous Cantor function. For this function, $F$ is almost everywhere horizontal (as its graph makes plain), whence $D_\lambda(\mu_F)$ is almost everywhere zero, and therefore $\int_{\mathbb{R}} D_\lambda(\mu_F)(x)\,d\lambda = \int_{\mathbb{R}}0\, d\lambda = 0$. This obviously does not give the correct value of $1$ (according to the axiom of total probability).

Comments

Virtually all the distributions used in statistical applications are absolutely continuous, nowhere continuous (discrete), or mixtures thereof, so the distinction between continuity and absolute continuity is often ignored. However, failing to appreciate this distinction can lead to muddy reasoning and bad intuition, especially in the cases where rigor is most needed: namely, when a situation is confusing or nonintuitive, so we rely on mathematics to carry us to correct results. That is why we don't usually make a big deal of this stuff in practice, but everyone should know about it.

Reference

Rudin, Walter. Real and Complex Analysis. McGraw-Hill, 1974: sections 6.2 (Absolute Continuity) and 8.1 (Derivatives of Measures).
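To make the Cantor-function counterexample concrete, here is a small numerical sketch (my own illustration; the original answer contains no code). It evaluates $F$ from the ternary expansion and checks that $F$ is constant across a removed middle-third interval (so its derivative is $0$ there) while still rising from $F(0)=0$ to $F(1)=1$:

```python
from fractions import Fraction

def cantor(x, depth=60):
    # Cantor function F on [0, 1]: read the ternary digits of x; stop at the
    # first digit equal to 1, mapping earlier 2s to binary 1s (halved places).
    x = Fraction(x)
    if x >= 1:
        return Fraction(1)
    y, place = Fraction(0), Fraction(1, 2)
    for _ in range(depth):
        x *= 3
        digit = int(x)        # next ternary digit: 0, 1, or 2
        x -= digit
        if digit == 1:
            return y + place  # x lies in a removed middle third: F is flat here
        if digit == 2:
            y += place
        place /= 2
    return y

# F is flat on the removed interval (1/3, 2/3)...
assert cantor(Fraction(1, 3)) == cantor(Fraction(1, 2)) == cantor(Fraction(2, 3))
# ...yet climbs from 0 to 1, so its a.e.-zero "density" integrates to 0, not 1.
assert cantor(0) == 0 and cantor(1) == 1
# A known exact value: F(1/4) = 1/3 (up to the truncated expansion).
assert abs(cantor(Fraction(1, 4)) - Fraction(1, 3)) < Fraction(1, 10**12)
```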
"Absolutely continuous random variable" vs. "Continuous random variable"?
The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continu
"Absolutely continuous random variable" vs. "Continuous random variable"? The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continuous function. One definition (usually the first one people encounter in their education) is that for each $x$ and for any number $\epsilon\gt 0$ there exists a $\delta$ (depending on $x$ and $\epsilon$) for which the values of $F$ on the $\delta$-neighborhood of $x$ vary by no more than $\epsilon$ from $F(x)$. It is a short step from this to demonstrating that when a continuous $F$ is the distribution of a random variable $X$, then $\Pr(X=x)=0$ for any number $x$. After, all, the continuity definition implies you can shrink $\delta$ to make $\Pr(X\in (x-\delta, x+\delta))$ as small as any $\epsilon \gt 0$ and since (1) this probability is no less than $\Pr(X=x)$ and (2) $\epsilon$ can be arbitrarily small, it follows that $\Pr(X=x)=0$. The countable additivity of probability extends this result to any finite or countable set $B$. Absolutely continuous distributions All distribution functions $F$ define positive, finite measures $\mu_F$ determined by $$\mu_F((a,b]) = F(b) - F(a).$$ Absolute continuity is a concept of measure theory. One measure $\mu_F$ is absolutely continuous with respect to another measure $\lambda$ (both defined on the same sigma algebra) when, for every measurable set $E$, $\lambda(E)=0$ implies $\mu_F(E)=0$. In other words, relative to $\lambda$, there are no "small" (measure zero) sets to which $\mu_F$ assigns "large" (nonzero) probability. We will be taking $\lambda$ to be the usual Lebesgue measure, for which $\lambda((a,b]) = b-a$ is the length of an interval. The second half of $(*)$ states that the probability measure $\mu_F(B)=\Pr(X\in B)$ is absolutely continuous with respect to Lebesgue measure. Absolute continuity is related to differentiability. 
The derivative of one measure with respect to another (at some point $x$) is an intuitive concept: take a set of measurable neighborhoods of $x$ that shrink down to $x$ and compare the two measures in those neighborhoods. If they always approach the same limit, no matter what sequence of neighborhoods is chosen, then that limit is the derivative. (There's a technical issue: you need to constrain those neighborhoods so they don't have "pathological" shapes. That can be done by requiring each neighborhood to occupy a non-negligible portion of the region in which it lies.) Differentiation in this sense is precisely what the question at What is the definition of probability on a continuous distribution? is addressing. Let's write $D_\lambda(\mu_F)$ for the derivative of $\mu_F$ with respect to $\lambda$. The relevant theorem--it's a measure-theoretic version of the Fundamental Theorem of Calculus--asserts $\mu_F$ is absolutely continuous with respect to $\lambda$ if and only if $$\mu_F(E) = \int_E \left(D_\lambda \mu_F\right)(x)\,\mathrm{d}\lambda$$ for every measurable set $E$. [Rudin, Theorem 8.6] In other words, absolute continuity (of $\mu_F$ with respect to $\lambda$) is equivalent to the existence of a density function $D_\lambda(\mu_F)$. Summary A distribution $F$ is continuous when $F$ is continuous as a function: intuitively, it has no "jumps." A distribution $F$ is absolutely continuous when it has a density function (with respect to Lebesgue measure). That the two kinds of continuity are not equivalent is demonstrated by examples, such as the one recounted at https://stats.stackexchange.com/a/229561/919. This is the famous Cantor function. For this function, $F$ is almost everywhere horizontal (as its graph makes plain), whence $D_\lambda(\mu_F)$ is almost everywhere zero, and therefore $\int_{\mathbb{R}} D_\lambda(\mu_F)(x)d\lambda = \int_{\mathbb{R}}0 d\lambda = 0$. 
This obviously does not give the correct value of $1$ (according to the axiom of total probability). Comments Virtually all the distributions used in statistical applications are absolutely continuous, nowhere continuous (discrete), or mixtures thereof, so the distinction between continuity and absolute continuity is often ignored. However, failing to appreciate this distinction can lead to muddy reasoning and bad intuition, especially in the cases where rigor is most needed: namely, when a situation is confusing or nonintuitive, so we rely on mathematics to carry us to correct results. That is why we don't usually make a big deal of this stuff in practice, but everyone should know about it. Reference Rudin, Walter. Real and Complex Analysis. McGraw-Hill, 1974: sections 6.2 (Absolute Continuity) and 8.1 (Derivatives of Measures).
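The Cantor example can be probed numerically. Below is a hypothetical Python sketch (not part of the original answer) that evaluates the Cantor function by the standard ternary-digit construction; it illustrates a CDF that is continuous everywhere yet flat on every removed middle third, so it admits no density.

```python
def cantor_cdf(x, depth=60):
    # Evaluate the Cantor function ("devil's staircase"): read off the
    # ternary digits of x, stop at the first digit equal to 1, and map the
    # remaining digits 0/2 to binary digits 0/1.
    if x <= 0.0:
        return 0.0
    if x >= 1.0:
        return 1.0
    total, scale = 0.0, 0.5
    for _ in range(depth):
        x *= 3.0
        digit = int(x)
        x -= digit
        if digit == 1:
            # x fell into a removed middle third: F is constant there.
            return total + scale
        total += scale * (digit // 2)
        scale *= 0.5
    return total

# Continuous: nearby arguments give nearby values.
print(abs(cantor_cdf(0.30000001) - cantor_cdf(0.3)))  # tiny

# Yet flat on every removed middle third, e.g. all of (1/3, 2/3) maps to 1/2,
# so the derivative (the would-be density) is zero almost everywhere.
print(cantor_cdf(0.4), cantor_cdf(0.5), cantor_cdf(0.6))
```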
"Absolutely continuous random variable" vs. "Continuous random variable"? The descriptions differ: only the first one $(*)$ is correct. This answer explains how and why. Continuous distributions A "continuous" distribution $F$ is continuous in the usual sense of a continu
7,941
PCA in numpy and sklearn produces different results [closed]
The difference is because decomposition.PCA does not standardize your variables before doing PCA, whereas in your manual computation you call StandardScaler to do the standardization. Hence, you are observing this difference: PCA on correlation or covariance? If you replace pca.fit_transform(x) with

x_std = StandardScaler().fit_transform(x)
pca.fit_transform(x_std)

you will get the same result as with the manual computation... ...but only up to the order of the PCs. That is because when you run

ev, eig = np.linalg.eig(cov)

you get the eigenvalues not necessarily in decreasing order. I get

array([ 0.07168571, 2.49382602, 1.43448827])

So you will want to order them manually. Sklearn does that for you. Regarding reconstructing the original variables, please see How to reverse PCA and reconstruct original variables from several principal components?
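To make the standardization point concrete, here is a small self-contained sketch (hypothetical toy data, plain Python rather than numpy/sklearn so the arithmetic is explicit): without standardizing, the variable on the large scale dominates the covariance eigenvalues; after standardizing, the eigenvalues are comparable, and they are returned largest first, as sklearn would sort them.

```python
import math

def mean(v):
    return sum(v) / len(v)

def standardize(v):
    # Center and scale to unit variance (population std, as StandardScaler does).
    m = mean(v)
    s = math.sqrt(sum((x - m) ** 2 for x in v) / len(v))
    return [(x - m) / s for x in v]

def cov(u, v):
    # Sample covariance (ddof = 1, matching np.cov's default).
    mu, mv = mean(u), mean(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / (len(u) - 1)

def eig_desc_2x2(a, b, c):
    # Eigenvalues of the symmetric 2x2 matrix [[a, b], [b, c]], largest first.
    h = (a + c) / 2
    r = math.sqrt(((a - c) / 2) ** 2 + b ** 2)
    return h + r, h - r

# Hypothetical toy data: two variables on wildly different scales.
x1 = [0.387, 0.723, 1.0, 1.524]
x2 = [4878.0, 12104.0, 12756.0, 6787.0]

# PCA on the raw covariance matrix: the large-scale variable dominates.
raw = eig_desc_2x2(cov(x1, x1), cov(x1, x2), cov(x2, x2))

# PCA after standardizing (what StandardScaler + PCA effectively gives you).
z1, z2 = standardize(x1), standardize(x2)
std = eig_desc_2x2(cov(z1, z1), cov(z1, z2), cov(z2, z2))

print(raw)  # the first eigenvalue swamps the second
print(std)  # comparable eigenvalues, already in decreasing order
```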
7,942
PCA in numpy and sklearn produces different results [closed]
Here is a nice implementation with discussion and explanation of PCA in python. This implementation leads to the same result as the scikit PCA. This is another indicator that your PCA is wrong.

import numpy as np
from scipy import linalg as LA

x = np.array([
    [0.387, 4878, 5.42],
    [0.723, 12104, 5.25],
    [1, 12756, 5.52],
    [1.524, 6787, 3.94],
])

# centering the data
x -= np.mean(x, axis=0)

cov = np.cov(x, rowvar=False)
evals, evecs = LA.eigh(cov)

You need to sort the eigenvalues (and the eigenvectors accordingly) in descending order:

idx = np.argsort(evals)[::-1]
evecs = evecs[:, idx]
evals = evals[idx]

a = np.dot(x, evecs)

Generally, I recommend you check your code by implementing a simple example (as simple as possible) and calculating the correct results (and intermediate results) by hand. This helps you to identify the problem.
7,943
How to interpret the dendrogram of a hierarchical cluster analysis
1) The y-axis is a measure of closeness of either individual data points or clusters.

2) California and Arizona are equally distant from Florida because CA and AZ are in a cluster before either joins FL.

3) Hawaii does join rather late; at about 50. This means that the cluster it joins is closer together before HI joins. But not much closer. Note that the cluster it joins (the one all the way on the right) only forms at about 45. The fact that HI joins a cluster later than any other state simply means that (using whatever metric you selected) HI is not that close to any particular state.
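As a toy illustration of point 1 (hypothetical one-dimensional data, not taken from the question), the sketch below performs single-linkage agglomeration on four points and records the height at which each merge happens. These heights are exactly the y-values at which a dendrogram draws its horizontal bars; the isolated point "D" joins last, at a large height, playing the role Hawaii plays in the question's plot.

```python
# Minimal single-linkage agglomeration on four hypothetical 1-D points.
points = {"A": 0.0, "B": 1.0, "C": 1.5, "D": 10.0}
clusters = [{name} for name in points]
merge_heights = []

while len(clusters) > 1:
    # Single linkage: cluster distance = smallest pairwise member distance.
    d, i, j = min(
        (
            min(abs(points[a] - points[b]) for a in clusters[p] for b in clusters[q]),
            p,
            q,
        )
        for p in range(len(clusters))
        for q in range(p + 1, len(clusters))
    )
    merged = clusters[i] | clusters[j]
    clusters = [c for k, c in enumerate(clusters) if k not in (i, j)] + [merged]
    merge_heights.append(d)

print(merge_heights)  # [0.5, 1.0, 8.5] -- D joins only at the largest height
```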
7,944
How to interpret the dendrogram of a hierarchical cluster analysis
I had the same questions when I tried learning hierarchical clustering and I found the following pdf to be very very useful. http://www.econ.upf.edu/~michael/stanford/maeb7.pdf Even if Richard is already clear about the procedure, others who browse through the question can probably use the pdf; it's very simple and clear, especially for those who do not have enough maths background.
7,945
How to interpret the dendrogram of a hierarchical cluster analysis
The horizontal axis represents the clusters. The vertical scale on the dendrogram represents the distance or dissimilarity. Each joining (fusion) of two clusters is represented on the diagram by the splitting of a vertical line into two vertical lines. The vertical position of the split, shown by a short bar, gives the distance (dissimilarity) between the two clusters.
7,946
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
What I loved most about the CLT is the cases when it is not applicable -- this gives me hope that life is a bit more interesting than the Gauss curve suggests. So show him the Cauchy distribution.
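A quick simulation makes the point (a hypothetical sketch using only the standard library): sample means of Cauchy draws refuse to settle down no matter how large the sample, because the mean of n standard Cauchy variables is itself standard Cauchy, while means of uniform draws concentrate just as the CLT promises.

```python
import math
import random
import statistics

random.seed(0)

def cauchy():
    # Standard Cauchy via the inverse CDF: tan(pi * (U - 1/2)).
    return math.tan(math.pi * (random.random() - 0.5))

reps, n = 100, 5000

# The mean of n standard Cauchy draws is itself standard Cauchy, so these
# sample means never concentrate, no matter how large n is.
cauchy_means = [sum(cauchy() for _ in range(n)) / n for _ in range(reps)]

# For Uniform(0, 1) draws the CLT applies and the sample means huddle
# tightly around 1/2.
uniform_means = [sum(random.random() for _ in range(n)) / n for _ in range(reps)]

cauchy_spread = statistics.median(abs(m) for m in cauchy_means)
uniform_spread = statistics.median(abs(m - 0.5) for m in uniform_means)
print(cauchy_spread, uniform_spread)  # the first stays O(1), the second is tiny
```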
7,947
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
To fully appreciate the CLT, it should be seen. Hence the notion of the bean machine and plenty of youtube videos for illustration.
7,948
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
Often when mathematicians talk about probability they start with a known probability distribution then talk about the probability of events. The true value of the central limit theorem is that it allows us to use the normal distribution as an approximation in cases where we do not know the true distribution. You could ask your father a standard statistics question (but phrased as math) about what is the probability that the mean of a sample will be greater than a given value if the data comes from a distribution with mean mu and sd sigma, then see if he assumes a distribution (which you then say we don't know) or says that he needs to know the distribution. Then you can show that we can approximate the answer using the CLT in many cases. For comparing math to stats, I like to use the mean value theorem of integration (which says that for an integral from a to b there exists a rectangle from a to b with the same area and the height of the rectangle is the average of the curve). The mathematician looks at this theorem and says "cool, I can use an integration to compute an average", while the statistician looks at the same theorem and says "cool, I can use an average to compute an integral". I actually have cross stitched wall hangings in my office of the mean value theorem and the CLT (along with Bayes theorem).
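The kind of question described above can be sketched numerically (a hypothetical example with Exponential(1) data, standard library only): approximate Pr(sample mean > c) with the CLT's normal approximation, then compare with a Monte Carlo estimate of the true probability.

```python
import math
import random

random.seed(1)

def normal_cdf(z):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

n, mu, sigma, threshold = 50, 1.0, 1.0, 1.2  # Exponential(1): mean 1, sd 1

# CLT: the sample mean is approximately N(mu, sigma / sqrt(n)).
z = (threshold - mu) / (sigma / math.sqrt(n))
approx = 1.0 - normal_cdf(z)

# Monte Carlo estimate of the true probability, for comparison.
reps = 20000
hits = sum(
    sum(random.expovariate(1.0) for _ in range(n)) / n > threshold
    for _ in range(reps)
)
estimate = hits / reps

print(approx, estimate)  # both are in the neighborhood of 0.08
```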
7,949
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
I like to demonstrate sampling variation and essentially the Central Limit Theorem through an "in-class" exercise. Everybody in the class of say 100 students writes their age on a piece of paper. All pieces of paper are the same size and folded in the same fashion after I've calculated the average. This is the population, and I calculate the average age. Then each student randomly selects 10 pieces of paper, writes down the ages, and returns them to the bag. (S)he calculates the mean and passes the bag along to the next student. Eventually we have 100 samples of 10 students each estimating the population mean, which we can describe through a histogram and some descriptive statistics. We then repeat the demonstration, this time using a set of 100 "opinions" that replicate some Yes/No question from recent polls, e.g. "If the (British General) election were called tomorrow, would you consider voting for the British National Party?" Students then sample 10 of these opinions. At the end we've demonstrated sampling variation, the Central Limit Theorem, etc., with both continuous and binary data.
7,950
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
Playing around with the following code, varying the value of M and choosing distributions other than the uniform, can be a fun illustration.

N <- 10000
M <- 5
meanvals <- replicate(N, expr = {mean(runif(M, min = 0, max = 1))})
hist(meanvals, breaks = 50, prob = TRUE)
7,951
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
If you use Stata, you can use the -clt- command that creates graphs of sampling distributions, see http://www.ats.ucla.edu/stat/stata/ado/teach/clt.htm
7,952
How do you convey the beauty of the Central Limit Theorem to a non-statistician?
In my experience the CLT is less useful than it appears. One never knows in the middle of a project whether n is large enough for the approximation to be adequate to the task. And for statistical testing, the CLT helps you protect the type I error but does little to keep the type II error at bay. For example, the t-test can have arbitrarily low power for large n when the data distribution is extremely skewed.
7,953
How is finding the centroid different from finding the mean?
As far as I know, the "mean" of a cluster and the centroid of a single cluster are the same thing, though the term "centroid" might be a little more precise than "mean" when dealing with multivariate data. To find the centroid, one computes the (arithmetic) mean of the points' positions separately for each dimension. For example, if you had points at: (-1, 10, 3), (0, 5, 2), and (1, 20, 10), then the centroid would be located at ((-1+0+1)/3, (10+5+20)/3, (3+2+10)/3), which simplifies (0, 11 2/3, 5). (NB: The centroid does not have to be--and rarely is---one of the original data points) The centroid is also sometimes called the center of mass or barycenter, based on its physical interpretation (it's the center of mass of an object defined by the points). Like the mean, the centroid's location minimizes the sum-squared distance from the other points. A related idea is the medoid, which is the data point that is "least dissimilar" from all of the other data points. Unlike the centroid, the medoid has to be one of the original points. You may also be interested in the geometric median which is analgous to the median, but for multivariate data. These are both different from the centroid. However, as Gabe points out in his answer, there is a difference between the "centroid distance" and the "average distance" when you're comparing clusters. The centroid distance between cluster $A$ and $B$ is simply the distance between $\text{centroid}(A)$ and $\text{centroid}(B)$. The average distance is calculated by finding the average pairwise distance between the points in each cluster. In other words, for every point $a_i$ in cluster $A$, you calculate $\text{dist}(a_i, b_1)$, $\text{dist}(a_i, b_2)$ , ... $\text{dist}(a_i, b_n)$ and average them all together.
7,954
How is finding the centroid different from finding the mean?
The above answer may be incorrect; see this video: https://www.youtube.com/watch?v=VMyXc3SiEqs It seems that the average method adds up all combinations of distances between the elements of cluster 1 and cluster 2 (one distance per pair) and then divides by the number of pairs to get the average. The centroid method first computes the average of each cluster within itself, then calculates one distance between those average points.
7,955
How is finding the centroid different from finding the mean?
In general, the mean (to be precise, the average) distance between all pairs of points is at least as large as the distance between the centroids of the clusters. So usually, they are different. Here is a mathematical proof:

Let $x_1,\dots ,x_n\in \mathbb{R}^d$ and let $\{C_1,C_2\}$ be a partition of $\{1,\dots,n\}$. Let $d$ be a metric in $\mathbb{R}^d$ that is positively homogeneous (for instance, the Euclidean distance).

Define $\alpha := d(\frac{1}{|C_1|}\sum_{i\in C_1}x_i,\frac{1}{|C_2|}\sum_{j\in C_2}x_j)$ and $\beta := \frac{1}{|C_1|}\frac{1}{|C_2|}\sum_{i\in C_1}\sum_{j\in C_2} d(x_i,x_j)$.

Claim: $\alpha \leq \beta$

Proof: The function $\phi:= d(\cdot ,\frac{1}{|C_2|}\sum_{j\in C_2}x_j)$ is convex (this follows from the triangle inequality of the metric plus positive homogeneity). Therefore, by Jensen's Inequality, $$ \alpha = \phi(\frac{1}{|C_1|}\sum_{i\in C_1}x_i) \leq \frac{1}{|C_1|}\sum_{i\in C_1}\phi(x_i)$$ For every fixed $x_i$ the function $\psi_i := d(x_i , \cdot )$ is also convex. Replacing above, we get $$ \alpha \leq \frac{1}{|C_1|}\sum_{i\in C_1}\psi_i(\frac{1}{|C_2|}\sum_{j\in C_2}x_j)$$ Using Jensen's Inequality one more time, we get $$ \alpha \leq \frac{1}{|C_1|}\sum_{i\in C_1}\frac{1}{|C_2|}\sum_{j\in C_2}\psi_i(x_j) = \beta$$
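A numeric spot check of the inequality (a hypothetical two-cluster example in Python; the cluster coordinates are made up for illustration):

```python
import math

# Two hypothetical 2-D clusters.
A = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
B = [(5.0, 5.0), (6.0, 4.0)]

def centroid(pts):
    n = len(pts)
    return tuple(sum(coord) / n for coord in zip(*pts))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# alpha: distance between the two centroids.
alpha = dist(centroid(A), centroid(B))

# beta: average of all |A| * |B| pairwise distances across the clusters.
beta = sum(dist(a, b) for a in A for b in B) / (len(A) * len(B))

print(alpha, beta)
assert alpha <= beta  # as Jensen's inequality guarantees
```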
7,956
How is finding the centroid different from finding the mean?
The centroid is the average of the data points in a cluster; the centroid point need not be present in the data set. The medoid, by contrast, is the data point closest to the centroid, so the medoid has to be present in the original data.
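A tiny pure-Python sketch of the distinction (the points are made up; here the medoid is taken as the data point nearest the centroid, as described above):

```python
# Hypothetical 2-D cluster (invented for illustration)
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0), (5.0, 5.0)]

# Centroid: coordinate-wise mean; need not be one of the data points
centroid = tuple(sum(c) / len(pts) for c in zip(*pts))

# Medoid: an actual data point, here the one closest to the centroid
medoid = min(pts, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, centroid)))

print(centroid)  # (1.5, 1.75) - not one of the original points
print(medoid)    # (0.0, 2.0) - a member of pts
```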
What are 'aliased coefficients'?
I suspect this is not an error of lm, but rather of vif (from the package car). If so, I believe you have run into perfect multicollinearity. For instance,

x1 <- rnorm( 100 )
x2 <- 2 * x1
y  <- rnorm( 100 )
vif( lm( y ~ x1 + x2 ) )

produces your error. In this context, "alias" refers to the variables that are linearly dependent on others (i.e. that cause perfect multicollinearity). The first step towards the solution is to identify which variable(s) are the culprit(s). Run

alias( lm( y ~ x1 + x2 ) )

to see an example.
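The mechanics behind the error can be shown without R. In the setup above, x2 = 2*x1, so the squared correlation between the two predictors is exactly 1 and the variance inflation factor 1/(1 - R^2) diverges. A pure-Python sketch mimicking the R snippet (rnorm is imitated with random.gauss; this only reproduces the collinearity, not vif itself):

```python
import random

random.seed(0)
x1 = [random.gauss(0, 1) for _ in range(100)]
x2 = [2 * v for v in x1]  # exact linear dependence, as in the R snippet above

def pearson(a, b):
    # Sample Pearson correlation
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((u - ma) * (v - mb) for u, v in zip(a, b))
    va = sum((u - ma) ** 2 for u in a)
    vb = sum((v - mb) ** 2 for v in b)
    return cov / (va * vb) ** 0.5

r2 = pearson(x1, x2) ** 2
print(r2)  # 1.0 up to rounding: VIF = 1/(1 - r2) is infinite
```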
What are 'aliased coefficients'?
This often comes up when you have singularities in your regression X'X matrix (NA values in the summary of the regression output).

Base R lm() allows for singular values/perfect multicollinearity, as the default is singular.ok = TRUE. Other packages/functions are more conservative. For example, for the linearHypothesis() function in the car package, the default is singular.ok = FALSE. If you have perfect multicollinearity in your regression, linearHypothesis() will return the error "there are aliased coefficients in the model".

To deal with this error, set singular.ok = TRUE. Be careful, however, as doing this may mask perfect multicollinearity in your regression.
What are 'aliased coefficients'?
Maybe good to know for some: I got this error as well when I added dummies to a regression. R automatically omits one dummy, but this causes an error in the vif test. So a solution, for some, might be removing one dummy manually.
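The underlying issue is the "dummy variable trap": with an intercept in the model, a full set of category dummies is exactly collinear, because the dummy columns sum to the constant column. A small Python sketch (hypothetical category labels, for illustration only):

```python
cats = ["a", "b", "c", "a", "b", "c", "a"]
levels = sorted(set(cats))

# One 0/1 indicator column per level (no level dropped)
rows = [[1 if c == lev else 0 for lev in levels] for c in cats]

# Every row's dummies sum to 1, i.e. exactly the intercept column:
# perfect multicollinearity unless one dummy is dropped
# (lm() drops one automatically; forcing the full set in trips up vif)
print(all(sum(r) == 1 for r in rows))  # True
```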
What are the assumptions of negative binomial regression?
"I'm working with a large data set (confidential, so I can't share too much) ..."

It might be possible to create a small data set that has some of the general characteristics of the real data without either the variable names or any of the actual values.

"... and came to the conclusion a negative binomial regression would be necessary. I've never done a glm regression before, and I can't find any clear information about what the assumptions are. Are they the same for MLR?"

Clearly not! You already know you're assuming the response is conditionally negative binomial, not conditionally normal. (Some assumptions are shared - independence, for example.)

Let me talk about GLMs more generally first. GLMs include multiple regression but generalize it in several ways:

1) The conditional distribution of the response (dependent variable) is from the exponential family, which includes the Poisson, binomial, gamma, normal and numerous other distributions.

2) The mean response is related to the predictors (independent variables) through a link function. Each family of distributions has an associated canonical link function - for example, in the case of the Poisson, the canonical link is the log. The canonical links are almost always the default, but in most software you generally have several choices within each distribution choice. For the binomial the canonical link is the logit (the linear predictor is modelling $\log(\frac{p}{1-p})$, the log-odds of a success, or a "1"), and for the gamma the canonical link is the inverse - but in both cases other link functions are often used.

So if your response was $Y$ and your predictors were $X_1$ and $X_2$, with a Poisson regression with the log link you might have, for your description of how the mean of $Y$ is related to the $X$'s:

$\text{E}(Y_i) = \mu_i$

$\log\mu_i= \eta_i$ ($\eta$ is called the 'linear predictor', and here the link function is $\log$; the symbol $g$ is often used to represent the link function)

$\eta_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i}$

3) The variance of the response is not constant, but operates through a variance function (a function of the mean, possibly times a scaling parameter). For example, the variance of a Poisson is equal to the mean, while for a gamma it's proportional to the square of the mean. (The quasi-distributions allow some degree of decoupling of the variance function from the assumed distribution.)

So what assumptions are in common with what you remember from MLR?

- Independence is still there.
- Homoskedasticity is no longer assumed; the variance is explicitly a function of the mean and so in general varies with the predictors (so while the model is generally heteroskedastic, the heteroskedasticity takes a specific form).
- Linearity: the model is still linear in the parameters (i.e. the linear predictor is $X\beta$), but the expected response is not linearly related to them (unless you use the identity link function!).
- The distribution of the response is substantially more general.

The interpretation of the output is in many ways quite similar; you can still look at estimated coefficients divided by their standard errors, for example, and interpret them similarly (they're asymptotically normal - a Wald z-test - but people still seem to call them t-ratios, even when there's no theory that makes them $t$-distributed in general). Comparisons between nested models (via 'anova-table'-like setups) are a bit different, but similar (involving asymptotic chi-square tests). If you're comfortable with AIC and BIC, these can be calculated.

Similar kinds of diagnostic displays are generally used, but can be harder to interpret. Much of your multiple linear regression intuition will carry over if you keep the differences in mind.

Here's an example of something you can do with a glm that you can't really do with linear regression (indeed, most people would use nonlinear regression for this, but GLM is easier and nicer for it) in the normal case - $Y$ is normal, modelled as a function of $x$:

$\text{E}(Y) = \exp(\eta) = \exp(X\beta) = \exp(\beta_0+\beta_1 x)$ (that is, a log link)

$\text{Var}(Y) = \sigma^2$

That is, a least-squares fit of an exponential relationship between $Y$ and $x$.

"Can I transform the variables the same way (I've already discovered transforming the dependent variable is a bad call since it needs to be a natural number)?"

You (usually) don't want to transform the response (DV). You sometimes may want to transform predictors (IVs) in order to achieve linearity of the linear predictor.

"I already determined that the negative binomial distribution would help with the over-dispersion in my data (variance is around 2000, the mean is 48)."

Yes, it can deal with overdispersion. But take care not to confuse the conditional dispersion with the unconditional dispersion. Another common approach - if a bit more kludgy and so somewhat less satisfying to my mind - is quasi-Poisson regression (overdispersed Poisson regression).

With the negative binomial, it's in the exponential family if you specify a particular one of its parameters (the way it's usually reparameterized for GLMs, at least). Some packages will fit it if you specify the parameter; others will wrap ML estimation of that parameter (say via profile likelihood) around a GLM routine, automating the process. Some will restrict you to a smaller set of distributions; you don't say what software you might use, so it's difficult to say much more there. I think usually the log link tends to be used with negative binomial regression.

There are a number of introductory-level documents (readily found via Google) that lead through some basic Poisson GLM and then negative binomial GLM analysis of data, but you may prefer to look at a book on GLMs and maybe do a little Poisson regression first, just to get used to that.
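One way to see where the negative binomial's extra dispersion (discussed above) comes from is its gamma-Poisson mixture representation: draw a gamma-distributed rate, then a Poisson count with that rate. A pure-Python sketch (in Python for self-containment, though the thread implies R; the parameter values mu = 10 and size = 2 are invented for illustration):

```python
import math
import random

random.seed(42)

def poisson(lam):
    # Knuth's simple Poisson sampler; fine for moderate rates
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

mu, size = 10.0, 2.0  # target mean and NB dispersion ("size") parameter
# Gamma(shape=size, scale=mu/size) rate, then a Poisson count at that rate
draws = [poisson(random.gammavariate(size, mu / size)) for _ in range(20000)]

mean = sum(draws) / len(draws)
var = sum((d - mean) ** 2 for d in draws) / len(draws)
# Theory: E[Y] = mu = 10, Var[Y] = mu + mu^2/size = 60 (variance > mean)
print(mean, var)
```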
What are the assumptions of negative binomial regression?
Some references I have found to be helpful in analyzing data with the negative binomial distribution specifically (including listing assumptions) and GLM/GLMMs generally are:

Bates, D.M., M. Mächler, B. Bolker, and S. Walker. 2015. Fitting linear mixed-effects models using lme4. J. Stat. Software 67: 1-48.

Bolker, B.M., M.E. Brooks, C.J. Clark, S.W. Geange, J.R. Poulsen, M.H.H. Stevens, and J. White. 2009. Generalized linear mixed models: a practical guide for ecology and evolution. Trends in Ecology and Evolution 24: 127-135.

Zeileis, A., C. Kleiber, and S. Jackman. 2008. Regression models for count data in R. J. Stat. Software 27: 1-25.

Zuur, A.F., E.N. Ieno, N. Walker, A.A. Saveliev, and G.M. Smith. 2009. Mixed effects models and extensions in ecology with R. Springer, NY, USA.
Best factor extraction methods in factor analysis
To make it short: the two last methods are each very special and different from numbers 2-5, which are all called common factor analysis and are indeed seen as alternatives to one another. Most of the time, they give rather similar results. They are "common" because they represent the classical factor model, the common factors + unique factors model. It is this model which is typically used in questionnaire analysis/validation.

Principal Axis (PAF), aka Principal Factor with iterations, is the oldest and perhaps still quite popular method. It is an iterative PCA$^1$ application to the matrix where communalities stand on the diagonal in place of 1s or of variances. Each iteration thus refines the communalities further until they converge. In doing so, the method, which seeks to explain variance rather than pairwise correlations, eventually explains the correlations. The Principal Axis method has the advantage that it can, like PCA, analyze not only correlations but also covariances and other SSCP measures (raw SSCP, cosines). The remaining three methods process only correlations [in SPSS; covariances could be analyzed in some other implementations]. This method is dependent on the quality of the starting estimates of communalities (and that is its disadvantage). Usually the squared multiple correlation/covariance is used as the starting value, but you may prefer other estimates (including those taken from previous research). Please read this for more. If you want to see an example of Principal Axis factoring computations, commented and compared with PCA computations, please look in here.

Ordinary or Unweighted Least Squares (ULS) is the algorithm that directly aims at minimizing the residuals between the input correlation matrix and the reproduced (by the factors) correlation matrix (while the diagonal elements, as the sums of communality and uniqueness, are aimed to restore 1s). This is the straight task of FA$^2$. The ULS method can work with a singular and even not positive semidefinite matrix of correlations, provided the number of factors is less than its rank - although it is questionable whether FA is theoretically appropriate then.

Generalized or Weighted Least Squares (GLS) is a modification of the previous one. When minimizing the residuals, it weights correlation coefficients differentially: correlations between variables with high uniqueness (at the current iteration) are given less weight$^3$. Use this method if you want your factors to fit highly unique variables (i.e. those weakly driven by the factors) worse than highly common variables (i.e. those strongly driven by the factors). This wish is not uncommon, especially in the questionnaire construction process (at least I think so), so this property is advantageous$^4$.

Maximum Likelihood (ML) assumes the data (the correlations) came from a population having a multivariate normal distribution (the other methods make no such assumption), and hence the residuals of the correlation coefficients must be normally distributed around 0. The loadings are iteratively estimated by the ML approach under the above assumption. The treatment of correlations is weighted by uniqueness in the same fashion as in the Generalized Least Squares method. While the other methods just analyze the sample as it is, the ML method allows some inference about the population; a number of fit indices and confidence intervals are usually computed along with it [unfortunately, mostly not in SPSS, although people have written macros for SPSS that do it]. The general fit chi-square test asks whether the factor-reproduced correlation matrix can pretend to be the population matrix of which the observed matrix is a random sample.

All the methods I briefly described are linear, continuous latent models. "Linear" implies that rank correlations, for example, should not be analyzed. "Continuous" implies that binary data, for example, should not be analyzed (IRT or FA based on tetrachoric correlations would be more appropriate).

$^1$ Because the correlation (or covariance) matrix $\bf R$, after the initial communalities were placed on its diagonal, will usually have some negative eigenvalues, and these are to be kept clear of; therefore PCA should be done by eigen-decomposition, not SVD.

$^2$ The ULS method includes iterative eigendecomposition of the reduced correlation matrix, like PAF, but within a more complex, Newton-Raphson optimization procedure aiming to find the unique variances ($\bf u^2$, uniquenesses) at which the correlations are reconstructed maximally. In doing so, ULS appears equivalent to the method called MINRES (only the extracted loadings appear somewhat orthogonally rotated in comparison with MINRES), which is known to directly minimize the sum of squared residuals of the correlations.

$^3$ The GLS and ML algorithms are basically like ULS, but the eigendecomposition on iterations is performed on the matrix $\bf uR^{-1}u$ (or on $\bf u^{-1}Ru^{-1}$), to incorporate the uniquenesses as weights. ML differs from GLS in adopting knowledge of the eigenvalue trend expected under a normal distribution.

$^4$ The fact that correlations produced by less common variables are permitted to be fitted worse may (I surmise so) give some room for the presence of partial correlations (which need not be explained), which seems nice. The pure common factor model "expects" no partial correlations, which is not very realistic.
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
The documentation of R on random number generation has a few sentences at its end that confirm your expectation of 32-bit integers being used and might explain what you are observing:

"Do not rely on randomness of low-order bits from RNGs. Most of the supplied uniform generators return 32-bit integer values that are converted to doubles, so they take at most 2^32 distinct values and long runs will return duplicated values (Wichmann-Hill is the exception, and all give at least 30 varying bits.)"

So the implementation in R seems to be different from what is explained on the website of the authors of the Mersenne Twister. Combining this with the birthday paradox, you would expect duplicates at a probability of about 0.5 with only around 2^16 numbers, and 10^5 > 2^16.

Trying the Wichmann-Hill algorithm as suggested in the documentation:

RNGkind(kind = "Wichmann-Hill")
set.seed(123)
n = 10^8
x = runif(n)
length(unique(x))  # 1e8

Note that the original Wichmann-Hill random number generator has the property that its next number can be predicted from its previous one, and therefore it does not meet the non-predictability requirements of a valid PRNG. See this document by Dutang and Wuertz, 2009 (section 3).
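The birthday-paradox arithmetic can be made explicit: if the generator effectively draws from K = 2^32 distinct values (the figure from the R documentation quote), the probability of at least one duplicate among n draws is approximately 1 - exp(-n(n-1)/2K). A quick Python check of that approximation:

```python
import math

K = 2 ** 32  # distinct values, per the R documentation quote

def p_duplicate(n, K=K):
    # Standard birthday-problem approximation
    return 1.0 - math.exp(-n * (n - 1) / (2.0 * K))

print(round(p_duplicate(10 ** 5), 3))  # ~0.69: a duplicate in 1e5 draws is likely
print(round(p_duplicate(2 ** 16), 3))  # ~0.39: 2^16 draws is near the 50% scale
```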
7,964
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
Just to emphasise the arithmetic of the $2^{32}$ point in terms of the number of potential distinct values: if you sample $10^5$ times from $2^{32}$ values with replacement, you would expect an average of $2^{32}\left(1-\left(1-\frac{1}{2^{32}}\right)^{10^5}\right) \approx 10^5 - 1.1641$ distinct values, noting that $\frac{(10^5)^2}{2 \times 2^{32}} \approx 1.1642$ is close to this deficit. So you would expect many earlier examples. There are two pairs with set.seed(1):

n = 10^5
set.seed(1)
x = runif(n)
x[21101] == x[56190]
x[33322] == x[50637]

If you do something similar for the first $2000$ seeds in R for the default runif, you get an average of $10^5 - 1.169$ unique values, which is close to the calculated expectation. Only $30.8\%$ of these seeds produce no duplicates in a sample of $10^5$. Sample $10^6$ times and you would expect the issue to be about a hundred times worse; indeed, the average number of unique values for the first $2000$ seeds is $10^6 - 116.602$, and none of these seeds failed to produce duplicates. There is another way of reducing the likelihood of overlaps while still having a uniform distribution: try pnorm(rnorm(n))

set.seed(123)
n = 10^8
x = runif(n)
length(unique(x)) # 98845390
y = pnorm(rnorm(n))
length(unique(y)) # 100000000
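The expectation above can be evaluated numerically. A small Python sketch (using expm1/log1p to sidestep the catastrophic cancellation in the naive form of $n - 2^{32}(1-(1-2^{-32})^n)$):

```python
from math import expm1, log1p

def expected_duplicates(n, bits=32):
    # E[duplicates] = n - N*(1 - (1 - 1/N)^n), computed stably via expm1/log1p
    N = 2.0 ** bits
    return n + N * expm1(n * log1p(-1.0 / N))

d5 = expected_duplicates(10**5)  # about 1.16 expected duplicates
d6 = expected_duplicates(10**6)  # about 116.4 expected duplicates
```

These match the empirical averages over seeds quoted above (about 1.17 and 116.6).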
7,965
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
Even though it is counterintuitive, there are good reasons that explain this phenomenon, essentially because a computer uses finite precision. A preprint has just been posted (March 2020) on arXiv (as already mentioned in the discussion, by the way) that treats this question thoroughly. It has been written by an experienced researcher in computational statistics (not me nor a friend of mine) and uses R. All the code is reproducible, and you can easily check the code and the claims by yourself. Just to cite the first lines of the Conclusion, which seem to answer your question:

Rather unintuitively (but, as it turns out, not unexpectedly), generating random numbers can lead to ties. For generating $n$ random numbers on a $k$-bit architecture, we showed that the expected number of ties is $n-2^{k}(1-(1-2^{-k})^{n})$. Furthermore, we derived a numerically robust formula to compute this number. For a 32-bit architecture as is still used in random number generators (be it for historical reasons, reproducibility or due to run time), the expected number of ties when generating one million random numbers is 116.

The cited version is the one posted on 18th March 2020. https://arxiv.org/abs/2003.08009
7,966
R: Problem with runif: generated number repeats (more often than expected) after less than 100 000 steps
There are two problems here. The first has been well covered in the other answers, to wit: why duplicates show up for certain configurations of the input arguments. The other is very important: there is a big difference between "random with replacement" and "random permutation of a known set". Mathematically, it's completely valid for a random integer sequence to contain, e.g., 6,6,6,6,6. Most PRNGs fail to do a complete "replacement" in their algorithm, so what we end up with is much closer to (but not exactly, as the example in the posted question shows) a random permutation of the set of values. In fact, since most PRNGs generate the next value based on the current (and possibly a few previous) value, they are almost Markov processes. We call them "random" because we agree that an outside observer cannot determine the generator algorithm, so the next number to show up is unpredictable to that observer. Consider, then, the difference between runif and sample, where the latter has an argument explicitly directing whether to select with or without replacement.
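The distinction is easy to see in code. A sketch in Python for illustration (the standard library's analogues of draws with replacement versus an explicit permutation, with a fixed seed so it is reproducible):

```python
import random

rng = random.Random(1)  # fixed seed for reproducibility

# like runif-style draws: sampling with replacement, repeats are allowed
with_replacement = rng.choices(range(10), k=10)

# like sample(..., replace=FALSE): a permutation, repeats are impossible
permutation = rng.sample(range(10), k=10)
```

The permutation is guaranteed to contain every value exactly once, whereas the with-replacement draw may repeat values, which is exactly the "ties are valid" point above.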
7,967
ROC vs Precision-recall curves on imbalanced dataset
First, the claim on the Kaggle post is bogus. The paper they reference, "The Relationship Between Precision-Recall and ROC Curves", never claims that PR AUC is better than ROC AUC. They simply compare their properties, without judging their value. ROC curves can sometimes be misleading in some very imbalanced applications. A ROC curve can still look pretty good (i.e. better than random) while misclassifying most or all of the minority class. In contrast, PR curves are specifically tailored for the detection of rare events and are pretty useful in those scenarios. They will show that your classifier has a low performance if it is misclassifying most or all of the minority class. But they don't translate well to more balanced cases, or cases where negatives are rare. In addition, because they are sensitive to the baseline probability of positive events, they don't generalize well and only apply to the specific dataset they were built on, or to datasets with the exact same balance. This means it is generally difficult to compare PR curves from different studies, limiting their usefulness. As always, it is important to understand the tools that are available to you and select the right one for the right application. I suggest reading the question ROC vs precision-and-recall curves here on CV.
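The "ROC looks good while the minority class is mishandled" effect is easy to reproduce with a small, fully deterministic sketch in Python (pure standard library; the hypothetical scores below place the 10 positives at ranks 10, 20, ..., 100 among 990 negatives, so the construction is artificial by design):

```python
def roc_auc(pos, neg):
    # probability that a random positive outscores a random negative
    wins = sum(p > n for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def average_precision(pos, neg):
    # mean of precision evaluated at each positive, walking down the ranking
    ranked = sorted([(s, 1) for s in pos] + [(s, 0) for s in neg], reverse=True)
    tp, precisions = 0, []
    for rank, (_, is_pos) in enumerate(ranked, start=1):
        if is_pos:
            tp += 1
            precisions.append(tp / rank)
    return sum(precisions) / len(precisions)

# 1000 examples ranked by score; the 10 positives sit at ranks 10, 20, ..., 100
pos_ranks = {10 * k for k in range(1, 11)}
pos = [1000 - r for r in pos_ranks]
neg = [1000 - r for r in range(1, 1001) if r not in pos_ranks]

auc = roc_auc(pos, neg)           # 0.95: the ROC curve looks excellent
ap = average_precision(pos, neg)  # 0.10: the PR curve exposes the problem
```

Every positive is outranked by nine negatives, so precision is 0.1 at each of them, yet ROC AUC is 0.95 because the positives still beat the vast bulk of the 990 negatives.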
7,968
ROC vs Precision-recall curves on imbalanced dataset
Your example is definitely correct. However, I think in the context of a Kaggle competition / real-life application, a skewed dataset usually means a dataset with far fewer positive samples than negative samples. Only in this case is PR AUC more "meaningful" than ROC AUC. Consider a detector with TP=9, FN=1, TN=900, FP=90, where there are 10 positive and 990 negative samples. TPR=0.9 and FPR=90/990≈0.09, which indicates a good ROC score; however, Precision=9/99≈0.09, which indicates a bad PR score.
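Working the same confusion matrix through in code (a small Python sketch; note the exact values are 90/990 ≈ 0.091 for FPR and 9/99 ≈ 0.091 for precision):

```python
def rates(tp, fn, tn, fp):
    tpr = tp / (tp + fn)        # recall / sensitivity: y-axis of the ROC curve
    fpr = fp / (fp + tn)        # x-axis of the ROC curve
    precision = tp / (tp + fp)  # y-axis of the PR curve
    return tpr, fpr, precision

tpr, fpr, precision = rates(tp=9, fn=1, tn=900, fp=90)
# tpr = 0.9 with fpr ~ 0.09 is a good-looking ROC point,
# yet precision ~ 0.09 reveals that 10 out of 11 alarms are false
```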
7,969
ROC vs Precision-recall curves on imbalanced dataset
You're halfway there. Usually when I do imbalanced models, heck, even balanced models, I look at PR for ALL my classes. In your example, yes, your positive class has P = 0.9 and R = 1.0. But what you should look at are ALL your classes. So for your negative class, your P = 0 and your R = 0. And you usually don't just look at PR scores individually. You want to look at the F1-score (macro or micro F1, depending on your problem), which harmonically averages precision and recall and is then combined across class 1 and class 0. Your class 1 PR score is super good, but combine that with your class 0 PR score and your F1-score will be TERRIBLE. TL;DR: Look at PR scores for ALL your classes, and combine them with a metric like F1-score to reach a realistic conclusion about your model performance. The F1-score for your scenario will be TERRIBLE, which is the correct conclusion.
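The arithmetic for the scenario in the question, sketched in Python (per-class harmonic mean of precision and recall, zero by convention when P + R = 0, then macro-averaged):

```python
def f1(precision, recall):
    if precision + recall == 0:
        return 0.0  # conventional value when the class is never predicted correctly
    return 2 * precision * recall / (precision + recall)

f1_pos = f1(0.9, 1.0)             # about 0.95 on its own: looks great
f1_neg = f1(0.0, 0.0)             # 0.0: the negative class is a disaster
macro_f1 = (f1_pos + f1_neg) / 2  # about 0.47: the realistic overall picture
```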
7,970
Convolutional Layers: To pad or not to pad?
There are a couple of reasons padding is important:

It's easier to design networks if we preserve the height and width and don't have to worry too much about tensor dimensions when going from one layer to another, because dimensions will just "work".

It allows us to design deeper networks. Without padding, the volume size would shrink too quickly.

Padding actually improves performance by keeping information at the borders. Quote from Stanford lectures: "In addition to the aforementioned benefit of keeping the spatial sizes constant after CONV, doing this actually improves performance. If the CONV layers were to not zero-pad the inputs and only perform valid convolutions, then the size of the volumes would reduce by a small amount after each CONV, and the information at the borders would be 'washed away' too quickly." - source

As @dontloo already said, new network architectures need to concatenate convolutional layers with 1x1, 3x3 and 5x5 filters, and it wouldn't be possible if they didn't use padding because the dimensions wouldn't match. Check this image of the inception module to understand better why padding is useful here.
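The size arithmetic behind these points can be sketched with the standard convolution output-size formula (a Python illustration; the stacking example shows how quickly unpadded 3x3 layers eat the spatial extent):

```python
def conv_output_size(n, kernel, padding=0, stride=1):
    # standard formula: floor((n + 2*padding - kernel) / stride) + 1
    return (n + 2 * padding - kernel) // stride + 1

no_pad = conv_output_size(28, kernel=3)           # 26: shrinks every layer
same = conv_output_size(28, kernel=3, padding=1)  # 28: size preserved

deep = 28
for _ in range(10):                               # ten unpadded 3x3 layers
    deep = conv_output_size(deep, kernel=3)       # 28 -> 8, losing 2 per layer
```

With "same" padding the loop would stay at 28x28 indefinitely, which is what makes very deep stacks practical.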
7,971
Convolutional Layers: To pad or not to pad?
It seems to me the most important reason is to preserve the spatial size. As you said, we can trade off the decrease in spatial size by removing pooling layers. However, many recent network structures (like residual nets, inception nets, fractal nets) operate on the outputs of different layers, which requires a consistent spatial size between them. Another thing is that, without padding, the pixels in the corner of the input only affect the pixels in the corresponding corner of the output, while the pixels in the centre contribute to a neighbourhood in the output. When several no-padding layers get stacked together, the network sort of ignores the border pixels of the image. These are just some of my understandings; I believe there are other good reasons.
7,972
Convolutional Layers: To pad or not to pad?
There are already some very good answers here. I want to add some more details about the image border effects (which were already mentioned), which depend on the padding type used. There are 3 relevant padding types in deep learning:

valid (no padding at all)
same (keep image size by adding zeros around the image - that's what you are talking about and that's what most of the time is called "zero padding" in deep learning context)
full (ensure all pixels have the same influence on the output; even more zeros are added around the image, and the output is larger than the input)

Here is a sketch of how these 3 padding types work, with x the size-3 input, k the size-3 kernel (which is shifted to all possible locations), y the output and 0 indicating zero padding:

valid:
xxx
kkk
 y

same:
0xxx0
kkk
 kkk
  kkk
 yyy

full:
00xxx00
kkk
 kkk
  kkk
   kkk
    kkk
 yyyyy

Let's look at how much influence (how often the kernel "touches" the pixel) a pixel in a 10x10 input image that is processed by a 3x3 convolution kernel has on the output (left same, right valid padding): As you can see, with same padding the border pixels have less influence than the central pixels, so it is not true that same padding removes boundary effects completely (as one can sometimes read on the internet). For valid padding, this problem is even more severe. With full padding, on the other hand, all pixels have the same influence on the output. As the network gets deeper, the problem gets more intense - both for valid and same padding. I summarized my findings from the padding experiments I did, and here is an interesting paper about this topic.
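The influence counts described above can be reproduced with a few lines of Python (a 1D sketch; the 2D counts in the 10x10 figure are just products of the per-axis counts):

```python
def influence_1d(n, k, pad):
    # how many kernel placements touch each of the n input pixels
    counts = [0] * n
    for start in range(-pad, n + pad - k + 1):  # all placements over the padded axis
        for offset in range(k):
            i = start + offset
            if 0 <= i < n:
                counts[i] += 1
    return counts

valid = influence_1d(10, 3, pad=0)  # [1, 2, 3, ..., 3, 2, 1]: borders fade fast
same = influence_1d(10, 3, pad=1)   # [2, 3, ..., 3, 2]: borders still weaker
full = influence_1d(10, 3, pad=2)   # [3, 3, ..., 3]: every pixel equal
```

Only full padding gives every input pixel the same number of kernel touches, matching the claim above.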
7,973
Convolutional Layers: To pad or not to pad?
Great question. Drag0 explained nicely but I agree, something is amiss. It's like looking at a photograph and having to deal with the border. In real life, you can move your eyes to look further; No real borders exist. So it is a limitation of the medium. Besides preserving size, does it matter? I am not aware of a satisfactory answer but I conjecture (unproven) that with experiments on attention and occlusion (partial objects), we don't need the information lost on the borders. If you were to do something smarter (say copy the pixel next to it), it wouldn't change the answer though I have not experimented myself. Padding with 0s is fast and preserves size, so that's why we do it.
7,974
Convolutional Layers: To pad or not to pad?
Elaborating on keeping information at the border: basically, a pixel at the corner (green shaded) is used only once when the convolution is applied, whereas one in the middle (red shaded) contributes to the resulting feature map multiple times. Thus, we pad the image. See figure 2.
7,975
Convolutional Layers: To pad or not to pad?
This is my thinking: zero padding is important early on for keeping the size of the output feature map, and as someone said above, zero padding gives better performance. But what about at the end of the network? There the feature map resolution is very small, and each pixel value amounts to a vector of rather global extent. I think in that last case some kind of mirroring is better than zero padding.
7,976
Convolutional Layers: To pad or not to pad?
I'll try to explain, from an information standpoint, when it is okay to pad and when it is not. As a base case, take TensorFlow's padding functionality. It provides two options, "valid" or "same". "Same" will preserve the size of the output and keep it the same as that of the input by adding suitable padding, while "valid" won't do that, and some people claim this leads to a loss of information. But here's the catch: this information loss depends on the size of the kernel or filter you're using. For example, say you have a 28x28 image and the filter size is 15x15. The output should have dimension 14x14, but if you pad using "same" in TensorFlow it will be 28x28. Now the 14 extra rows and 14 extra columns don't carry any meaningful information in themselves but are still there as a form of noise, and we all know how susceptible deep learning models are to noise. This can degrade training a lot. So if you're using big filters, it's better not to pad.
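The output sizes above can be checked with a little arithmetic; a sketch mirroring TensorFlow-style "valid"/"same" modes for one spatial dimension (the function name is illustrative):

```python
import math

def conv_output_size(in_size, kernel, stride=1, padding="valid"):
    # Mirrors TensorFlow's two padding modes for one spatial dimension.
    if padding == "valid":            # no padding: size shrinks
        return math.floor((in_size - kernel) / stride) + 1
    if padding == "same":             # pad so size is preserved (for stride 1)
        return math.ceil(in_size / stride)
    raise ValueError(f"unknown padding: {padding}")

# 28x28 input with a 15x15 filter: "valid" gives 14x14, "same" keeps 28x28,
# so "same" adds 14 rows and 14 columns of zero padding overall.
print(conv_output_size(28, 15, padding="valid"))  # 14
print(conv_output_size(28, 15, padding="same"))   # 28
```

With a small 3x3 filter the gap is only 2 rows/columns (26 vs 28), which is why the noise argument above mainly bites for large filters.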
7,977
What does an Added Variable Plot (Partial Regression Plot) explain in a multiple regression?
For illustration I will take a less complex regression model $Y = \beta_1 + \beta_2 X_2 + \beta_3 X_3 + \epsilon$ where the predictor variables $X_2$ and $X_3$ may be correlated. Let's say the slopes $\beta_2$ and $\beta_3$ are both positive so we can say that (i) $Y$ increases as $X_2$ increases, if $X_3$ is held constant, since $\beta_2$ is positive; (ii) $Y$ increases as $X_3$ increases, if $X_2$ is held constant, since $\beta_3$ is positive. Note that it's important to interpret multiple regression coefficients by considering what happens when the other variables are held constant ("ceteris paribus"). Suppose I just regressed $Y$ against $X_2$ with a model $Y = \beta_1' + \beta_2' X_2 + \epsilon'$. My estimate for the slope coefficient $\beta_2'$, which measures the effect on $Y$ of a one unit increase in $X_2$ without holding $X_3$ constant, may be different from my estimate of $\beta_2$ from the multiple regression - that also measures the effect on $Y$ of a one unit increase in $X_2$, but it does hold $X_3$ constant. The problem with my estimate $\hat{\beta_2'}$ is that it suffers from omitted-variable bias if $X_2$ and $X_3$ are correlated. To understand why, imagine $X_2$ and $X_3$ are negatively correlated. Now when I increase $X_2$ by one unit, I know the mean value of $Y$ should increase since $\beta_2 > 0$. But as $X_2$ increases, if we don't hold $X_3$ constant then $X_3$ tends to decrease, and since $\beta_3 > 0$ this will tend to reduce the mean value of $Y$. So the overall effect of a one unit increase in $X_2$ will appear lower if I allow $X_3$ to vary also, hence $\beta_2' < \beta_2$. Things get worse the more strongly $X_2$ and $X_3$ are correlated, and the larger the effect of $X_3$ through $\beta_3$ - in a really severe case we may even find $\beta_2' < 0$ even though we know that, ceteris paribus, $X_2$ has a positive influence on $Y$! 
Hopefully you can now see why drawing a graph of $Y$ against $X_2$ would be a poor way to visualise the relationship between $Y$ and $X_2$ in your model. In my example, your eye would be drawn to a line of best fit with slope $\hat{\beta_2'}$ that doesn't reflect the $\hat{\beta_2}$ from your regression model. In the worst case, your model may predict that $Y$ increases as $X_2$ increases (with other variables held constant) and yet the points on the graph suggest $Y$ decreases as $X_2$ increases. The problem is that in the simple graph of $Y$ against $X_2$, the other variables aren't held constant. This is the crucial insight into the benefit of an added variable plot (also called a partial regression plot) - it uses the Frisch-Waugh-Lovell theorem to "partial out" the effect of other predictors. The horizontal and vertical axes on the plot are perhaps most easily understood* as "$X_2$ after other predictors are accounted for" and "$Y$ after other predictors are accounted for". You can now look at the relationship between $Y$ and $X_2$ once all other predictors have been accounted for. So for example, the slope you can see in each plot now reflects the partial regression coefficients from your original multiple regression model. A lot of the value of an added variable plot comes at the regression diagnostic stage, especially since the residuals in the added variable plot are precisely the residuals from the original multiple regression. This means outliers and heteroskedasticity can be identified in a similar way to when looking at the plot of a simple rather than multiple regression model. Influential points can also be seen - this is useful in multiple regression since some influential points are not obvious in the original data before you take the other variables into account. 
In my example, a moderately large $X_2$ value may not look out of place in the table of data, but if the $X_3$ value is large as well despite $X_2$ and $X_3$ being negatively correlated then the combination is rare. "Accounting for other predictors", that $X_2$ value is unusually large and will stick out more prominently on your added variable plot. $*$ More technically they would be the residuals from running two other multiple regressions: the residuals from regressing $Y$ against all predictors other than $X_2$ go on the vertical axis, while the residuals from regressing $X_2$ against all other predictors go on the horizontal axis. This is really what the legends of "$Y$ given others" and "$X_2$ given others" are telling you. Since the mean residual from both of these regressions is zero, the mean point of ($X_2$ given others, $Y$ given others) will just be (0, 0), which explains why the regression line in the added variable plot always goes through the origin. But I often find that mentioning the axes are just residuals from other regressions confuses people (unsurprising perhaps since we are now talking about four different regressions!) so I have tried not to dwell on the matter. Comprehend them as "$X_2$ given others" and "$Y$ given others" and you should be fine.
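The claim that the added-variable-plot slope equals the multiple-regression coefficient (the Frisch-Waugh-Lovell theorem) is easy to verify numerically; a sketch with simulated data matching the example above (negatively correlated $X_2$, $X_3$):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x3 = rng.normal(size=n)
x2 = -0.7 * x3 + rng.normal(size=n)          # X2 and X3 negatively correlated
y = 1.0 + 2.0 * x2 + 3.0 * x3 + rng.normal(size=n)

# Full multiple regression: Y ~ 1 + X2 + X3
X = np.column_stack([np.ones(n), x2, x3])
beta = np.linalg.lstsq(X, y, rcond=None)[0]   # beta[1] is the coefficient on X2

# Frisch-Waugh-Lovell: residualise Y and X2 on the other predictors
Z = np.column_stack([np.ones(n), x3])
ry = y - Z @ np.linalg.lstsq(Z, y, rcond=None)[0]    # "Y given others"
rx = x2 - Z @ np.linalg.lstsq(Z, x2, rcond=None)[0]  # "X2 given others"

# Slope of the added-variable plot equals the multiple-regression coefficient
slope = (rx @ ry) / (rx @ rx)
print(np.isclose(slope, beta[1]))  # True
```

Plotting `ry` against `rx` gives exactly the added variable plot for $X_2$, and its least-squares line (through the origin) has slope $\hat{\beta_2}$.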
7,978
What does an Added Variable Plot (Partial Regression Plot) explain in a multiple regression?
"Is there anything that can really be said about the trends seen in the plots?" Sure: their slopes are the regression coefficients from the original model (partial regression coefficients, all other predictors held constant).
7,979
Student t as mixture of gaussian
The PDF of a Normal distribution is $$f_{\mu, \sigma}(x) = \frac{1}{\sqrt{2 \pi} \sigma} e^{-\frac{(x-\mu )^2}{2 \sigma ^2}}dx$$ but in terms of $\tau = 1/\sigma^2$ it is $$g_{\mu, \tau}(x) = \frac{\sqrt{\tau}}{\sqrt{2 \pi}} e^{-\frac{\tau(x-\mu )^2}{2 }}dx.$$ The PDF of a Gamma distribution is $$h_{\alpha, \beta}(\tau) = \frac{1}{\Gamma(\alpha)}e^{-\frac{\tau}{\beta }} \tau^{-1+\alpha } \beta ^{-\alpha }d\tau.$$ Their product, slightly simplified with easy algebra, is therefore $$f_{\mu, \alpha, \beta}(x,\tau) =\frac{1}{\beta^\alpha\Gamma(\alpha)\sqrt{2 \pi}} e^{-\tau\left(\frac{(x-\mu )^2}{2 } + \frac{1}{\beta}\right)} \tau^{-1/2+\alpha}d\tau dx.$$ Its inner part evidently has the form $\exp(-\text{constant}_1 \times \tau) \times \tau^{\text{constant}_2}d\tau$, making it a multiple of a Gamma function when integrated over the full range $\tau=0$ to $\tau=\infty$. That integral therefore is immediate (obtained by knowing the integral of a Gamma distribution is unity), giving the marginal distribution $$f_{\mu, \alpha, \beta}(x) = \frac{\sqrt{\beta } \Gamma \left(\alpha +\frac{1}{2}\right) }{\sqrt{2\pi } \Gamma (\alpha )}\frac{1}{\left(\frac{\beta}{2} (x-\mu )^2+1\right)^{\alpha +\frac{1}{2}}}.$$ Trying to match the pattern provided for the $t$ distribution shows there is an error in the question: the PDF for the Student t distribution actually is proportional to $$\frac{1}{\sqrt{k} s }\left(\frac{1}{1+k^{-1}\left(\frac{x-l}{s}\right)^2}\right)^{\frac{k+1}{2}}$$ (the power of $(x-l)/s$ is $2$, not $1$). Matching the terms indicates $k = 2 \alpha$, $l=\mu$, and $s = 1/\sqrt{\alpha\beta}$. Notice that no Calculus was needed for this derivation: everything was a matter of looking up the formulas of the Normal and Gamma PDFs, carrying out some trivial algebraic manipulations involving products and powers, and matching patterns in algebraic expressions (in that order).
7,980
Student t as mixture of gaussian
I don't know the steps of the calculation, but I do know the result from some book (cannot remember which one...). I usually just keep it in mind directly... :-) The Student $t$ distribution with $k$ degrees of freedom can be regarded as a Normal distribution with a variance mixture $Y$, where $Y$ follows an inverse gamma distribution. More precisely, if $X \sim t(k)$, then $X = \sqrt{Y}\,\Phi$, where $Y \sim IG(k/2, k/2)$ and $\Phi$ is a standard normal random variable. I hope this could help you in some sense.
7,981
Student t as mixture of gaussian
To simplify, we assume mean $0$. Using a representation, we show the result for integer degrees of freedom. $$ \sqrt{1/\tau}\, X = Y $$ is equivalent to a Gaussian mixture with that prior: conditioned on $\tau$, $Y$ is Gaussian with precision $\tau$, and the prior on $\tau$ is as desired. Then it remains to show that $\sqrt{1/\tau}\, X$ is a t-distribution. We can write $$ \tau \sim \Gamma(\alpha, \beta) \sim \frac{\beta}{2} \Gamma(\alpha, 2) \sim \frac{\beta}{2} \chi^2(2\alpha) $$ using a well-known result about gammas and chi-squares (decompose a gamma as a sum of exponentials, and combine the exponentials into normals and then into chi-squares). This in turn implies that $$ Y \sim X \frac{1}{\sqrt{(\beta/2) \chi^2(2\alpha)} } = \frac{ X/\sqrt{\alpha \beta} }{\sqrt{ \chi^2_{2\alpha}/(2\alpha)}} $$ which is a scaled t with $ k=2\alpha$ and $s=1/\sqrt{\alpha \beta} $, by the variance of the t. We can recenter our representation at $\mu$, and $l$ would follow.
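The gamma-to-chi-square step $\Gamma(\alpha, \beta) \sim \frac{\beta}{2}\chi^2(2\alpha)$ can be checked numerically; a quick sketch (parameter values are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, beta, n = 2.5, 1.5, 100_000

g = rng.gamma(shape=alpha, scale=beta, size=n)          # Gamma(alpha, beta)
c = (beta / 2) * rng.chisquare(df=2 * alpha, size=n)    # (beta/2) * chi2(2*alpha)

# Same distribution: the two-sample KS distance between the draws is tiny.
d = stats.ks_2samp(g, c).statistic
print(d < 0.02)  # True
```

Both samples also share mean $\alpha\beta$ and variance $\alpha\beta^2$, consistent with the identity.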
7,982
Propensity score matching after multiple imputation
The first thing to say is that, for me, method 1 (sampling) seems to be without much merit - it discards the benefits of multiple imputation and reduces to single imputation for each observation, as mentioned by Stas. I can't see any advantage in using it. There is an excellent discussion of the issues surrounding propensity score analysis with missing data in Hill (2004): Hill, J. "Reducing Bias in Treatment Effect Estimation in Observational Studies Suffering from Missing Data" ISERP Working Papers, 2004. It is downloadable from here. The paper considers two approaches to combining multiple imputation (and also other methods of dealing with missing data) with propensity scores:

- averaging the propensity scores after multiple imputation, followed by causal inference (method 2 in your post above)
- causal inference using each set of propensity scores from the multiple imputations, followed by averaging of the causal estimates.

Additionally, the paper considers whether the outcome should be included as a predictor in the imputation model. Hill asserts that while multiple imputation is preferred to other methods of dealing with missing data in general, there is no a priori reason to prefer one of these techniques over the other. However, there may be reasons to prefer averaging the propensity scores, particularly when using certain matching algorithms. Hill did a simulation study in the same paper and found that averaging the propensity scores prior to causal inference, when including the outcome in the imputation model, produced the best results in terms of mean squared error, while averaging the scores first but without the outcome in the imputation model produced the best results in terms of average bias (absolute difference between estimated and true treatment effect). Generally, it is advisable to include the outcome in the imputation model (for example see here). So it would seem that your method 2 is the way to go.
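A minimal sketch of the "average the scores first" approach (method 2), with simulated propensity scores standing in for the ones a logistic model would produce on each completed dataset:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 200, 15                      # subjects, imputations

# Simulated stand-ins: in practice each row of ps would come from a
# propensity model fit on one multiply-imputed completed dataset.
treated = rng.integers(0, 2, size=n).astype(bool)
ps = rng.uniform(0.1, 0.9, size=(m, n))

# Method 2: average the propensity scores across imputations first ...
ps_avg = ps.mean(axis=0)

# ... then carry out the causal-inference step once, e.g. 1:1
# nearest-neighbour matching on the averaged score.
controls = np.where(~treated)[0]
matches = {t: controls[np.argmin(np.abs(ps_avg[controls] - ps_avg[t]))]
           for t in np.where(treated)[0]}
```

Each treated subject ends up matched to the control with the closest averaged score; the treatment-effect estimate then comes from the matched pairs, as in a complete-data analysis.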
7,983
Propensity score matching after multiple imputation
There might be a clash of two paradigms. Multiple imputation is a heavily model-based Bayesian solution: the concept of the proper imputation essentially states that you need to sample from the well-defined posterior distribution of the data, otherwise you are screwed. Propensity score matching, on the other hand, is a semi-parametric procedure: once you have computed your propensity score (no matter how, you could've used a kernel density estimate, not necessarily a logit model), you can do the rest by simply taking the differences between the treated and non-treated observations with the same propensity score, which is kinda non-parametric now, as there is no model left that controls for other covariates. I don't feel good about the discontinuities introduced by the literal implementation of matching (find the control with the closest possible value of the propensity score, and ignore the rest; Abadie and Imbens (2008) discussed that it makes it impossible to actually get the standard errors right in some of the matching situations). I would give more trust to the smoother approaches like weighting by the inverse propensity. My favorite reference on this is "Mostly Harmless Econometrics", subtitled "An Empiricist Companion", and aimed at economists, but I think this book should be a required reading for other social scientists, most biostatisticians, and non-bio statisticians as well so that they know how other disciplines approach data analysis. At any rate, using only one out of 15 simulated complete data lines per observation is equivalent to a single imputation. As a result, you lose efficiency compared to all 15 completed data sets, and you can't estimate the standard errors properly. Looks like a deficient procedure to me, from any angle. Of course, we happily sweep under the carpet the assumption that both the multiple imputation model and the propensity model are correct in the sense of having all the right variables in all the right functional forms. 
There is little way to check that (although I'd be happy to hear otherwise about diagnostic measures for both of these methods).
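As an aside, the inverse-propensity weighting mentioned above can be sketched as follows (simulated data with a known propensity score; in practice the propensity would itself be estimated):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Simulated data: confounder x drives both treatment assignment and outcome.
x = rng.normal(size=n)
e = 1 / (1 + np.exp(-x))                      # true propensity score
t = rng.uniform(size=n) < e                   # treatment indicator
y = 2.0 * t + 1.5 * x + rng.normal(size=n)    # true treatment effect is 2

# The naive difference in means is confounded upwards; weighting subjects by
# the inverse of their (non-)treatment propensity recovers the true effect.
naive = y[t].mean() - y[~t].mean()
ipw = np.mean(t * y / e) - np.mean((1 - t) * y / (1 - e))
print(f"naive={naive:.2f}, ipw={ipw:.2f}")   # ipw is close to 2
```

Unlike nearest-neighbour matching, this estimator is smooth in the propensity score, which is one reason to prefer it.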
7,984
Propensity score matching after multiple imputation
I can't really speak to the theoretical aspects of the question, but I'll give my experience using PS/IPTW models and multiple imputation.
I've never heard of someone using multiply imputed data sets and random sampling to build a single data set. That doesn't necessarily mean it's wrong, but it's a strange approach to use. The data set also isn't big enough that you'd need to get creative to get around running 3-5 models instead of just one to save time and computation.
Rubin's rules and the pooling method are a pretty general tool. Given that the pooled, multiply imputed result can be calculated using only the variances and estimates, there's no reason I can see that it couldn't be used for your project - creating the imputed data, performing the analysis on each set, and then pooling. It's what I've done, it's what I've seen done, and unless you have a specific justification not to do it, I can't really see a reason to go with something more exotic - especially if you don't understand what's going on with the method.
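For concreteness, the pooling step is simple enough to do by hand. A sketch of Rubin's rules with made-up estimates and standard errors from m = 5 completed data sets (the numbers are purely illustrative):

```r
est <- c(0.52, 0.49, 0.55, 0.47, 0.51)   # estimate from each completed data set
se  <- c(0.10, 0.11, 0.09, 0.10, 0.12)   # its standard error
m    <- length(est)
qbar <- mean(est)                        # pooled point estimate
W    <- mean(se^2)                       # within-imputation variance
B    <- var(est)                         # between-imputation variance
Tvar <- W + (1 + 1/m) * B                # total variance (Rubin's rules)
c(estimate = qbar, se = sqrt(Tvar))
```

Note the pooled standard error is larger than the average within-imputation one: the between-imputation term is exactly the price of the missing data, which is what a single randomly sampled completed data set would hide.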
7,985
How to measure smoothness of a time series in R?
The standard deviation of the differences will give you a rough smoothness estimate:

x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0)
y <- c(-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0)
sd(diff(x))
sd(diff(y))

Update: As Cyan points out, that gives you a scale-dependent measure. A similar scale-independent measure would use the coefficient of variation rather than the standard deviation:

sd(diff(x)) / abs(mean(diff(x)))
sd(diff(y)) / abs(mean(diff(y)))

In both cases, small values correspond to smoother series.
7,986
How to measure smoothness of a time series in R?
The lag-one autocorrelation will serve as a score and has a reasonably straightforward statistical interpretation too.

cor(x[-length(x)], x[-1])

Score interpretation:
- scores near 1 imply a smoothly varying series
- scores near 0 imply that there's no overall linear relationship between a data point and the following one (that is, plot(x[-length(x)], x[-1]) won't give a scatterplot with any apparent linearity)
- scores near -1 suggest that the series is jagged in a particular way: if one point is above the mean, the next is likely to be below the mean by about the same amount, and vice versa.
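Applied to the two example series from the earlier answer (x smooth, y jagged), the score separates them cleanly:

```r
x <- c(-1, -0.8, -0.6, -0.4, -0.2, 0, 0.2, 0.4, 0.6, 0.8, 1.0)
y <- c(-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0)
cor(x[-length(x)], x[-1])   # exactly 1: x increases smoothly (in fact linearly)
cor(y[-length(y)], y[-1])   # negative: y flips around its mean from step to step
```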
7,987
How to measure smoothness of a time series in R?
To estimate the roughness of an array, take the squared differences of the normalized differences, and divide by 4. This gives you scale-independence (because of the normalization) and ignores trends (because of using the second difference).

firstD = diff(x)
normFirstD = (firstD - mean(firstD)) / sd(firstD)
roughness = (diff(normFirstD)^2) / 4

Zero will be perfect smoothness, 1 is maximal roughness. You then either use the sum of this measure or its mean, depending on whether you want your roughness measure to be length-independent.

I think this may be the same as a previous answer elsewhere, and similar things are discussed in academic sources like this and this, saying we should integrate the squared second derivative. I don't read algebra, so I'm not sure if what I'm suggesting is quite the same as any of these.
7,988
How to measure smoothness of a time series in R?
You could just check the correlation against the timestep number. That would be equivalent to taking the RΒ² of a simple linear regression on the timeseries. Note, though, that those are two very different timeseries, so I don't know how well that works as a comparison.
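To make the equivalence concrete: the squared correlation between a series and its time index is the RΒ² of regressing the series on time. Using the jagged example series from the first answer:

```r
y <- c(-1, 0.8, -0.6, 0.4, -0.2, 0, 0.2, -0.4, 0.6, -0.8, 1.0)
t <- seq_along(y)
cor(y, t)^2                    # squared correlation with the time step
summary(lm(y ~ t))$r.squared   # identical to the R^2 of the regression
```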
7,989
What are the properties of a half Cauchy distribution?
A half-Cauchy is one of the symmetric halves of the Cauchy distribution (if unspecified, it is the right half that's intended). Since the area of the right half of a Cauchy is $\frac12$, the density must then be doubled. Hence the 2 in your pdf (though it's missing a $\frac{1}{\pi}$, as whuber noted in comments).

The half-Cauchy has many properties; some are useful properties we may want in a prior. A common choice for a prior on a scale parameter is the inverse gamma (not least because it's conjugate for some familiar cases). When a weakly informative prior is desired, very small parameter values are used. The half-Cauchy is quite heavy-tailed and it, too, may be regarded as fairly weakly informative in some situations. Gelman ([1], for example) advocates for half-t priors (including the half-Cauchy) over the inverse gamma because they have better behavior for small parameter values, but only regards it as weakly informative when a large scale parameter is used*. Gelman has focused more on the half-Cauchy in more recent years. The paper by Polson and Scott [2] gives additional reasons for choosing the half-Cauchy in particular.

* Your post shows a standard half-Cauchy. Gelman would probably not choose that for a prior. If you have no sense at all of the scale, it corresponds to saying that the scale is as likely to be above 1 as below 1 (which may be what you want), but it wouldn't necessarily fit with some of the things Gelman is arguing for.

[1] A. Gelman (2006), "Prior distributions for variance parameters in hierarchical models", Bayesian Analysis, Vol. 1, No. 3, pp. 515-533. http://www.stat.columbia.edu/~gelman/research/published/taumain.pdf
[2] N. G. Polson and J. G. Scott (2012), "On the Half-Cauchy Prior for a Global Scale Parameter", Bayesian Analysis, Vol. 7, No. 4, pp. 887-902. https://projecteuclid.org/euclid.ba/1354024466
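To see the doubling concretely, here is the corrected standard half-Cauchy density, f(x) = 2 / (Ο€ (1 + xΒ²)) for x β‰₯ 0, checked against R's built-in Cauchy density:

```r
dhalfcauchy <- function(x) 2 / (pi * (1 + x^2))   # standard half-Cauchy pdf, x >= 0
xs <- c(0, 0.5, 1, 2, 10)
all.equal(dhalfcauchy(xs), 2 * dcauchy(xs))       # folding doubles the full Cauchy pdf
integrate(dhalfcauchy, 0, Inf)$value              # integrates to 1, as a density must
```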
7,990
In boosting, why are the learners "weak"?
So, boosting is a learning algorithm which can generate high-accuracy predictions using as a subroutine another algorithm, which in turn can efficiently generate hypotheses just slightly better (by an inverse polynomial) than random guessing. Its main advantage is speed. When Schapire presented it in 1990, it was a breakthrough in that it showed that a polynomial-time learner generating hypotheses with errors just slightly smaller than 1/2 can be transformed into a polynomial-time learner generating hypotheses with an arbitrarily small error.

So, the theory to back up your question is in "The strength of weak learnability" (pdf), where he basically showed that "strong" and "weak" learning are equivalent. And perhaps the answer to the original question is, "there's no point constructing strong learners when you can construct weak ones more cheaply".

From the relatively recent papers, there's "On the equivalence of weak learnability and linear separability: new relaxations and efficient boosting algorithms" (pdf), which I don't understand but which seems related and may be of interest to more educated people :)
7,991
In boosting, why are the learners "weak"?
I will address overfitting, which hasn't been mentioned yet, with a more intuitive explanation. Your first question was:

What are the benefits of using weak as opposed to strong learners? (e.g. why not boost with "strong" learning methods - are we more prone to overfitting?)

The main reasons, in my understanding, are:
- Speed, as covered pretty well in the other answers;
- Accuracy improvement: if you already have a strong learner, the benefits of boosting are less relevant;
- Avoiding overfitting, as you guessed.

Think about it this way: what boosting does is to combine many different hypotheses from the hypothesis space so that we end up with a better final hypothesis. The great power of boosting, therefore, comes from the diversity of the hypotheses combined. If we use a strong learner, this diversity tends to decrease: after each iteration there won't be many errors (since the model is complex), which won't make boosting change the new hypothesis much. With very similar hypotheses, the ensemble will be very similar to a single complex model, which in turn tends to overfit!
7,992
In boosting, why are the learners "weak"?
In boosting we use weak learners mostly because they are trained faster than strong learners. Think about it. If I use a multi-layer neural network as the learner, then I need to train lots of them. On the other hand, a decision tree may be a lot faster; then I can train lots of them.

Let's say I use 100 learners. I train a NN in 100 seconds and a decision tree in 10 seconds. My first boosting with the NN will take 100*100 seconds, while the second boosting with the decision tree will take 100*10 seconds.

That said, I have seen articles which use strong learners in boosting. But in those problems the strong learners were fast, in my opinion. I tried to train an MLP on the KDD99 Intrusion Detection Dataset (4+ million records) using Weka. It took more than 72 hours on my machine. But boosting (AdaBoostM1 with Decision Tree - Decision Stump) took only 3 hours. In this problem it is clear that I cannot use boosting with a strong learner, that is, a learner which takes too much time.
7,993
Why is variable selection necessary?
Variable selection (without penalization) only makes things worse. Variable selection has almost no chance of finding the "right" variables, and results in large overstatements of the effects of the remaining variables and huge understatement of standard errors. It is a mistake to believe that variable selection done in the usual way helps one get around the "large p, small n" problem. The bottom line is that the final model is misleading in every way. This is related to an astounding statement I read in an epidemiology paper: "We didn't have an adequate sample size to develop a multivariable model, so instead we performed all possible tests for 2x2 tables."

Any time the dataset at hand is used to eliminate variables, while making use of Y to make the decision, all statistical quantities will be distorted. Typical variable selection is a mirage.

Edit: (Copying comments from below hidden by the fold)

I don't want to be self-serving, but my book Regression Modeling Strategies goes into this in some depth. Online materials, including handouts, may be found at my web page. Some available methods are $L_2$ penalization (ridge regression), $L_1$ penalization (lasso), and the so-called elastic net (a combination of $L_1$ and $L_2$). Or use data reduction (blinded to the response $Y$) before doing regression. My book spends more space on this than on penalization.
7,994
Why is variable selection necessary?
First of all, the disadvantages you mentioned are the effects of feature selection done wrong, i.e. overfitted, unfinished or overshot.

The "ideal" FS has two steps. The first is the removal of all variables unrelated to the DV (the so-called all-relevant problem, a very hard task, unrelated to the model/classifier used); the second is to limit the set to only those variables which can be optimally used by the model (for instance, $e^Y$ and $Y$ are equally good in explaining $Y$, but a linear model will rather fail to use $e^Y$ in the general case) -- this one is called minimal-optimal.

The all-relevant level gives an insight into what really drives the given process, so it has explanatory value. The minimal-optimal level (by design) gives a non-overfitted model working on data as uncluttered as possible. Real-world FS just wants to achieve one of those goals (usually the latter).
7,995
Why is variable selection necessary?
Variable selection is necessary because most models don't deal well with a large number of irrelevant variables. These variables will only introduce noise into your model or, worse, cause you to overfit. It's a good idea to exclude these variables from the analysis.

Furthermore, you can't include all the variables that exist in every analysis, because there's an infinite number of them out there. At some point you have to draw the line, and it's good to do so in a rigorous manner. Hence all the discussion on variable selection.

Most of the issues with variable selection can be dealt with by cross-validation, or by using a model with built-in penalization and feature selection (such as the elastic net for linear models). If you're interested in some empirical results related to multiple variables causing overfitting, check out the results of the Don't Overfit competition on Kaggle.
7,996
Modelling longitudinal data where the effect of time varies in functional form between individuals
I would suggest looking at the following three directions:

- longitudinal clustering: this is unsupervised, but you use a k-means approach relying on the Calinski criterion for assessing the quality of the partitioning (package kml, and references included in the online help); so basically, it won't help identify a specific shape for an individual time course, but just separate homogeneous evolution profiles
- some kind of latent growth curve accounting for heteroscedasticity: my best guess would be to look at the extensive references around the Mplus software, especially the FAQ and mailing list. I've also heard of random-effect multiplicative heteroscedastic models (try googling around those keywords). I find these papers (1, 2) interesting, but I didn't look at them in detail. I will update with references on neuropsychological assessment once back in my office.
- functional PCA (fpca package), but it may be worth looking at functional data analysis more broadly

Other references (just browsed on the fly):
- Willett & Bull (2004), Latent Growth Curve Analysis -- the authors use LGC on non-linear reading trajectories
- Welch (2007), Model Fit and Interpretation of Non-Linear Latent Growth Curve Models -- a PhD thesis on modeling non-linear change in the context of latent growth modeling
- Berkey, C.S. and Laird, N.M. (1986), Nonlinear growth curve analysis: estimating the population parameters, Ann Hum Biol, 13(2), 111-28
- Rice (2003), Functional and Longitudinal Data Analysis: Perspectives on Smoothing
- Wu, Fan and MΓΌller (2007), Varying-Coefficient Functional Linear Regression
7,997
Modelling longitudinal data where the effect of time varies in functional form between individuals
I'd recommend taking a look at a couple of papers by Heping Zhang using adaptive splines for modeling longitudinal data:

- Multivariate adaptive splines for analysis of longitudinal data (free PDF)
- Mixed effects multivariate adaptive splines model for the analysis of longitudinal and growth curve data

In addition, see the MASAL page for software, including an R package.
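MASAL itself is an R package; as a rough, hypothetical illustration of the kind of basis such adaptive-spline (MARS-style) methods build on, one can regress a trajectory on truncated-linear "hinge" terms max(0, t - knot), which let the slope change at a knot (here the knot is fixed by hand, whereas MARS-type methods search for it adaptively):

```python
import numpy as np

def hinge_basis(t, knots):
    """Design matrix with an intercept, a linear term, and one
    truncated-linear (hinge) column per knot: max(0, t - knot)."""
    cols = [np.ones_like(t), t]
    cols += [np.maximum(0.0, t - k) for k in knots]
    return np.column_stack(cols)

def fit_hinge(t, y, knots):
    """Least-squares fit of y on the hinge basis."""
    X = hinge_basis(t, knots)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# piecewise-linear "growth" curve whose slope jumps from 1 to 3 at t = 5
t = np.arange(0.0, 10.0, 0.25)
y = np.where(t < 5, 1.0 * t, 5.0 + 3.0 * (t - 5))
beta = fit_hinge(t, y, knots=[5.0])
fitted = hinge_basis(t, [5.0]) @ beta
```

The hinge coefficient recovers the slope change (3 - 1 = 2) exactly, since the true curve lies in the span of the basis.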
Modelling longitudinal data where the effect of time varies in functional form between individuals
It looks to me like growth mixture models might have the potential to let you examine your error variance (PDF here). (I'm not sure what multiplicative heteroscedastic models are, but I will definitely have to check them out.)

Latent group-based trajectory models have become really popular lately in criminology. But many people simply take for granted that groups actually exist, and some astute research has pointed out that you will find groups even in random data. Note also that Nagin's group-based modelling approach does not allow you to assess your error (and honestly, I have never seen a model that would look anything like a discontinuity).

Although it would be difficult with 20 time points, for exploratory purposes creating simple heuristics to identify patterns could be helpful (e.g., always low or always high, coefficient of variation). I'm envisioning sparklines in a spreadsheet or parallel coordinates plots, but I doubt they would be helpful (I honestly have never seen a parallel coordinates plot that is very enlightening).

Good luck
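As a minimal sketch of the "simple heuristics" idea (the pooled-quartile cut-offs, function name, and toy data here are arbitrary assumptions, not from the answer), one could compute per-person descriptors such as the coefficient of variation and always-low / always-high flags:

```python
import numpy as np

def trajectory_heuristics(X, low=0.25, high=0.75):
    """Crude per-person descriptors for eyeballing short series:
    mean level, coefficient of variation, and 'always low' / 'always high'
    flags relative to the pooled quantiles (cut-offs are arbitrary)."""
    q_low, q_high = np.quantile(X, [low, high])
    means = X.mean(axis=1)
    sds = X.std(axis=1)
    cv = np.where(means != 0, sds / np.abs(means), np.inf)
    always_low = (X <= q_low).all(axis=1)
    always_high = (X >= q_high).all(axis=1)
    return means, cv, always_low, always_high

# 7 people x 20 time points: one always-low, one always-high, five in between
rng = np.random.default_rng(0)
X = np.vstack([
    np.full((1, 20), 1.0),            # flat and low
    np.full((1, 20), 9.0),            # flat and high
    rng.uniform(3, 7, size=(5, 20)),  # wandering in the middle
])
means, cv, lo, hi = trajectory_heuristics(X)
```

Flags like these could then drive conditional formatting next to sparklines in a spreadsheet, which is about as far as this heuristic approach goes.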
Modelling longitudinal data where the effect of time varies in functional form between individuals
Four years after asking this question, I've learnt a few things, so perhaps I should add a few ideas. I think Bayesian hierarchical modelling provides a flexible approach to this problem.

Software: Tools like JAGS, Stan, WinBUGS, and so on, potentially combined with their respective R interface packages (e.g., rjags, rstan), make it easier to specify such models.

Varying within-person error: Bayesian models make it easy to specify the within-person error variance as a random factor that varies between people. For example, you could model scores $y$ on participants $i=1,\ldots,n$ at time points $j=1,\ldots,J$ as
$$y_{ij}\sim N(\mu_i, \sigma^2_i)$$
$$\mu_i = \gamma_i$$
$$\gamma_i \sim N(\mu_\gamma, \sigma^2_\gamma)$$
$$\sigma_i \sim \rm{Gamma}(\alpha, \beta)$$
Thus the standard deviation of each person might be modelled as a gamma distribution. I have found this to be an important parameter in many psychological domains where people vary in how much they vary over time.

Latent classes of curves: I have not explored this idea as much yet, but it is relatively straightforward to specify two or more possible data-generating functions for each individual and then let the Bayesian model choose the most likely model for a given individual. Thus, you would typically get posterior probabilities for each individual regarding which functional form describes that individual's data. As a sketch of an idea for a model, you could have something like the following:
$$y_{ij} \sim N(\mu_{ij}, \sigma^2)$$
$$\mu_{ij} = \gamma_i \lambda_{ij}^{(1)} + (1 - \gamma_i) \lambda_{ij}^{(2)}$$
$$\lambda_{ij}^{(1)} = \theta^{(1)}_{1i} + \theta^{(1)}_{2i} \exp(-\theta^{(1)}_{3i} x_{ij})$$
$$\lambda_{ij}^{(2)} = \theta^{(2)}_{1i} + \theta^{(2)}_{2i} x_{ij} + \theta^{(2)}_{3i} x^2_{ij}$$
$$\gamma_i \sim \rm{Bernoulli}(\pi_i)$$
where $x_{ij}$ is time, $\lambda_{ij}^{(1)}$ represents expected values for a three-parameter exponential model, $\lambda_{ij}^{(2)}$ represents expected values for a quadratic model, and $\pi_i$ represents the probability that the model will choose $\lambda_{ij}^{(1)}$.
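Short of fitting the full Bayesian mixture, a crude sketch of the model-choice idea for one individual is to fit each functional form separately and turn plug-in Gaussian log-likelihoods into a weight via a softmax under equal priors. This is a rough approximation, not the posterior $\pi_i$ from the hierarchical model, and all function names here are hypothetical.

```python
import numpy as np

def loglik_gaussian(y, yhat):
    """Gaussian log-likelihood with sigma set to the residual SD (a crude plug-in)."""
    resid = y - yhat
    sigma = resid.std() + 1e-12
    n = len(y)
    return -0.5 * n * np.log(2 * np.pi * sigma**2) - 0.5 * np.sum(resid**2) / sigma**2

def fit_quadratic(t, y):
    """Fitted values from an ordinary degree-2 polynomial fit."""
    return np.polyval(np.polyfit(t, y, 2), t)

def fit_exponential(t, y, rates=np.linspace(0.05, 5.0, 100)):
    """Three-parameter exponential a + b*exp(-c*t): grid search over the
    rate c, with linear least squares for a and b at each candidate rate."""
    best = None
    for c in rates:
        X = np.column_stack([np.ones_like(t), np.exp(-c * t)])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        yhat = X @ beta
        sse = np.sum((y - yhat) ** 2)
        if best is None or sse < best[0]:
            best = (sse, yhat)
    return best[1]

def model_weight(t, y):
    """Weight on the exponential form under equal priors: a softmax over the
    two plug-in log-likelihoods (not a full posterior)."""
    l1 = loglik_gaussian(y, fit_exponential(t, y))
    l2 = loglik_gaussian(y, fit_quadratic(t, y))
    m = max(l1, l2)
    return np.exp(l1 - m) / (np.exp(l1 - m) + np.exp(l2 - m))

# two simulated individuals, one per functional form
t = np.linspace(0.0, 4.0, 20)
rng = np.random.default_rng(0)
y_exp = 5.0 - 3.0 * np.exp(-1.2 * t) + 0.05 * rng.normal(size=20)
y_quad = 1.0 + 0.5 * t + 0.3 * t**2 + 0.05 * rng.normal(size=20)
```

In a real JAGS/Stan fit the class indicator $\gamma_i$ is sampled jointly with all the curve parameters, so the posterior probabilities also reflect parameter uncertainty, which this plug-in comparison ignores.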
Modelling longitudinal data where the effect of time varies in functional form between individuals
John Fox has a great appendix available online on using nlme to look at longitudinal data, which may be useful for you: http://cran.r-project.org/doc/contrib/Fox-Companion/appendix-mixed-models.pdf There's a lot of great stuff there (and Fox's books are generally quite good!).