Is it cheating to drop the outliers based on the boxplot of Mean Absolute Error to improve a regression model
It is almost always cheating to remove observations to improve a regression model. You should drop observations only when you truly think that they are in fact outliers.

For instance, suppose you have a time series from the heart rate monitor connected to your smart watch. If you take a look at the series, it's easy to see that there would be erroneous observations with readings like 300 bpm. These should be removed, but not because you want to improve the model (whatever that means). They're reading errors which have nothing to do with your heart rate.

One thing to be careful about, though, is the correlation of errors with the data. In my example it could be argued that you get errors when the heart rate monitor is displaced during exercises such as running or jumping, which would make these errors correlated with the heart rate. In this case, care must be taken in removing these outliers and errors, because they are not occurring at random.

I'll give you a made-up example of when not to remove outliers. Let's say you're measuring the movement of a weight on a spring. If the weight is small relative to the strength of the spring, then you'll notice that Hooke's law works very well: $$F=-k\Delta x,$$ where $F$ is force, $k$ is the spring constant and $\Delta x$ is the displacement of the weight. Now if you put on a very heavy weight or displace the weight too much, you'll start seeing deviations: at large enough displacements $\Delta x$ the motion will seem to deviate from the linear model. So you might be tempted to remove the outliers to improve the linear model. This would not be a good idea, because the model itself is not working very well: Hooke's law is only approximately right.

UPDATE

In your case I would suggest pulling those data points and looking at them more closely. Could it be lab instrument failure? External interference? Sample defect? etc. Next, try to identify whether the presence of these outliers could be correlated with what you measure, as in the example I gave. If there's correlation then there's no simple way to go about it. If there's no correlation then you can remove the outliers.
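To make the spring example concrete, here is a small R sketch (all numbers are made up for illustration). A linear fit to data with a genuine cubic term leaves systematic deviations at large displacements; those points would look like "outliers" in a boxplot of errors, yet nothing is wrong with the observations, only with the model:

```r
# Made-up illustration of the spring example: Hooke's law plus a small
# cubic correction, so the linear model is only approximately right.
set.seed(1)
dx <- seq(-2, 2, length.out = 100)                # displacement
F  <- -5 * dx + 0.4 * dx^3 + rnorm(100, sd = 0.1) # force with nonlinearity

fit_lin <- lm(F ~ dx)                 # the "Hooke's law" model
fit_cub <- lm(F ~ dx + I(dx^3))       # model allowing the cubic term

# The large residuals of fit_lin at the extreme displacements are not
# bad data: they disappear once the model itself is improved.
summary(fit_lin)$sigma
summary(fit_cub)$sigma
```

Dropping the extreme-displacement points would make the linear fit look better while hiding the fact that the linear model is wrong.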
I originally wanted to post this as a comment to another answer, but it got too long to fit. When I look at your model, it doesn't necessarily contain one large group and some outliers. In my opinion, it contains one medium-sized group (between -1 and 1) and then six smaller groups, each found between two whole numbers. You can pretty clearly see that when reaching a whole number, there are fewer observations at those frequencies. The only special point is 0, where there isn't really a discernible drop in observations. In my opinion, it's worth addressing why this distribution is spread like this: Why does the distribution have these drops in observation counts at whole numbers? Why does this drop not happen at 0? What is so special about these outliers that makes them outliers? When measuring discrete human actions, you're always going to have outliers. It can be interesting to see why those outliers don't fit your model, and how they can be used to improve future iterations of your model.
There are pros and cons to removing outliers and building a model for the "normal pattern" only. Pros: the model performance is better. The intuition is that it is very hard to use ONE model to capture both the "normal pattern" and the "outlier pattern", so we remove outliers and say we only build a model for the "normal pattern". Cons: we will not be able to predict for outliers. In other words, if we put our model in production, there would be some missing predictions from the model. I would suggest removing the outliers and building the model, and, if possible, trying to build a separate model for the outliers only. As for the word "cheating": if you are writing a paper, explicitly state how you define and remove outliers, and mention that the improved performance is on the clean data only. Then it is not cheating.
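One mechanical version of this two-model idea (an illustrative sketch only; the data and the 1.5 × IQR boxplot rule on absolute errors are assumptions, not a recommendation for every problem) looks like this in R:

```r
# Hypothetical sketch: separate the "normal pattern" from the "outlier
# pattern" using the boxplot rule on absolute residuals, then refit.
set.seed(1)
x <- runif(200)
y <- 2 * x + rnorm(200, sd = 0.1)
y[1:10] <- y[1:10] + 3                      # contaminate 10 points

fit_all <- lm(y ~ x)
ae      <- abs(resid(fit_all))              # absolute errors
cutoff  <- quantile(ae, 0.75) + 1.5 * IQR(ae)
normal  <- ae <= cutoff                     # boxplot rule for "outliers"

fit_normal <- lm(y ~ x, subset = normal)    # model for the "normal pattern"
# If you report the improved fit, state explicitly that it is measured
# on the cleaned subset only.
```

The flagged points could then be examined separately, or given their own model, rather than silently discarded.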
I believe it is only reasonable to remove outliers when one has a solid qualitative reason for doing so. By this I mean that one has information that another variable, one that is not in the model, is impacting the outlier observations. Then one has the choice of removing the outlier or adding additional variables. I find that when I have outlier observations in my dataset, by studying why the outliers exist I learn more about my data and about other possible models to consider.
I'm not even convinced that they are "outliers". You might want to make a normal probability plot. Are they data or residuals from fitting a model?
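For instance (a sketch with simulated numbers, not your data), a normal Q-Q plot in R quickly shows whether apparent outliers are just the tails of a heavier-tailed distribution:

```r
# Sketch: heavy-tailed "residuals" look like outliers in a boxplot,
# but a normal Q-Q plot shows the whole tail peeling away from the line.
set.seed(1)
r <- rt(300, df = 3)            # hypothetical residuals, heavy-tailed

qq <- qqnorm(r, plot.it = FALSE)  # theoretical vs sample quantiles
qqnorm(r)                         # draw the plot
qqline(r)                         # reference line through the quartiles
# Points drifting away from the line in both tails suggest a
# heavy-tailed distribution rather than isolated bad observations.
```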
Box-and-Whisker Plot for Multimodal Distribution
The problem is that the usual boxplot* generally can't give an indication of the number of modes. While in some (generally rare) circumstances it is possible to get a clear indication that the smallest number of modes exceeds 1, more usually a given boxplot is consistent with one or any larger number of modes.

* Several modifications of the usual kinds of boxplot have been suggested which do more to indicate changes in density and can be used to identify multiple modes, but I don't think those are the purpose of this question.

For example, while the first plot [figures omitted] does indicate the presence of at least two modes (the data were generated so as to have exactly two), conversely the second has two very clear modes in its distribution, but you simply can't tell that from the boxplot at all.

Boxplots don't necessarily convey a lot of information about the distribution. In the absence of any marked points outside the whiskers, they contain only five values, and a five-number summary doesn't pin down the distribution much. However, the first figure above shows a case where the cdf is sufficiently "pinned down" to essentially rule out a unimodal distribution (at least at the sample size of $n=100$): no unimodal cdf is consistent with the constraints on the cdf in that case, which require a relatively sharp rise in the first quarter, a flattening out to (on average) a small rate of increase in the middle half, and then another sharp rise in the last quarter.

Indeed, the five-number summary doesn't tell us a great deal in general: figure 1 here (in what I believe is a working paper later published in [1]) shows four different data sets with the same box plot. I don't have that data to hand, but it's a trivial matter to make a similar data set. As indicated in the link above related to the five-number summary, we need only constrain our distributions to lie within the rectangular boxes that the five-number summary restricts us to. Here's R code which will generate similar data to that in the paper:

x1 = qnorm(ppoints(1:100, a = -.072377))
x1 = x1/diff(range(x1))*18 + 10
b  = fivenum(x1)   # all of the data has this five-number summary
x2 = qnorm(ppoints(1:48)); x2 = x2/diff(range(x2))*.6
x2 = c(b[1], x2 + b[2], .31 + b[2], b[4] - .31, x2 + b[4], b[5])
d  = .1183675
x3 = ((0:34) - 34/2)/34*(9 - d) + (5.5 - d/2)
x3 = c(x3, rep(9.5, 15), rep(10.5, 15), 20 - x3)
x4 = c(1, rep(b[2], 24), (0:49)/49*(b[4] - b[2]) + b[2], (0:24)/24*(b[5] - b[4]) + b[4])

[A display similar to that in the paper, showing all four boxplots of the above data, is omitted here.]

There's a somewhat similar set of displays in Matejka & Fitzmaurice (2017) [2], though they don't seem to have a very skewed example like x4 (they do have some mildly skewed examples), and they do have some trimodal examples not in [1]; the basic point of the examples is the same.

Beware, however: histograms can have problems too. Indeed, we see one of their problems here, because the distribution in the third "peaked" histogram is actually distinctly bimodal; the histogram bin width is simply too wide to show it. Further, as Nick Cox points out in comments, kernel density estimates may also affect the impression of the number of modes (sometimes smearing out modes, or sometimes suggesting small modes where none exist in the original distribution). One must take care with the interpretation of many common displays.

There are modifications of the boxplot that can better indicate multimodality (vase plots, violin plots and bean plots, among numerous others). In some situations they may be useful, but if I'm interested in finding modes I'll usually look at a different sort of display. Boxplots are better when interest focuses on comparisons of location and spread (and often perhaps skewness†) rather than the particulars of distributional shape. If multimodality is important to show, I'd suggest looking at displays that are better at showing that; the precise choice of display depends on what you most want it to show well.

† But not always: the fourth data set (x4) in the example data above shows that you can easily have a distinctly skewed distribution with a perfectly symmetric boxplot.

[1]: Choonpradub, C., & McNeil, D. (2005), "Can the boxplot be improved?", Songklanakarin J. Sci. Technol., 27:3, pp. 649-657. http://www.jourlib.org/paper/2081800
[2]: Matejka, J., & Fitzmaurice, G. (2017), "Same Stats, Different Graphs: Generating Datasets with Varied Appearance and Identical Statistics through Simulated Annealing", Proceedings of the 2017 CHI Conference on Human Factors in Computing Systems (CHI '17), ACM, New York, pp. 1290-1294. https://doi.org/10.1145/3025453.3025912
There are multiple options for detecting multimodality with R. Data for the charts below was generated with three modes (-3, 0, 1). The boxplot is clearly outperformed by the others (the violin plot looks like it has different default kernel density settings), but none really distinguish between the 0 and 1 modes. There are really few reasons to use boxplots anymore in the computer age. Why throw out information?

dat <- c(rnorm(500, -3, 1), rnorm(200, 0, 1), rnorm(300, 1, 1))
par(mfrow = c(2, 2))
boxplot(dat, horizontal = TRUE, main = "Boxplot")
require(beanplot)
beanplot(dat, horizontal = TRUE, main = "Beanplot")
require(viopoints)
viopoints(dat, horizontal = TRUE, main = "Viopoints")
require(vioplot)
vioplot(dat, horizontal = TRUE)
title("Violin Plot")
Time spent in an activity as an independent variable
To expand a bit on the answer of @ken-butler: by adding both the continuous variable (hours) and an indicator variable for a special value (hours = 0, or non-breastfeeding), you say that there is a linear effect for the "non-special" values and a discrete jump in the predicted outcome at the special value.

It helps (for me at least) to look at a graph. In the example below we model hourly wage as a function of hours per week that the respondents (all females) work, and we think that there is something special about "the standard" 40 hours per week. The code that produced this graph (in Stata) can be found here: http://www.stata.com/statalist/archive/2013-03/msg00088.html

So in this case we have assigned the continuous variable a value of 40 even though we wanted it to be treated differently from the other values. Similarly, you would give your weeks breastfeeding the value 0 even though you think it is qualitatively different from the other values. I interpret your comment below as saying that you think this is a problem. It is not, and you do not need to add an interaction term. In fact, that interaction term would be dropped due to perfect collinearity if you tried. This is not a limitation; it just tells you that the interaction term does not add any new information.

Say your regression equation looks like this: $$ \hat{y} = \beta_1 weeks\_breastfeeding + \beta_2 non\_breastfeeding + \cdots $$ where $weeks\_breastfeeding$ is the number of weeks breastfeeding (including the value 0 for those that do not breastfeed) and $non\_breastfeeding$ is an indicator variable that is 1 when someone does not breastfeed and 0 otherwise.

Consider what happens when someone is breastfeeding. The regression equation simplifies to: $$ \hat{y} = \beta_1 weeks\_breastfeeding + \beta_2 \cdot 0 + \cdots = \beta_1 weeks\_breastfeeding + \cdots $$ So $\beta_1$ is just the linear effect of the number of weeks breastfeeding for those that do breastfeed.

Now consider what happens when someone is not breastfeeding: $$ \hat{y} = \beta_1 \cdot 0 + \beta_2 \cdot 1 + \cdots = \beta_2 + \cdots $$ So $\beta_2$ gives you the effect of not breastfeeding, and the number of weeks breastfeeding drops out of the equation. You can see that there is no use in adding an interaction term, as that interaction term is already (implicitly) in there.

There is something weird about $\beta_2$, though, as it measures the effect of not breastfeeding by comparing the expected outcome of those who do not breastfeed with those that breastfeed but do so for only 0 weeks... It kind of makes sense in a "compare like with like" sort of way, but the practical usefulness is not immediately obvious. It may make more sense to compare the "non-breastfeeders" with those women that were breastfeeding for 12 weeks (approx. 3 months). In that case you just give the "non-breastfeeders" the value 12 for $weeks\_breastfeeding$. So the value you assign to $weeks\_breastfeeding$ for the "non-breastfeeders" does influence the regression coefficient $\beta_2$, in the sense that it determines with whom the "non-breastfeeders" are compared. Instead of a problem, this is actually something that can be quite useful.
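A minimal R sketch of this specification (all data simulated; the variable names mirror the equations above) also demonstrates the collinearity point: the product of the two regressors is identically zero, so lm() drops the interaction with an NA coefficient.

```r
# Simulated sketch of the two-term specification discussed above.
set.seed(1)
n      <- 500
weeks  <- ifelse(runif(n) < 0.3, 0, sample(1:52, n, replace = TRUE))
non_bf <- as.numeric(weeks == 0)          # 1 = did not breastfeed
y      <- 1 + 0.05 * weeks - 0.8 * non_bf + rnorm(n)

fit <- lm(y ~ weeks + non_bf)             # beta_1 and beta_2 as in the text
coef(fit)

# The interaction regressor weeks * non_bf is zero for everyone,
# so it is dropped (NA coefficient) due to perfect collinearity.
fit_int <- lm(y ~ weeks * non_bf)

# Recentre so beta_2 compares non-breastfeeders with 12-week breastfeeders:
weeks12 <- ifelse(non_bf == 1, 12, weeks)
fit12   <- lm(y ~ weeks12 + non_bf)
```

The slope on weeks is unchanged by the recentring; only the interpretation (and value) of the indicator's coefficient shifts.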
Time spent in an activity as an independent variable
To expand a bit on the answer of @ken-butler. By adding both the continuous variable (hours) and an indicator variable for a special value (hours = 0, or non-breastfeeding), you think that there is a
Time spent in an activity as an independent variable To expand a bit on the answer of @ken-butler. By adding both the continuous variable (hours) and an indicator variable for a special value (hours = 0, or non-breastfeeding), you think that there is a linear effect for the "non-special" value and a discrete jump in the predicted outcome at the special value. It helps (for me at least) to look at a graph. In the example below we model hourly wage as a function of hours per week that the respondents (all females) work, and we think that there is something special about "the standard" 40 hours per week: The code that produced this graph (in Stata) can be found here: http://www.stata.com/statalist/archive/2013-03/msg00088.html So in this case we have assigned the continuous variable a value 40 even though we wanted it to be treated differently from the other values. Similarly, you would give your weeks breastfeeding the value 0 even though you think it is qualitatively different from the other values. I interpret your comment below that you think that this is a problem. This is not the case and you do not need to add an interaction term. In fact, that interaction term will be dropped due to perfect collinearity if you tried. This is not a limitation, it just tells you that the interaction terms does not add any new information. Say your regression equation looks like this: $$ \hat{y} = \beta_1 weeks\_breastfeeding + \beta_2 non\_breastfeeding + \cdots $$ Where $weeks\_breastfeeding$ is the number of weeks breastfeeding (including the value 0 for those that do not breastfeed) and $non\_breastfeeding$ is an indicator variable that is 1 when someone does not breastfeed and 0 otherwise. Consider what happens when someone is breastfeeding. 
The regression equation simplifies to: $$ \hat{y} = \beta_1 weeks\_breastfeeding + \beta_2 0 + \cdots \\ = \beta_1 weeks\_breastfeeding + \cdots $$ So $\beta_1$ is just a linear effect of the number of weeks breastfeeding for those that do breastfeed. Consider what is hapening when someone is not breastfeeding: $$ \hat{y} = \beta_1 0 + \beta_2 1 + \cdots \\ = \beta_2 + \cdots $$ So $\beta_2$ gives you the effect of not breastfeeding and the number of weeks breastfeeding drops from the equation. You can see that there is no use to add an interaction term, as that interaction term is already (implicitly) in there. There is however something weird about $\beta_2$ though, as it measures the effect of breastfeeding by comparing the expected outcome of those who do not breastfeed with those that breastfeed but do so only 0 weeks... It kind of makes sense in a "compare like with like" sort of way, but the practical usefulness is not immediatly obvious. It may make more sense to compare the "non-breastfeeders" with those women that were breastfeeding 12 weeks (approx. 3 months). In that case you just give the "non-breastfeeders" the value 12 for $weeks\_breastfeeding$. So the value you assigning to $weeks\_breastfeeding$ for the "non-breastfeeders" does influence the regression coefficient $\beta_2$ in the sense that it determines with whom the "non-breastfeeders" are compared. Instead of a problem, this is actually something that can be quite useful.
18,209
Time spent in an activity as an independent variable
Something simple: represent your variable by a 1/0 indicator for any/none, and the actual value. Put both into the regression.
18,210
Time spent in an activity as an independent variable
If you put a binary indicator for any-time-spent (=1) vs. no-time-spent (=0) and then include the amount of time spent as a continuous variable, the different effect of "0" times will be "picked up" by the 0-1 indicator.
18,211
Time spent in an activity as an independent variable
You can use mixed-effects models with a grouping that is based on 0 time vs. nonzero time, and keep your independent variable.
18,212
Time spent in an activity as an independent variable
If you are using a Random Forest or a Neural Network, coding this number as 0 is OK, because they will be able to figure out that 0 is distinctly different from the other values (if it is in fact different). Another option is adding a categorical yes/no variable in addition to the time variable. But all in all, in this particular case I don't see a real issue - 0.1 weeks of breastfeeding is close to 0 and its effect will be very similar, so it looks like a pretty continuous variable to me, with 0 not standing out as something distinct.
18,213
Time spent in an activity as an independent variable
Tobit model is what you want, I think.
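For reference, the type I Tobit model treats the observed outcome as a censored version of a latent linear index (a textbook sketch, not tailored to the breastfeeding question; whether "never breastfed" is really censoring at zero, rather than a qualitatively different state, is a modeling judgment):

$$ y_i^* = \mathbf{x}_i'\boldsymbol{\beta} + \varepsilon_i, \quad \varepsilon_i \sim N(0,\sigma^2), \qquad y_i = \max(0,\, y_i^*) $$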
18,214
How to find residuals and plot them
EDIT: You have an R tag but then in a comment say you don't know much about it. This is R code. I know nothing about Sage. End edit

You can do this:

    x = c(21,34,6,47,10,49,23,32,12,16,29,49,28,8,57,9,31,10,21,
          26,31,52,21,8,18,5,18,26,27,26,32,2,59,58,19,14,16,9,23,
          28,34,70,69,54,39,9,21,54,26)
    y = c(47,76,33,78,62,78,33,64,83,67,61,85,46,53,55,71,59,41,82,
          56,39,89,31,43,29,55,81,82,82,85,59,74,80,88,29,58,71,60,
          86,91,72,89,80,84,54,71,75,84,79)

    m1 <- lm(y ~ x)           # Create a linear model
    resid(m1)                 # List of residuals
    plot(density(resid(m1)))  # A density plot
    qqnorm(resid(m1))         # A quantile-normal plot - good for checking normality
    qqline(resid(m1))
18,215
How to transform ordinal data from questionnaire into proper interval data?
This response will discuss possible models from a measurement perspective, where we are given a set of observed (manifest) interrelated variables, or measures, whose shared variance is assumed to measure a well-identified but not directly observable construct (generally, in a reflective manner), which will be considered as a latent variable. If you are unfamiliar with latent trait measurement models, I would recommend the following two articles: The attack of the psychometricians, by Denny Borsboom, and Latent Variable Modelling: A Survey, by Anders Skrondal and Sophia Rabe-Hesketh. I will first make a slight digression with binary indicators before dealing with items with multiple response categories. One way to transform ordinal-level data into an interval scale is to use some kind of item response model. A well-known example is the Rasch model, which extends the idea of the parallel test model from classical test theory to cope with binary-scored items through a generalized (with logit link) mixed-effects linear model (in some of the 'modern' software implementations), where the probability of endorsing a given item is a function of 'item difficulty' and 'person ability' (assuming there is no interaction between one's location on the latent trait being measured and item location on the same logit scale--which could be captured through an additional item discrimination parameter, or an interaction with individual-specific characteristics, which is called differential item functioning). The underlying construct is assumed to be unidimensional, and the logic of the Rasch model is just that the respondent has a certain 'amount of the construct'--let's talk about the subject's liability (his/her 'ability'), and call it $\theta$--as does any item that defines this construct (their 'difficulty'). What is of interest is the difference between respondent location and item location on the measurement scale, $\theta$. 
To give a concrete example, consider the following question: "I found it hard to focus on anything other than my anxiety" (yes/no). A person suffering from anxiety disorders is more likely to answer positively to this question compared to a random individual taken from the general population with no past history of depression or anxiety-related disorders. An illustration of 29 item response curves derived from a large-scale US study that aims to build a calibrated item bank assessing anxiety-related disorders (1,2) is shown below. The sample size is $N=766$; exploratory factor analysis confirmed the unidimensionality of the scale (with the first eigenvalue largely above the second (by a 17-fold amount), and an unreliable 2nd factor axis (eigenvalue just above 1), as confirmed by parallel analysis), and the scale shows a reliability index in the acceptable range, as assessed by Cronbach's alpha ($\alpha=0.971$, with 95% bootstrap CI $[0.967;0.975]$). Initially, five response categories were proposed (1 = 'Never', 2 = 'Rarely', 3 = 'Sometimes', 4 = 'Often', and 5 = 'Always') for each item. We will here only consider binary-scored responses. (Here, responses to Likert-type items have been recoded as binary responses (1/2=0, 3-5=1), and we consider that each item is equally discriminative across individuals, hence the parallelism between item curve slopes (Rasch model).) 
As can be seen, people located to the right of the $x$-axis, which reflects the latent trait (anxiety), who are thought to express more of this trait, are more likely to answer positively to questions like "I felt terrified" (terrific) or "I had sudden feelings of panic" (panic) than people located to the left (normal population, unlikely to be considered as cases); on the other hand, it is not unlikely that someone from the general population would report having trouble falling asleep (sleeping): for someone located in the intermediate range of the latent trait, say 0 logit, his/her probability of scoring 3 or higher is about 0.5 (which is the item difficulty). For polytomous items with ordered categories, there are several choices: the partial credit model, the rating scale model, or the graded response model, to name but a few that are mostly used in applied research. The first two belong to the so-called "Rasch family" of IRT models and share the following properties: (a) monotonicity of the response probability function (item/category response curve), (b) sufficiency of the total individual score (with the latent parameter considered as fixed), (c) local independence, meaning that responses to items are independent conditional on the latent trait, and (d) absence of differential item functioning, meaning that, conditional on the latent trait, responses are independent of external individual-specific variables (e.g., gender, age, ethnicity, SES). Extending the previous example to the case where the five response categories are effectively accounted for, a patient will have a higher probability of choosing response categories 3 to 5, compared to someone sampled from the general population without any antecedent of anxiety-related disorders. Compared to the modeling of dichotomous items described above, these models consider either cumulative (e.g., odds of answering 3 vs. 2 or less) or adjacent-category thresholds (odds of answering 3 vs. 
2), which is also discussed in Agresti's Categorical Data Analysis (chapter 12). The main difference between the aforementioned models lies in the way transitions from one response category to the next are handled: the partial credit model does not assume that the difference between any given threshold location and the mean of the threshold locations on the latent trait is equal or uniform across items, contrary to the rating scale model. Another subtle difference between those models is that some of them (like the unconstrained graded response or partial credit model) allow for unequal discrimination parameters between items. See Applying item response theory modeling for evaluating questionnaire item and scale properties, by Reeve and Fayers, or The Basics of Item Response Theory, by Frank B. Baker, for more details. Because in the preceding case we discussed the interpretation of response probability curves for dichotomously scored items, let's look at item response curves derived from a graded response model, highlighting the same target items (an unconstrained graded response model, allowing for unequal discrimination among items). Here, the following observations deserve some consideration: Response categories for the 'sleeping' item are less discriminative than, say, the ones attached to 'terrific': in the case of 'sleeping', for two persons located at the two extrema of the interval $[2;2.5]$ on the latent trait (in logit units), the probability of choosing the fourth response ("often had difficulty sleeping") goes from approx. 0.35 to 0.4; with 'terrific', that probability goes from less than 0.1 to about 0.25 (dashed blue line). If you want to discriminate between two patients showing signs of anxiety, the latter item is more informative. There is an overall shift, from left to right, between items assessing sleep quality and those assessing more severe conditions, although sleeping disorders are not uncommon. 
This is expected: after all, even people in the general population might experience some difficulty falling asleep, independent of their health state, whereas people severely depressed or anxious are likely to exhibit such problems. However, 'normal persons' (if this ever had any meaning) are unlikely to show signs of panic disorder (the probability that they choose the highest response category is near zero for people located up to the intermediate range of the latent trait, $[0;1]$). In both cases discussed above, this $\theta$ scale, which reflects individual liability on the assumed latent trait, has the property of an interval scale. Besides being thought of as truly measurement models, what makes Rasch models attractive is that sum scores, as sufficient statistics, can be used as surrogates for the latent scores. Moreover, the sufficiency property readily implies the separability of model (person and item) parameters (in the case of polytomous items, one should not forget that everything applies at the level of item response categories), hence conjoint additivity. A good review of the IRT model hierarchy, with R implementation, is available in Mair and Hatzinger's article published in the Journal of Statistical Software: Extended Rasch Modeling: The eRm Package for the Application of IRT Models in R. Other models include log-linear models, non-parametric models like the Mokken model, and graphical models. Apart from R, I am not aware of Excel implementations, but several statistical packages were proposed on this thread: How to get started with applying item response theory and what software to use? Finally, if you want to study the relationships between a set of items and a response variable without resorting to a measurement model, some form of variable quantization through optimal scaling can be interesting too. Apart from R implementations discussed in those threads, SPSS solutions were also proposed on related threads. 
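As a minimal numeric illustration of the dichotomous Rasch model described above (a Python sketch with two made-up item difficulties, not the PROMIS items):

```python
import numpy as np

def rasch_p(theta, b):
    """Rasch model: probability that a person at location theta
    endorses an item of difficulty b (both on the logit scale)."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

theta = np.linspace(-4, 4, 9)
easy, hard = 0.0, 2.0  # hypothetical 'sleeping' vs. 'panic' difficulties

# A person located exactly at an item's difficulty endorses it with p = 0.5
print(rasch_p(0.0, easy))  # 0.5

# The harder item is endorsed less often at every theta; on the logit
# scale the curves are parallel (equal discrimination, the Rasch assumption)
print(bool(np.all(rasch_p(theta, hard) < rasch_p(theta, easy))))  # True
```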
References

1. Pilkonis, P., Choi, S., Reise, S., Stover, A., Riley, W. et al. (2011). Item banks for measuring emotional distress from the patient-reported outcomes measurement information system (PROMIS): Depression, anxiety, and anger. Assessment, 18(3), 263–283.
2. Choi, S., Gibbons, L. and Crane, P. (2011). lordif: An R package for detecting differential item functioning using iterative hybrid ordinal logistic regression/item response theory and Monte Carlo simulations. Journal of Statistical Software, 39(8).
18,216
How to transform ordinal data from questionnaire into proper interval data?
In his book Analysis of Ordinal Categorical Data, Alan Agresti covers several. One of them is ridits, which I discuss on my blog.
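For concreteness, a ridit score for category $i$ is the cumulative proportion of a reference distribution below $i$ plus half the proportion in $i$. A short Python sketch with hypothetical Likert counts (not an example taken from Agresti's book):

```python
import numpy as np

# Hypothetical counts for a 5-point Likert item in a reference group
counts = np.array([10, 20, 40, 20, 10])
p = counts / counts.sum()

# Ridit for category i: proportion below i plus half the proportion in i
cum_below = np.concatenate(([0.0], np.cumsum(p)[:-1]))
ridits = cum_below + 0.5 * p
print(ridits)  # [0.05 0.2 0.5 0.8 0.95]

# The probability-weighted mean ridit of the reference group is always 0.5
print((ridits * p).sum())  # ~0.5
```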
18,217
Can one use multiple regression to predict one principal component (PC) from several other PCs?
A principal component is a weighted linear combination of all your factors (X's). Example: PC1 = 0.1X1 + 0.3X2. There will be one component for each factor (though in general a small number are selected). The components are created such that they have zero correlation (are orthogonal) by design. Therefore, component PC1 should not explain any variation in component PC2. You may want to do regression on your Y variable and the PCA representation of your X's, as they will not have multicollinearity. However, this could be hard to interpret. If you have more X's than observations, which breaks OLS, you can regress on your components and simply select a smaller number of the highest-variation components. Principal Component Analysis by Jolliffe is a very in-depth and highly cited book on the subject. This is also good: http://www.statsoft.com/textbook/principal-components-factor-analysis/
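A brief Python sketch of both points (simulated data; PCA done via numpy's SVD): the component scores are mutually uncorrelated, and a regression of Y on the first few components sidesteps the multicollinearity in the original X's.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 5

# Hypothetical correlated predictors: 5 X's driven by 2 underlying factors
Z = rng.normal(size=(n, 2))
X = Z @ rng.normal(size=(2, p)) + 0.1 * rng.normal(size=(n, p))
y = X[:, 0] - 2 * X[:, 1] + rng.normal(0, 0.5, size=n)

# PCA via SVD of the centered data; columns of `scores` are the PCs
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
scores = Xc @ Vt.T

# By construction the PC scores are mutually uncorrelated
C = np.cov(scores, rowvar=False)
print(np.allclose(C, np.diag(np.diag(C))))  # True: off-diagonals ~ 0

# Principal component regression: keep only the 2 highest-variance PCs
k = 2
beta, *_ = np.linalg.lstsq(
    np.column_stack([np.ones(n), scores[:, :k]]), y, rcond=None)
```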
18,218
Can one use multiple regression to predict one principal component (PC) from several other PCs?
Principal components are orthogonal by definition, so any pair of PCs will have zero correlation. However, PCA can be used in regression if there are a large number of explanatory variables. These can be reduced to a small number of principal components and used as predictors in a regression.
18,219
Can one use multiple regression to predict one principal component (PC) from several other PCs?
Careful... just because the PCs are by construction orthogonal to each other does not mean that there is not a pattern, or that one PC cannot appear to "explain" something about the other PCs. Consider 3D data (X,Y,Z) describing a large number of points distributed evenly on the surface of an American football (it is an ellipsoid -- not a sphere -- for those who have never watched American football). Imagine that the football is in an arbitrary configuration so that neither X nor Y nor Z is along the long axis of the football. Principal components will place PC1 along the long axis of the football, the axis that describes the most variance in the data. For any point in the PC1 dimension along the long axis of the football, the planar slice represented by PC2 and PC3 should describe a circle, and the radius of this circular slice depends on the PC1 dimension. It is true that regressions of PC2 or PC3 on PC1 should give a zero coefficient globally, but not over smaller sections of the football... and it is clear that a 2D graph of PC1 and PC2 would show an "interesting" limiting boundary that is two-valued, nonlinear, and symmetric.
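A 2-D analogue of the football makes this concrete and is easy to verify without any libraries: points spaced around a rotated ellipse (axis lengths and rotation chosen arbitrarily here) have principal components that are globally uncorrelated, yet the second component is, up to sign, a deterministic function of the first.

```python
import math

# 2-D analogue of the football: points around an ellipse (long axis a,
# short axis b), rotated by phi so neither raw coordinate lines up with
# a principal axis. The constants are arbitrary illustrative choices.
a, b, phi = 3.0, 1.0, 0.7
n = 1000
xs, ys = [], []
for i in range(n):
    t = 2 * math.pi * i / n
    u, v = a * math.cos(t), b * math.sin(t)
    xs.append(u * math.cos(phi) - v * math.sin(phi))
    ys.append(u * math.sin(phi) + v * math.cos(phi))

def mean(w):
    return sum(w) / len(w)

def cov(p, q):
    mp, mq = mean(p), mean(q)
    return sum((pi - mp) * (qi - mq) for pi, qi in zip(p, q)) / len(p)

def corr(p, q):
    return cov(p, q) / math.sqrt(cov(p, p) * cov(q, q))

# Closed-form PCA for the symmetric 2x2 covariance matrix.
theta = 0.5 * math.atan2(2 * cov(xs, ys), cov(xs, xs) - cov(ys, ys))
c, s = math.cos(theta), math.sin(theta)
pc1 = [ c * x + s * y for x, y in zip(xs, ys)]
pc2 = [-s * x + c * y for x, y in zip(xs, ys)]

# Globally the components are uncorrelated, so a linear regression of
# PC2 on PC1 finds nothing...
lin = corr(pc1, pc2)
# ...yet on the ellipse pc2^2 = b^2 * (1 - pc1^2 / a^2), so the squared
# scores are perfectly (negatively) linearly related.
sq = corr([p * p for p in pc1], [q * q for q in pc2])
print(round(lin, 3), round(sq, 3))
```

The first correlation is numerically zero while the second is essentially -1: zero linear correlation, but a completely determined nonlinear pattern, just as with the football's boundary.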
18,220
Can one use multiple regression to predict one principal component (PC) from several other PCs?
If your data is high dimensional and noisy, and you don't have a large number of samples, you run into the danger of overfitting. In such cases, it does make sense to use PCA (which can capture a dominant part of data variance; orthogonality isn't an issue) or factor analysis (which can find the true explanatory variables underlying the data) to reduce data dimensionality and then train a regression model with them. For factor analysis based approaches, see this paper Bayesian Factor Regression Model, and a nonparametric Bayesian version of this model that does not assume that you a priori know the "true" number of relevant factors (or principal components in case of PCA). I'd add that in many cases, supervised dimensionality reduction (e.g., Fisher Discriminant Analysis) can give improvements over simple PCA or FA based approaches, because you can make use of the label information while doing dimensionality reduction.
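As an illustration of the PCA-then-regress idea (a plain sketch, not the Bayesian factor models cited above), here is a dependency-free example with made-up data: thirty noisy predictors all driven by one latent factor, with only forty observations. The first principal component is extracted by power iteration and the response is regressed on its scores alone.

```python
import random

random.seed(2)

# Hypothetical data: p = 30 noisy predictors driven by one latent factor,
# with only n = 40 observations -- uncomfortable territory for plain OLS
# on all 30 columns.
n, p = 40, 30
latent = [random.gauss(0, 1) for _ in range(n)]
X = [[latent[i] + random.gauss(0, 0.5) for _ in range(p)] for i in range(n)]
y = [2.0 * latent[i] + random.gauss(0, 0.3) for i in range(n)]

# Centre each column of X.
for j in range(p):
    mj = sum(row[j] for row in X) / n
    for row in X:
        row[j] -= mj

# Power iteration on X^T X gives the leading eigenvector: the PC1 loadings.
w = [1.0] * p
for _ in range(200):
    v = [sum(row[j] * w[j] for j in range(p)) for row in X]        # v = X w
    w = [sum(X[i][j] * v[i] for i in range(n)) for j in range(p)]  # w = X^T v
    norm = sum(wj * wj for wj in w) ** 0.5
    w = [wj / norm for wj in w]

# PC1 scores, then simple least squares of y on the scores alone.
scores = [sum(row[j] * w[j] for j in range(p)) for row in X]
ybar = sum(y) / n
slope = sum(sc * (yi - ybar) for sc, yi in zip(scores, y)) / sum(sc * sc for sc in scores)
pred = [ybar + slope * sc for sc in scores]
r2 = 1 - sum((yi - pi) ** 2 for yi, pi in zip(y, pred)) / sum((yi - ybar) ** 2 for yi in y)
print(round(r2, 2))  # R^2 of the one-component fit
```

A single fitted slope on one component recovers most of the signal here, instead of estimating thirty coefficients from forty points.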
18,221
Can one use multiple regression to predict one principal component (PC) from several other PCs?
You might pull it off if the predicted PC score was extracted from different variables, or cases, than the predictor PC scores. If that is the case, the predicted and predictor components will not be orthogonal, or at least they need not be; correlation is, of course, not guaranteed.
18,222
Can I (justifiably) train a second model only on the observations that a previous model predicted poorly?
As noticed in the comments, you’ve re-discovered boosting. Nothing wrong with this approach, but usually it’s easier and safer to use a method already implemented and battle-tested by someone else than starting from scratch. If you really want to use your approach, I’d encourage you to first use some out-of-the-box implementation of boosting (AdaBoost, XGBoost, CatBoost, etc) to use it as a benchmark.
18,223
Can I (justifiably) train a second model only on the observations that a previous model predicted poorly?
As was mentioned in the comments, this idea of iteratively learning from previous model errors is at the core of boosting methodologies like AdaBoost or gradient boosting. As you theorize, the idea is prone to overfitting in certain models like trees, but it actually regularizes a model such as linear regression (although I would just do standard regularization like L2 normally). The algorithms that do well with this are typically trees (xgboost or lightgbm are the go-to hammers in the data science community) or some approach which partitions your data. This is because each time you refit the model you get new splits and the tree can learn new things, whereas linear regression just updates your coefficients, so you aren't actually adding any complexity. Adding two regression models just averages the coefficients, but adding two tree models gives you a new tree. This is similar to bagging predictors: bagging linear models will converge to fitting on the whole set, whereas bagging trees actually benefits you in terms of the bias-variance tradeoff. In terms of NNs, I believe there is some theory connecting gradient boosting to residual networks and similar architectures; see this question on it. My recommendation is just use lightgbm or xgboost!
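To see the mechanics without any library, here is a bare-bones sketch of gradient boosting for squared error on toy 1-D data: each round fits a depth-1 tree (a stump) to the current residuals, i.e. trains the next model on what the ensemble so far got wrong, exactly the idea in the question.

```python
import math
import random

random.seed(3)

# Toy 1-D regression data (made up for illustration).
n = 150
xs = [random.uniform(0, 10) for _ in range(n)]
ys = [math.sin(x) + random.gauss(0, 0.1) for x in xs]

def fit_stump(xv, resid):
    """Depth-1 tree: the single split minimising squared error of resid."""
    order = sorted(range(len(xv)), key=lambda i: xv[i])
    best = None
    for k in range(1, len(xv)):
        cut = xv[order[k]]
        left = [resid[i] for i in order[:k]]
        right = [resid[i] for i in order[k:]]
        lm, rm = sum(left) / len(left), sum(right) / len(right)
        sse = (sum((r - lm) ** 2 for r in left)
               + sum((r - rm) ** 2 for r in right))
        if best is None or sse < best[0]:
            best = (sse, cut, lm, rm)
    _, cut, lm, rm = best
    return lambda x: lm if x < cut else rm

# Boosting loop: fit a stump to the residuals, add a damped copy of it.
lr, rounds = 0.3, 80
ensemble = []
pred = [0.0] * n
for _ in range(rounds):
    resid = [y - p for y, p in zip(ys, pred)]
    stump = fit_stump(xs, resid)
    ensemble.append(stump)
    pred = [p + lr * stump(x) for p, x in zip(pred, xs)]

mse = sum((y - p) ** 2 for y, p in zip(ys, pred)) / n
base = sum(y * y for y in ys) / n  # error of the constant-zero model
print(round(base, 3), round(mse, 3))
```

Each round provably lowers the training error (the stump's leaf means are a projection of the residuals), and a prediction at a new point is just `sum(lr * m(x) for m in ensemble)`. This also shows why trees suit the scheme: each stump adds a new split, whereas re-fitting a linear model would only adjust existing coefficients.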
18,224
Are independent variables necessarily "independent" and how does this relate to what's being predicted?
The questions "What do you want to predict?" and "What is the outcome or result here?" often have the same answer, but not always. The terminology of independent variables is widely considered overloaded in statistical sciences. Numerous writers and researchers -- over at least the last several decades -- have suggested using other terms, although there is little consensus on what the best terms are. Some terms are predictors, explanatory variables, controlling variables, regressors, covariates, inputs, .... The term dependent variable similarly is often substituted with something more evocative. For some time response seemed to lead the field of alternatives, but outcome and output have been among frequent recent terms. I note without enthusiasm the existence of regressand. DV and IV are common abbreviations in some fields, sometimes seeming to tag initiates engaged by mutual consent in regression rituals. An objection to DV is that Deo volente remains a standard expansion for many people. A bigger objection to IV is that it is bespoke (by many economists in particular) for instrumental variable. Still, the old terms linger on, and my impression (no names here) is that they are still often recommended in textbooks which on other grounds I regard as poor or incompetent. Terminology aside: There is no absolute implication that so-called independent variables in a regression are statistically independent of each other, and indeed that fact is one of several objections to the terminology. There are even situations in which predictors are deliberately introduced that are highly correlated with each other. Fitting a quadratic in $X$ and $X^2$ is a case in point, as $X$ and $X^2$ are not mutually independent. It's, however, foolish to include two predictors with essentially the same message, as say Fahrenheit and Celsius temperatures. 
In practice, good software has traps to detect that situation and drop predictors as needed, but the researcher still needs to be careful and thoughtful about their choice of predictors. The ideal -- easier to advise as a principle than to ensure in practice -- is for predictors to have a clear rationale and to use no more predictors than are needed for the purpose, and that are reasonable given the size of the dataset. Your example is instructive. Usually salary depends on age, sometimes directly if an individual moves up a salary scale, but more often indirectly through salary being affected by promotion or moves to a different job and those being affected by greater experience, expertise, reputation, and so forth. Conversely, sometimes older people are less attractive to employ (e.g. sports people past their peak). But the crux is that a salary raise doesn’t affect age, whereas a change in age may affect salary (on average, which is what we care about here). Causal paths can exist in indirect ways. All that said, in different problems age is unknown and the goal is to predict it. This is standard in archaeology, forensic sciences, and several Earth and environmental sciences. EDIT 3 August 2022 Although it may surprise many readers, yet another objection to dependent and independent as terminology for variables is that many beginners get them the wrong way round. This could be -- especially for people without English as their first language -- that the words can seem so similar, or that independence is an abstract statistical concept for those not yet familiar with it.
18,225
Are independent variables necessarily "independent" and how does this relate to what's being predicted?
@NickCox gave an excellent answer. A couple additions: You ask But in statistics, can the independent variables be regarded as the "things we are using to make the prediction," while the dependent variable is the "thing being predicted?" To give an explicit answer: Yes, that is often how the terms are used. I use them that way, myself. Second, the preferred terms seem to vary by field as well as by individual. My PhD is in psychometrics (in the psychology department) and "independent" is very common there. Third, the meaning of other terms on Nick's list also varies. Some people use "covariate" to mean "all the X variables" while others use covariate to mean the nuisance parameters that you aren't really interested in but have to account for. Finally, other terms have their own issues: "Predictors" - sometimes we aren't really interested in predicting. "Explanatory variables" - similarly, we sometimes aren't interested in explanation (and, sometimes, we are interested in both explanation and prediction). "Regressor" isn't bad, but it sort of implies that we are doing some form of regression, but then there are independent variables in methods that are not called "regression". It's a mess!
18,226
Are independent variables necessarily "independent" and how does this relate to what's being predicted?
As you have correctly noticed, the term 'independent' has completely different meanings depending on context. Statistical independence is what you are describing between the weather and your dinner. These two events are independent in the sense that the value of one does not affect the other. There are more formal mathematical definitions of this independence, but your basic understanding is right. Independent variables in regression is a term that refers to the set of $x$ variables. Sometimes they are also called predictors or covariates. Indeed, as you mentioned in your example, you can pick age as the response (the dependent variable) and the other three as your independent variables. However, whether this is a good idea or not depends on the practical purpose of what you are doing. In reality, you are interested in predicting salary based on other variables, so you pick salary as the dependent variable and call the others independent variables. There is nothing that forces you to call one of them the dependent variable beforehand - it's entirely up to you and depends on the question you are trying to answer.
18,227
Either quadratic or interaction term is significant in isolation, but neither are together
Synopsis

When the predictors are correlated, a quadratic term and an interaction term will carry similar information. This can cause either the quadratic model or the interaction model to be significant; but when both terms are included, because they are so similar neither may be significant. Standard diagnostics for multicollinearity, such as VIF, may fail to detect any of this. Even a diagnostic plot, specifically designed to detect the effect of using a quadratic model in place of interaction, may fail to determine which model is best.

Analysis

The thrust of this analysis, and its main strength, is to characterize situations like that described in the question. With such a characterization available it's then an easy task to simulate data that behave accordingly.

Consider two predictors $X_1$ and $X_2$ (which we will automatically standardize so that each has unit variance in the dataset) and suppose the random response $Y$ is determined by these predictors and their interaction plus independent random error: $$Y = \beta_1 X_1 + \beta_2 X_2 + \beta_{1,2} X_1 X_2 + \varepsilon.$$ In many cases predictors are correlated. The dataset might look like this:

These sample data were generated with $\beta_1=\beta_2=1$ and $\beta_{1,2}=0.1$. The correlation between $X_1$ and $X_2$ is $0.85$. This doesn't necessarily mean we are thinking of $X_1$ and $X_2$ as realizations of random variables: it can include the situation where both $X_1$ and $X_2$ are settings in a designed experiment, but for some reason these settings are not orthogonal. Regardless of how the correlation arises, one good way to describe it is in terms of how much the predictors differ from their average, $X_0 = (X_1+X_2)/2$. These differences will be fairly small (in the sense that their variance is less than $1$); the greater the correlation between $X_1$ and $X_2$, the smaller these differences will be.
Writing, then, $X_1 = X_0 + \delta_1$ and $X_2 = X_0 + \delta_2$, we can re-express (say) $X_2$ in terms of $X_1$ as $X_2 = X_1 + (\delta_2-\delta_1)$. Plugging this into the interaction term only, the model is $$\eqalign{ Y &= \beta_1 X_1+ \beta_2 X_2 + \beta_{1,2}X_1(X_1+ [\delta_2-\delta_1]) + \varepsilon \\ &= (\beta_1 + \beta_{1,2}[\delta_2-\delta_1]) X_1+ \beta_2 X_2 + \beta_{1,2}X_1^2 + \varepsilon }$$ Provided the values of $\beta_{1,2}[\delta_2-\delta_1]$ vary only a little bit compared to $\beta_1$, we can gather this variation with the true random terms, writing $$Y = \beta_1 X_1+ \beta_2 X_2 + \beta_{1,2}X_1^2 + \left(\varepsilon +\beta_{1,2}[\delta_2-\delta_1] X_1\right)$$ Thus, if we regress $Y$ against $X_1, X_2$, and $X_1^2$, we will be making an error: the variation in the residuals will depend on $X_1$ (that is, it will be heteroscedastic). This can be seen with a simple variance calculation: $$\text{var}\left(\varepsilon +\beta_{1,2}[\delta_2-\delta_1] X_1\right) = \text{var}(\varepsilon) + \left[\beta_{1,2}^2\text{var}(\delta_2-\delta_1)\right]X_1^2.$$ However, if the typical variation in $\varepsilon$ substantially exceeds the typical variation in $\beta_{1,2}[\delta_2-\delta_1] X_1$, that heteroscedasticity will be so low as to be undetectable (and should yield a fine model). (As shown below, one way to look for this violation of regression assumptions is to plot the absolute value of the residuals against the absolute value of $X_1$--remembering first to standardize $X_1$ if necessary.) This is the characterization we were seeking. Remembering that $X_1$ and $X_2$ were assumed to be standardized to unit variance, this implies the variance of $\delta_2-\delta_1$ will be relatively small. To reproduce the observed behavior, then, it should suffice to pick a small absolute value for $\beta_{1,2}$, but make it large enough (or use a large enough dataset) so that it will be significant. 
In short, when the predictors are correlated and the interaction is small but not too small, a quadratic term (in either predictor alone) and an interaction term will be individually significant but confounded with each other. Statistical methods alone are unlikely to help us decide which is better to use.

Example

Let's check this out with the sample data by fitting several models. Recall that $\beta_{1,2}$ was set to $0.1$ when simulating these data. Although that is small (the quadratic behavior is not even visible in the previous scatterplots), with $150$ data points we have a chance of detecting it. First, the quadratic model:

                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  0.03363    0.03046   1.104  0.27130    
    x1           0.92188    0.04081  22.592  < 2e-16 ***
    x2           1.05208    0.04085  25.756  < 2e-16 ***
    I(x1^2)      0.06776    0.02157   3.141  0.00204 ** 

    Residual standard error: 0.2651 on 146 degrees of freedom
    Multiple R-squared: 0.9812, Adjusted R-squared: 0.9808

The quadratic term is significant. Its coefficient, $0.068$, underestimates $\beta_{1,2}=0.1$, but it's of the right size and right sign. As a check for multicollinearity (correlation among the predictors) we compute the variance inflation factors (VIF):

          x1       x2  I(x1^2) 
    3.531167 3.538512 1.009199

Any value less than $5$ is usually considered just fine. These are not alarming. Next, the model with an interaction but no quadratic term:

                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  0.02887    0.02975    0.97 0.333420    
    x1           0.93157    0.04036   23.08  < 2e-16 ***
    x2           1.04580    0.04039   25.89  < 2e-16 ***
    x1:x2        0.08581    0.02451    3.50 0.000617 ***

    Residual standard error: 0.2631 on 146 degrees of freedom
    Multiple R-squared: 0.9815, Adjusted R-squared: 0.9811

          x1       x2    x1:x2 
    3.506569 3.512599 1.004566

All the results are similar to the previous ones. Both models are about equally good (with a very tiny advantage to the interaction model). Finally, let's include both the interaction and quadratic terms:

                Estimate Std. Error t value Pr(>|t|)    
    (Intercept)  0.02572    0.03074   0.837    0.404    
    x1           0.92911    0.04088  22.729   <2e-16 ***
    x2           1.04771    0.04075  25.710   <2e-16 ***
    I(x1^2)      0.01677    0.03926   0.427    0.670    
    x1:x2        0.06973    0.04495   1.551    0.123    

    Residual standard error: 0.2638 on 145 degrees of freedom
    Multiple R-squared: 0.9815, Adjusted R-squared: 0.981

          x1       x2  I(x1^2)    x1:x2 
    3.577700 3.555465 3.374533 3.359040

Now, neither the quadratic term nor the interaction term is significant, because each is trying to estimate a part of the interaction in the model. Another way to see this is that nothing was gained (in terms of reducing the residual standard error) when adding the quadratic term to the interaction model or when adding the interaction term to the quadratic model. It is noteworthy that the VIFs do not detect this situation: although the fundamental explanation for what we have seen is the slight collinearity between $X_1$ and $X_2$, which induces a collinearity between $X_1^2$ and $X_1 X_2$, neither is large enough to raise flags. If we had tried to detect the heteroscedasticity in the quadratic model (the first one), we would be disappointed: in the loess smooth of this scatterplot there is ever so faint a hint that the sizes of the residuals increase with $|X_1|$, but nobody would take this hint seriously.
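The setup just described can be reproduced in outline with a short R sketch. The seed, the noise level, and the exact way of inducing the correlation are assumptions of this sketch, not the original author's code; the coefficients $\beta_1=\beta_2=1$, $\beta_{1,2}=0.1$, correlation near $0.85$, and $n=150$ follow the description above.

```r
set.seed(17)
n  <- 150
x0 <- rnorm(n)                                  # shared component inducing correlation ~0.85
x1 <- as.vector(scale(x0 + rnorm(n, sd = 0.4))) # standardized predictors
x2 <- as.vector(scale(x0 + rnorm(n, sd = 0.4)))
y  <- x1 + x2 + 0.1 * x1 * x2 + rnorm(n, sd = 0.25)

summary(lm(y ~ x1 + x2 + I(x1^2)))            # quadratic term alone tends to be significant
summary(lm(y ~ x1 + x2 + x1:x2))              # so does the interaction term alone
summary(lm(y ~ x1 + x2 + I(x1^2) + x1:x2))    # together, typically neither is
```

With these settings the correlation between x1 and x2 comes out near $1/1.16 \approx 0.86$; the three `summary` calls display the same confounding pattern discussed above.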
Either quadratic or interaction term is significant in isolation, but neither are together
What makes the most sense based on the source of the data? We cannot answer this question for you, and neither can the computer. The reason that we still need statisticians instead of just statistical programs is because of questions like this. Statistics is about more than just crunching the numbers; it is about understanding the question and the source of the data and being able to make decisions based on the science, background, and other information outside the data that the computer looks at. Your teacher is probably hoping that you will contemplate this as part of the assignment. If I had assigned a problem like this (and I have before) I would be more interested in the justification of your answer than in which model you actually chose.

It is probably beyond your current class, but one approach, if there is not a clear scientific reason for preferring one model over the other, is model averaging: you fit both models (and maybe several other models as well), then you average together the predictions (often weighted by the goodness of fit of the different models). Another option, when possible, is to collect more data, if possible choosing the x values so that it becomes clearer what the non-linear vs. interaction effects are. There are some tools for comparing the fit of non-nested models (AIC, BIC, etc.), but for this case they probably will not show enough difference to overrule understanding of where the data comes from and what makes the most sense.
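The model-averaging idea mentioned above can be sketched in a few lines of R. The data here are hypothetical, and weighting by Akaike weights is one common choice among several, not the only way to combine the fits:

```r
set.seed(42)
x1 <- rnorm(100)
x2 <- x1 + rnorm(100, sd = 0.5)                 # correlated predictors
y  <- x1 + x2 + 0.1 * x1 * x2 + rnorm(100, sd = 0.3)

fit.quad <- lm(y ~ x1 + x2 + I(x1^2))           # quadratic candidate
fit.int  <- lm(y ~ x1 + x2 + x1:x2)             # interaction candidate

aics <- c(AIC(fit.quad), AIC(fit.int))
w    <- exp(-(aics - min(aics)) / 2)            # Akaike weights
w    <- w / sum(w)

# Averaged prediction: weighted combination of the two fitted values
avg.pred <- w[1] * fitted(fit.quad) + w[2] * fitted(fit.int)
```

When the two AIC values are close (as they typically are in this situation), the weights are near 0.5 each, which is exactly the point: the data alone do not strongly prefer either model.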
Either quadratic or interaction term is significant in isolation, but neither are together
Yet another possibility, in addition to @Greg's, is to include both terms, even though one is not significant. Including only statistically significant terms is not a law of the universe.
Conflicting results of Type III sum of squares in ANOVA in SAS and R
Type III SS depend on the parameterization used. If I set options(contrasts=c("contr.sum","contr.poly")) before running lm() and then drop1() I get exactly the same type III SS as SAS does. For the R-community dogma on this issue, you should read Venables' Exegeses on linear models. See also: How does one do a Type-III SS ANOVA in R with contrast codes?
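Concretely, the recipe just described looks like this in R. The data frame here is a toy stand-in (an assumption; the original question's data are not shown in this excerpt), but the `options` and `drop1` calls are exactly the ones the answer refers to:

```r
# Toy unbalanced two-factor data, standing in for the question's 'Data'
Data <- data.frame(
  T = factor(rep(c("a", "b"), each = 5)),
  B = factor(c(1, 1, 2, 2, 3, 1, 2, 2, 3, 3)),
  Y = c(8, 9, 12, 13, 15, 10, 14, 15, 18, 19)
)

options(contrasts = c("contr.sum", "contr.poly"))  # sum-to-zero coding, as SAS uses
fit <- lm(Y ~ T * B, data = Data)
drop1(fit, . ~ ., test = "F")                      # Type III F tests for every term
```

The key point is that `drop1` with the default treatment contrasts does not reproduce SAS Type III; the sum-to-zero parameterization set by `options(contrasts = ...)` is what makes the two agree.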
Conflicting results of Type III sum of squares in ANOVA in SAS and R
aov3 in the sasLM package in R will give the same results as SAS Type III. (Continued after output.)

    library(sasLM)
    aov3(Y ~ T*B, Data)  # Data is defined in question

giving:

    Response : Y

                    Df Sum Sq Mean Sq F value  Pr(>F)  
    MODEL            5 77.900  15.580  8.9029 0.02733 *
    T                1 23.077  23.077 13.1868 0.02213 *
    B                2 31.053  15.527  8.8724 0.03384 *
    T:B              2 47.853  23.927 13.6724 0.01629 *
    RESIDUALS        4  7.000   1.750                  
    CORRECTED TOTAL  9 84.900                          
    ---
    Signif. codes:  0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1

drop1 does not necessarily give the same result because of the way it works: it partitions the columns into two groups, (1) those corresponding to the term of interest and (2) the other columns, and then compares the model without (1) to the full model. What SAS Type III does, however, is conceptually partition the columns into 3 groups, rather than 2:

1. those that do not contain the terms of interest
2. the columns that correspond to the term of interest
3. the columns representing interactions that contain the term of interest.

It then modifies the third set of columns in a way described in the first link below and its references to compare the model containing the first group and the modified third group to the full model. See https://stackoverflow.com/questions/75594861/differences-in-sum-of-squares-between-jmp-and-r-when-nested-effects-in-the-model/75595079 and https://pubmed.ncbi.nlm.nih.gov/32656159/ for more information.
Mean and variance of a zero-inflated Poisson distribution
Method 0: The lazy statistician.

Note that for $y \neq 0$ we have $f(y) = (1-\pi) p_y$ where $p_y$ is the probability that a Poisson random variable takes value $y$. Since the term corresponding to $y = 0$ does not affect the expected value, our knowledge of the Poisson and the linearity of expectation immediately tells us that $$ \mu = (1-\pi) \lambda $$ and $$ \mathbb E Y^2 = (1-\pi) (\lambda^2 + \lambda) \> . $$ A little algebra and the identity $\mathrm{Var}(Y) = \mathbb E Y^2 - \mu^2$ yields the result.

Method 1: A probabilistic argument.

It's often helpful to have a simple probabilistic model for how a distribution arises. Let $Z \sim \mathrm{Ber}(1-\pi)$ and $Y \sim \mathrm{Poi}(\lambda)$ be independent random variables. Define $$ X = Z \cdot Y \>. $$ Then, it is easy to see that $X$ has the desired distribution $f$. To check this, note that $\renewcommand{\Pr}{\mathbb P}\Pr(X = 0) = \Pr(Z=0) + \Pr(Z=1, Y=0) = \pi + (1-\pi) e^{-\lambda}$ by independence. Similarly $\Pr(X = k) = \Pr(Z=1, Y=k)$ for $k \neq 0$. From this, the rest is easy, since by the independence of $Z$ and $Y$, $$ \mu = \mathbb E X = \mathbb E Z Y = (\mathbb E Z) (\mathbb E Y) = (1-\pi)\lambda \>, $$ and, $$ \mathrm{Var}(X) = \mathbb E X^2 - \mu^2 = (\mathbb E Z)(\mathbb E Y^2) - \mu^2 = (1-\pi)(\lambda^2 + \lambda) - \mu^2 = \mu + \frac{\pi}{1-\pi}\mu^2 \> . $$

Method 2: Direct calculation.

The mean is easily obtained by a slight trick of pulling one $\lambda$ out and rewriting the limits of the sum. $$ \mu = \sum_{k=1}^\infty (1-\pi) k e^{-\lambda} \frac{\lambda^k}{k!} = (1-\pi) \lambda e^{-\lambda} \sum_{j=0}^\infty \frac{\lambda^j}{j!} = (1-\pi) \lambda \> . $$ A similar trick works for the second moment: $$ \mathbb E X^2 = (1-\pi) \sum_{k=1}^\infty k^2 e^{-\lambda} \frac{\lambda^k}{k!} = (1-\pi)\lambda e^{-\lambda} \sum_{j=0}^\infty (j+1) \frac{\lambda^j}{j!} = (1-\pi)(\lambda^2 + \lambda) \>, $$ from which point we can proceed with the algebra as in the first method.
Addendum: This details a couple tricks used in the calculations above. First recall that $\sum_{k=0}^\infty \frac{\lambda^k}{k!} = e^\lambda$. Second, note that $$ \sum_{k=0}^\infty k \frac{\lambda^k}{k!} = \sum_{k=1}^\infty k \frac{\lambda^k}{k!} = \sum_{k=1}^\infty \frac{\lambda^k}{(k-1)!} = \sum_{k=1}^\infty \frac{\lambda \cdot \lambda^{k-1}}{(k-1)!} = \lambda \sum_{j=0}^\infty \frac{\lambda^j}{j!} = \lambda e^{\lambda} \>, $$ where the substitution $j = k-1$ was made in the second-to-last step. In general, for the Poisson, it is easy to calculate the factorial moments $\mathbb E X^{(n)} = \mathbb E X(X-1)(X-2)\cdots(X-n+1)$ since $$ e^\lambda \mathbb E X^{(n)} = \sum_{k=n}^\infty k(k-1)\cdots(k-n+1) \frac{\lambda^k}{k!} = \sum_{k=n}^\infty \frac{\lambda^n \lambda^{k-n}}{(k-n)!} = \lambda^n \sum_{j=0}^\infty \frac{\lambda^j}{j!} = \lambda^n e^\lambda \>, $$ so $\mathbb E X^{(n)} = \lambda^n$. We get to "skip" to the $n$th index for the start of the sum in the first equality since for any $0 \leq k < n$, $k(k-1)\cdots(k-n+1) = 0$ since exactly one term in the product is zero.
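As a quick numerical check of these formulas, take the (arbitrarily chosen) values $\pi = 0.3$ and $\lambda = 2$, so that $\mu = (1-\pi)\lambda = 1.4$ and $\mathrm{Var}(X) = \mu + \frac{\pi}{1-\pi}\mu^2 = 1.4 + \frac{0.3}{0.7}(1.4)^2 = 2.24$. Simulating the Method 1 construction $X = Z \cdot Y$ in R:

```r
set.seed(1)
pi0    <- 0.3
lambda <- 2
n      <- 1e6

# X = Z * Y with Z ~ Ber(1 - pi0) and Y ~ Poi(lambda), as in Method 1
x <- rbinom(n, 1, 1 - pi0) * rpois(n, lambda)

mean(x)  # close to (1 - pi0) * lambda = 1.4
var(x)   # close to mu + pi0 / (1 - pi0) * mu^2 = 2.24
```

With a million draws both sample moments land within a few thousandths of the theoretical values.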
CDF raised to a power?
I like the other answers, but nobody has mentioned the following yet. The event $\{U \leq t,\ V\leq t \}$ occurs if and only if $\{\mathrm{max}(U,V)\leq t\}$, so if $U$ and $V$ are independent and $W = \mathrm{max}(U,V)$, then $F_{W}(t) = F_{U}(t)*F_{V}(t)$, so for $\alpha$ a positive integer (say, $\alpha = n$) take $X = \mathrm{max}(Z_{1},...Z_{n})$ where the $Z$'s are i.i.d.

For $\alpha = 1/n$ we can switcheroo to get $F_{Z} = F_{X}^n$, so $X$ would be that random variable such that the max of $n$ independent copies has the same distribution as $Z$ (and this would not be one of our familiar friends, in general). The case of $\alpha$ a positive rational number (say, $\alpha = m/n$) follows from the previous since $$ \left(F_{Z}\right)^{m/n} = \left(F_{Z}^{1/n}\right)^{m}. $$ For $\alpha$ an irrational, choose a sequence of positive rationals $a_{k}$ converging to $\alpha$; then the sequence $X_{k}$ (where we can use our above tricks for each $k$) will converge in distribution to the $X$ desired.

This might not be the characterization you are looking for, but it at least gives some idea of how to think about $F_{Z}^{\alpha}$ for $\alpha$ suitably nice. On the other hand, I'm not really sure how much nicer it can really get: you already have the CDF, so the chain rule gives you the PDF, and you can calculate moments till the sun sets...? It's true that most $Z$'s won't have an $X$ that's familiar for $\alpha = \sqrt{2}$, but if I wanted to play around with an example to look for something interesting I might try $Z$ uniformly distributed on the unit interval with $F(z) = z$, $0<z<1$.

EDIT: I wrote some comments in @JMS answer, and there was a question about my arithmetic, so I'll write out what I meant in the hopes that it's more clear.
@cardinal correctly in the comment to @JMS answer wrote that the problem simplifies to $$ g^{-1}(y) = \Phi^{-1}(\Phi^{\alpha}(y)), $$ or more generally when $Z$ is not necessarily $N(0,1)$, we have $$ x = g^{-1}(y) = F^{-1}(F^{\alpha}(y)). $$ My point was that when $F$ has a nice inverse function we can just solve for the function $y = g(x)$ with basic algebra. I wrote in the comment that $g$ should be $$ y = g(x) = F^{-1}(F^{1/\alpha}(x)). $$

Let's take a special case, plug things in, and see how it works. Let $X$ have an Exp(1) distribution, with CDF $$ F(x) = (1 - \mathrm{e}^{-x}),\ x > 0, $$ and inverse CDF $$ F^{-1}(y) = -\ln(1 - y). $$ It is easy to plug everything in to find $g$; after we're done we get $$ y = g(x) = -\ln \left( 1 - (1 - \mathrm{e}^{-x})^{1/\alpha} \right) $$

So, in summary, my claim is that if $X \sim \mathrm{Exp}(1)$ and if we define $$ Y = -\ln \left( 1 - (1 - \mathrm{e}^{-X})^{1/\alpha} \right), $$ then $Y$ will have a CDF which looks like $$ F_{Y}(y) = \left( 1 - \mathrm{e}^{-y} \right)^{\alpha}. $$ We can prove this directly (look at $P(Y \leq y)$ and use algebra to get the expression; in the next to the last step we need the Probability Integral Transform).

Just in the (often repeated) case that I'm crazy, I ran some simulations to double-check that it works, ... and it does. See below. To make the code easier I used two facts: $$ \mbox{If $X \sim F$ then $U = F(X) \sim \mathrm{Unif}(0,1)$.} $$ $$ \mbox{If $U \sim \mathrm{Unif}(0,1)$ then $U^{1/\alpha} \sim \mathrm{Beta}(\alpha,1)$.} $$

The plot of the simulation results follows. The R code used to generate the plot (minus labels) is

    n <- 10000; alpha <- 0.7
    z <- rbeta(n, shape1 = alpha, shape2 = 1)
    y <- -log(1 - z)
    plot(ecdf(y))
    f <- function(x) (pexp(x, rate = 1))^alpha
    curve(f, add = TRUE, lty = 2, lwd = 2)

The fit looks pretty good, I think? Maybe I'm not crazy (this time)?
CDF raised to a power?
I like the other answers, but nobody has mentioned the following yet. The event $\{U \leq t,\ V\leq t \}$ occurs if and only if $\{\mathrm{max}(U,V)\leq t\}$, so if $U$ and $V$ are independent and $
CDF raised to a power? I like the other answers, but nobody has mentioned the following yet. The event $\{U \leq t,\ V\leq t \}$ occurs if and only if $\{\mathrm{max}(U,V)\leq t\}$, so if $U$ and $V$ are independent and $W = \mathrm{max}(U,V)$, then $F_{W}(t) = F_{U}(t)*F_{V}(t)$ so for $\alpha$ a positive integer (say, $\alpha = n$) take $X = \mathrm{max}(Z_{1},...Z_{n})$ where the $Z$'s are i.i.d. For $\alpha = 1/n$ we can switcheroo to get $F_{Z} = F_{X}^n$, so $X$ would be that random variable such that the max of $n$ independent copies has the same distribution as $Z$ (and this would not be one of our familiar friends, in general). The case of $\alpha$ a positive rational number (say, $\alpha = m/n$) follows from the previous since $$ \left(F_{Z}\right)^{m/n} = \left(F_{Z}^{1/n}\right)^{m}. $$ For $\alpha$ an irrational, choose a sequence of positive rationals $a_{k}$ converging to $\alpha$; then the sequence $X_{k}$ (where we can use our above tricks for each $k$) will converge in distribution to the $X$ desired. This might not be the characterization you are looking for, but it least gives some idea of how to think about $F_{Z}^{\alpha}$ for $\alpha$ suitably nice. On the other hand, I'm not really sure how much nicer it can really get: you already have the CDF, so the chain rule gives you the PDF, and you can calculate moments till the sun sets...? It's true that most $Z$'s won't have an $X$ that's familiar for $\alpha = \sqrt{2}$, but if I wanted to play around with an example to look for something interesting I might try $Z$ uniformly distributed on the unit interval with $F(z) = z$, $0<z<1$. EDIT: I wrote some comments in @JMS answer, and there was a question about my arithmetic, so I'll write out what I meant in the hopes that it's more clear. 
@cardinal correctly in the comment to @JMS answer wrote that the problem simplifies to $$ g^{-1}(y) = \Phi^{-1}(\Phi^{\alpha}(y)), $$ or more generally when $Z$ is not necessarily $N(0,1)$, we have $$ x = g^{-1}(y) = F^{-1}(F^{\alpha}(y)). $$ My point was that when $F$ has a nice inverse function we can just solve for the function $y = g(x)$ with basic algebra. I wrote in the comment that $g$ should be $$ y = g(x) = F^{-1}(F^{1/\alpha}(x)). $$ Let's take a special case, plug things in, and see how it works. Let $X$ have an Exp(1) distribution, with CDF $$ F(x) = (1 - \mathrm{e}^{-x}),\ x > 0, $$ and inverse CDF $$ F^{-1}(y) = -\ln(1 - y). $$ It is easy to plug everything in to find $g$; after we're done we get $$ y = g(x) = -\ln \left( 1 - (1 - \mathrm{e}^{-x})^{1/\alpha} \right) $$ So, in summary, my claim is that if $X \sim \mathrm{Exp}(1)$ and if we define $$ Y = -\ln \left( 1 - (1 - \mathrm{e}^{-X})^{1/\alpha} \right), $$ then $Y$ will have a CDF which looks like $$ F_{Y}(y) = \left( 1 - \mathrm{e}^{-y} \right)^{\alpha}. $$ We can prove this directly (look at $P(Y \leq y)$ and use algebra to get the expression, in the next to the last step we need the Probability Integral Transform). Just in the (often repeated) case that I'm crazy, I ran some simulations to double-check that it works, ... and it does. See below. To make the code easier I used two facts: $$ \mbox{If $X \sim F$ then $U = F(X) \sim \mathrm{Unif}(0,1)$.} $$ $$ \mbox{If $U \sim \mathrm{Unif}(0,1)$ then $U^{1/\alpha} \sim \mathrm{Beta}(\alpha,1)$.} $$ The plot of the simulation results follows. The R code used to generate the plot (minus labels) is n <- 10000; alpha <- 0.7 z <- rbeta(n, shape1 = alpha, shape2 = 1) y <- -log(1 - z) plot(ecdf(y)) f <- function(x) (pexp(x, rate = 1))^alpha curve(f, add = TRUE, lty = 2, lwd = 2) The fit looks pretty good, I think? Maybe I'm not crazy (this time)?
18,234
CDF raised to a power?
Proof without words: the lower blue curve is $F$, the upper red curve is $F^\alpha$ (typifying the case $\alpha \lt 1$), and the arrows show how to go from $z$ to $x = g(z)$.
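Numerically, following the arrows amounts to composing one CDF with the inverse of the other. A small stdlib-only Python sketch (illustrative only; the function name is mine) using the standard normal for $F$:

```python
from statistics import NormalDist

_nd = NormalDist()

def follow_arrows(z, alpha):
    # read z up one curve and across/down the other: the x with F(x) = F(z)^alpha
    return _nd.inv_cdf(_nd.cdf(z) ** alpha)

# mapping with alpha and then with 1/alpha returns to the start,
# so the two curves define a pair of inverse transformations
z = 0.3
x = follow_arrows(z, 0.5)
assert abs(follow_arrows(x, 2.0) - z) < 1e-6
```

For $\alpha < 1$ the red curve sits above the blue one, so the construction moves points to the right, as the picture suggests.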
18,235
CDF raised to a power?
Q1) Yes. It's also useful for generating variables which are stochastically ordered; you can see this from @whuber's pretty picture :). $\alpha>1$ swaps the stochastic order. That it's a valid cdf is just a matter of verifying the requisite conditions: $F_Z(z)^\alpha$ has to be cadlag, nondecreasing and limit to $1$ at infinity and $0$ at negative infinity. $F_Z$ has these properties so these are all easy to show. Q2) Seems like it would be pretty difficult analytically, unless $F_Z$ is special.
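Both points in Q1 are easy to check numerically for a concrete $F_Z$. A stdlib-only Python sketch (illustrative; it uses the Exp(1) CDF as $F_Z$):

```python
import math

def F(x):
    # Exp(1) CDF as a concrete F_Z
    return 1.0 - math.exp(-x) if x > 0 else 0.0

xs = [0.1 * k for k in range(1, 80)]

# alpha > 1 pulls the CDF down, so the new variable is stochastically larger;
# alpha < 1 pushes it up, swapping the stochastic order
assert all(F(x) ** 2.0 <= F(x) for x in xs)
assert all(F(x) ** 0.5 >= F(x) for x in xs)

# F^alpha is still nondecreasing with the right limits
vals = [F(x) ** 2.0 for x in xs]
assert all(b >= a for a, b in zip(vals, vals[1:]))
assert F(50.0) ** 2.0 > 1 - 1e-12
```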
18,236
Trigonometric operations on standard deviations
In this interpretation, the triangle is a right triangle of side lengths $X$ and $Y$ distributed binormally with expectations $\mu_x$ and $\mu_y$, standard deviations $\sigma_x$ and $\sigma_y$, and correlation $\rho$. We seek the distribution of $\arctan(Y/X)$. To this end, standardize $X$ and $Y$ so that $$X = \sigma_x \xi + \mu_x$$ and $$Y = \sigma_y \eta + \mu_y$$ with $\xi$ and $\eta$ standard normal variates with correlation $\rho$. Let $\theta$ be an angle and for convenience write $q = \tan(\theta)$. Then $$\mathbb{P}[\arctan(Y/X) \le \theta] = \mathbb{P}[Y \le q X]$$ $$=\mathbb{P}[\sigma_y \eta + \mu_y \le q \left( \sigma_x \xi + \mu_x \right)]$$ $$=\mathbb{P}[\sigma_y \eta - q \sigma_x \xi \le q \mu_x - \mu_y]$$ The left hand side, being a linear combination of Normals, is normal, with mean $0$ and variance $\sigma_y^2 + q^2 \sigma_x^2 - 2 q \rho \sigma_x \sigma_y$. Differentiating the Normal cdf of these parameters with respect to $\theta$ yields the pdf of the angle. The expression is fairly grisly, but a key part of it is the exponential $$\exp \left(-\frac{\left(\mu _y \left(\sigma _y+1\right)-\mu _x \left(\sigma _x+1\right) \tan (\theta )\right){}^2}{2 \left(-2 \rho \sigma _x \sigma _y \tan (\theta )+\sigma _x^2+\sigma _y^2+\tan ^2(\theta )\right)}\right),$$ showing right away that the angle is not normally distributed. However, as your simulations show and intuition suggests, it should be approximately normal provided the variations of the side lengths are small compared to the lengths themselves. In this case a Saddlepoint approximation ought to yield good results for specific values of $\mu_x$, $\mu_y$, $\sigma_x$, $\sigma_y$, and $\rho$, even though a closed-form general solution is not available. The approximate standard deviation will drop right out upon finding the second derivative (with respect to $\theta$) of the logarithm of the pdf (as shown in equations (2.6) and (3.1) of the reference). 
I recommend a computer algebra system (like MatLab or Mathematica) for carrying this out!
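The displayed probability can be checked by brute force with a stdlib-only Python sketch (illustrative; it takes $\rho = 0$ and means large relative to the standard deviations, so that $X > 0$ essentially always and $\arctan(Y/X)$ is well behaved):

```python
import math
import random
from statistics import NormalDist

def angle_cdf(theta, mx, my, sx, sy):
    # P[arctan(Y/X) <= theta] = Phi((q*mx - my) / sqrt(sy^2 + q^2 sx^2)), rho = 0
    q = math.tan(theta)
    return NormalDist().cdf((q * mx - my) / math.sqrt(sy**2 + (q * sx)**2))

def mc_angle_cdf(theta, mx, my, sx, sy, n=200_000, seed=7):
    # brute-force Monte Carlo estimate of the same probability
    random.seed(seed)
    hits = sum(
        math.atan2(random.gauss(my, sy), random.gauss(mx, sx)) <= theta
        for _ in range(n)
    )
    return hits / n
```

For $\mu_x = \mu_y = 10$, $\sigma_x = \sigma_y = 1$ and $\theta = \pi/4$ both routes give $\approx 0.5$, as symmetry demands.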
18,237
Trigonometric operations on standard deviations
You are looking at circular statistics and in particular a circular distribution called the projected normal distribution. For some reason this topic can be a little hard to google, but the two major texts on circular statistics are The Statistical Analysis of Circular Data by Fisher and Directional Statistics by Mardia and Jupp. For a thorough analysis of the projected normal distribution see page 46 of Mardia and Jupp. There are closed form expressions (up to the error function integral) for the distribution, and as whuber has suggested, it looks similar to the normal when its 'variance' (careful here, what does variance mean for a random variable on a circle?!) is small, i.e. when the distribution is quite concentrated at one point (or direction or angle).
18,238
Significance test for two groups with dichotomous variable
BruceET provides one way of analyzing this table. There are several tests for 2 by 2 tables which are all asymptotically equivalent, meaning that with enough data all tests are going to give you the same answer. I present them here with R code for posterity. In my answer, I'm going to transpose the table since I find it easier to have groups as columns and outcomes as rows. The table is then

        Group A   Group B
Yes        350      1700
No        1250      3800

I'll reference the elements of this table as

        Group A   Group B
Yes        $a$       $b$
No         $c$       $d$

$N$ will be the sum of all the elements, $N = a+b+c+d$.

The Chi Square Test

Perhaps the most common test for 2 by 2 tables is the chi square test. Roughly, the null hypothesis of the chi square test is that the proportion of people who answer yes is the same in each group, and in particular it is the same as the proportion of people who answer yes were I to ignore groups completely. The test statistic is $$ X^2_P = \dfrac{(ad-bc)^2N}{n_1n_2m_1m_2} \sim \chi^2_1$$ Here $n_i$ are the column totals and $m_i$ are the row totals. This test statistic is asymptotically distributed as Chi square (hence the name) with one degree of freedom. The math is not important, to be frank. Most software packages, like R, implement this test readily.

m = matrix(c(350, 1250, 1700, 3800), nrow=2)
chisq.test(m, correct = F)

        Pearson's Chi-squared test

data:  m
X-squared = 49.257, df = 1, p-value = 2.246e-12

The correct=F is so that R implements the test as I have written it and does not apply a continuity correction which is useful for small samples. The p value is very small here so we can conclude that the proportion of people who answer yes in each group is different.

Test of Proportions

The test of proportions is similar to the chi square test. Let $\pi_i$ be the probability of answering Yes in group $i$. The test of proportions tests the null that $\pi_1 = \pi_2$. 
In short, the test statistic for this test is $$ z = \dfrac{p_1-p_2}{\sqrt{\dfrac{p_1(1-p_1)}{n_1} + \dfrac{p_2(1-p_2)}{n_2}}} \sim \mathcal{N}(0,1) $$ Again, $n_i$ are the column totals and $p_1 = a/n_1$ and $p_2=b/n_2$. This test statistic has a standard normal asymptotic distribution. If your alternative is that $p_1 \neq p_2$ then you want this test statistic to be larger than 1.96 in absolute value in most cases to reject the null. In R

# Note that the n argument is the column sums
prop.test(x=c(350, 1700), n=c(1600, 5500), correct = F)

data:  c(350, 1700) out of c(1600, 5500)
X-squared = 49.257, df = 1, p-value = 2.246e-12
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.11399399 -0.06668783
sample estimates:
   prop 1    prop 2 
0.2187500 0.3090909 

Note that the X-squared statistic in the output of this test is identical to the chi-square test. There is a good reason for that which I will not talk about here. Note also that this test provides a confidence interval for the difference in proportions, which is an added benefit over the chi square test.

Fisher's Exact Test

Fisher's exact test conditions on the quantities $n_1 = a+c$ and $m_1 = a + b$. The null of this test is that the probability of success in each group is the same, $\pi_1 = \pi_2$, like the test of proportions. The actual null hypothesis in the derivation of the test is about the odds ratio, but that is not important now. The exact probability of observing the table provided is $$ p = \dfrac{n_1! n_2! m_1! m_2!}{N! a! b! c! d!} $$ John Lachin writes

Thus, the probability of the observed table can be considered to arise from a collection of $N$ subjects of whom $m_1$ have positive response, with $a$ of these being drawn from the $n_1$ subjects in group 1 and $b$ from among the $n_2$ subjects in group 2 ($a+b=m_1$, $n_1 + n_2 = N$).

Importantly, this is not the p value. It is the probability of observing this table. 
In order to compute the p value, we need to sum up probabilities of observing tables which are more extreme than this one. Luckily, R does this for us

m = matrix(c(350, 1250, 1700, 3800), nrow=2)
fisher.test(m)

        Fisher's Exact Test for Count Data

data:  m
p-value = 1.004e-12
alternative hypothesis: true odds ratio is not equal to 1
95 percent confidence interval:
 0.5470683 0.7149770
sample estimates:
odds ratio 
 0.6259224 

Note the result is about odds ratios and not about probabilities in each group. It is also worth noting, again from Lachin,

The Fisher-Irwin exact test has been criticized as being too conservative because other unconditional tests have been shown to yield a smaller p value and thus are more powerful.

When the data are large, this point becomes moot because you've likely got enough power to detect small effects, but it all depends on what you're trying to test (as it always does). Thus far, we have examined what are likely to be the most prevalent tests for this sort of data. The following tests are equivalent to the first two, but are perhaps less known. I present them here for completeness.

Odds Ratio

The odds ratio $\widehat{OR}$ for this table is $ad/bc$, but because the odds ratio is bound to be strictly positive, it can be more convenient to work with the log odds ratio $\log(\widehat{OR})$. Asymptotically, the sampling distribution for the log odds ratio is normal. This means we can apply a simple $z$ test. Our test statistic is $$ Z = \dfrac{\log(\widehat{OR}) - \log(OR)}{\sqrt{\hat{V}(\log(\widehat{OR}))}} $$. Here, $\hat{V}(\log(\widehat{OR}))$ is the estimated variance of the log odds ratio and is equal to $1/a + 1/b + 1/c + 1/d$. In R

odds_ratio = m[1, 1]*m[2, 2]/(m[2, 1]*m[1, 2])
vr = sum(1/m)
Z = log(odds_ratio)/sqrt(vr)
p.val = 2*pnorm(abs(Z), lower.tail = F)

which returns a Z value of -6.978754 and a p value less than 0.01. 
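The log odds ratio arithmetic is easy to reproduce without R. A stdlib-only Python sketch (illustrative only) using the same table:

```python
import math

a, b, c, d = 350, 1700, 1250, 3800

log_or = math.log((a * d) / (b * c))       # log odds ratio, log(ad/bc)
se = math.sqrt(1/a + 1/b + 1/c + 1/d)      # square root of the estimated variance
z = log_or / se                            # z statistic under the null OR = 1
p = math.erfc(abs(z) / math.sqrt(2))       # two-sided normal p value
```

This reproduces the Z value of about -6.9788 reported by the R snippet.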
Cochran's test

The test statistic is $$ X^2_u = \dfrac{\left(\dfrac{n_2a-n_1b}{N}\right)^2}{\dfrac{n_1n_2m_1m_2}{N^3}} \sim \chi^2_1 $$ In R

m = matrix(c(350, 1250, 1700, 3800), nrow=2)
a = 350
b = 1700
c = 1250
d = 3800
N = a+b+c+d
n1 = a+c
n2 = b+d
m1 = a+b
m2 = c+d
X = ((n2*a-n1*b)/N)^2 / ((n1*n2*m1*m2)/N^3)  # Look familiar?
X
>>> 49.25663
p.val = pchisq(X, 1, lower.tail=F)
p.val
>>> [1] 2.245731e-12

Conditional Mantel-Haenszel (CMH) Test

The CMH Test (I think I've seen this called the Cochran Mantel-Haenszel Test elsewhere) is a test which conditions on the first column total and first row total. The test statistic is $$ X^2_c = \dfrac{\left( a - \dfrac{n_1m_1}{N} \right)^2}{\dfrac{n_1n_2m_1m_2}{N^2(N-1)}} \sim \chi^2_1$$ In R

a = 350
b = 1700
c = 1250
d = 3800
N = a+b+c+d
n1 = a+c
n2 = b+d
m1 = a+b
m2 = c+d
top = (a - n1*m1/N)^2
bottom = (n1*n2*m1*m2)/(N^2*(N-1))
X = top/bottom
X
>>> 49.24969
p.val = pchisq(X, 1, lower.tail = F)
p.val
>>> [1] 2.253687e-12

Likelihood Ratio Test (LRT) (My Personal Favourite)

The LRT compares the difference in log likelihood between a model which freely estimates the group proportions and a model which only estimates a single proportion (not unlike the chi-square test). This test is a bit overkill in my opinion as other tests are simpler, but hey why not include it? I like it personally because the test statistic is oddly satisfying and easy to remember. The math, as before, is irrelevant for our purposes. 
The test statistic is $$ X^2_G = 2 \log \left( \dfrac{a^a b^b c^c d^d N^N}{n_1^{n_1} n_2^{n_2} m_1^{m_1} m_2^{m_2}} \right) \sim \chi^2_1 $$ In R with some applied algebra to prevent overflow

a = 350
b = 1700
c = 1250
d = 3800
N = a+b+c+d
n1 = a+c
n2 = b+d
m1 = a+b
m2 = c+d
top = c(a, b, c, d, N)
bottom = c(n1, n2, m1, m2)
X = 2*log(exp(sum(top*log(top)) - sum(bottom*log(bottom))))
# Very close to other tests
X
>>> [1] 51.26845
p.val = pchisq(X, 1, lower.tail=F)
p.val
>>> [1] 8.05601e-13

Note that there is a discrepancy in the test statistic for the LRT and the other tests. It has been noted that this test statistic converges to the asymptotic chi square distribution at a slower rate than the chi square test statistic or the Cochran's test statistic.

What Test Do I Use

My suggestion: Test of proportions. It is equivalent to the chi-square test and has the added benefit of being a) directly interpretable in terms of risk difference, and b) provides a confidence interval for this difference (something you should always be reporting). I've not included theoretical motivations for these tests, though understanding those is not essential but captivating in my own opinion. If you're wondering where I got all this information, the book "Biostatistical Methods - The Assessment of Relative Risks" by John Lachin takes a painstakingly long time to explain all this to you in chapter 2.
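As a final cross-check of the closed-form statistics, here is a stdlib-only Python sketch (illustrative only) computing $X^2_P$ and $X^2_G$ directly from their formulas for this table:

```python
import math

a, b, c, d = 350, 1700, 1250, 3800
N = a + b + c + d
n1, n2 = a + c, b + d          # column totals
m1, m2 = a + b, c + d          # row totals

# Pearson chi square: (ad - bc)^2 * N / (n1 * n2 * m1 * m2)
x2_p = (a * d - b * c) ** 2 * N / (n1 * n2 * m1 * m2)

# LRT statistic via sums of t*log(t), avoiding the huge powers in the formula
tops = [a, b, c, d, N]
bottoms = [n1, n2, m1, m2]
x2_g = 2 * (sum(t * math.log(t) for t in tops)
            - sum(s * math.log(s) for s in bottoms))
```

Both values match the R output above: roughly 49.257 for the chi square statistic and 51.268 for the LRT.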
18,239
Significance test for two groups with dichotomous variable
Two ways to do this in R:

Test of two binomial proportions (declining continuity correction on account of large sample sizes.) Highly significant result with P-value nearly $0 < 0.05 = 5\%.$

prop.test(c(350, 1250), c(2050, 4050), cor=F)

        2-sample test for equality of proportions without continuity correction

data:  c(350, 1250) out of c(2050, 4050)
X-squared = 133.78, df = 1, p-value < 2.2e-16
alternative hypothesis: two.sided
95 percent confidence interval:
 -0.1595367 -0.1162838
sample estimates:
   prop 1    prop 2 
0.1707317 0.3086420 

Putting your data into a $2\times 2$ table, to use in chisq.test.

TBL = rbind(c(350, 1250), c(1700, 3800)); TBL
     [,1] [,2]
[1,]  350 1250
[2,] 1700 3800

Chi-squared test of 2-by-2 contingency table. Declining Yates' continuity correction (which IMHO is seldom useful). Same result. P-value near $0$ rejects the null hypothesis that the two groups are homogeneous with regard to the question asked.

chisq.test(TBL, cor=F)

        Pearson's Chi-squared test

data:  TBL
X-squared = 49.257, df = 1, p-value = 2.246e-12
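The chi-square value from the second approach can also be checked by hand, since it equals the square of the pooled two-proportion z statistic. A stdlib-only Python sketch (illustrative only; it uses the column totals 1600 and 5500 implied by the 2-by-2 table above):

```python
import math

x1, n1 = 350, 1600     # group 1: yes count, group total
x2, n2 = 1700, 5500    # group 2: yes count, group total

p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)                         # pooled proportion
se = math.sqrt(p_pool * (1 - p_pool) * (1/n1 + 1/n2))  # pooled standard error
z = (p1 - p2) / se
chi_sq = z * z                                         # equals Pearson's X-squared
```

The squared statistic reproduces the X-squared = 49.257 shown in the chisq.test output.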
18,240
Is Prophet from Facebook any different from a linear regression?
The issue here is to get to an equation that parses the observed data into signal and noise. If your data is simple then your regression approach might work. Care should be taken to understand the assumptions that Prophet makes. You should better understand what Prophet actually does, as it doesn't just fit a simple model but attempts to add some structure. For example, some reflections that I made after reading their well-written introduction might help you in your evaluation. I apologize in advance if I have misunderstood their approach, and would like to be corrected if so.
1) Their lead example has two break-points in trend but they only captured the most obvious one.
2) They ignore any and all ARIMA structure reflecting omitted stochastic series, or the value of using historical values of Y to guide the forecast.
3) They ignore any possible dynamics (lead and lag effects) of user-suggested stochastic and deterministic series. Prophet's causal regression effects are simply contemporaneous.
4) No attempt is made to identify step/level shifts in the series or seasonal pulses, e.g. a change in the MONDAY EFFECT halfway through time due to some unknown external event. Prophet assumes "simple linear growth" rather than validating it by examining alternative possibilities. For a possible example of this see "Forecasting recurring orders for an online subscription business using Facebook Prophet and R".
5) Sines and cosines are an opaque way of dealing with seasonality, while seasonal effects such as day-of-the-week, day-of-the-month, week-of-the-month and month-of-the-year are much more effective/informative when dealing with anthropogenic (i.e. human) effects. Suggesting a frequency of 365.25 for yearly patterns makes little sense because we don't perform the same action on the exact same day as we did last year, while monthly activity is much more persistent; yet Prophet doesn't appear to offer the option of 11 monthly indicators. A weekly frequency of 52 makes little sense because we don't have 52 weeks in each and every year.
6) No attempt is made to validate that the error process is Gaussian so that meaningful tests of significance can be made.
7) No concern for the model error variance being homogeneous, i.e. not changing deterministically at particular points in time (which would suggest Weighted Least Squares), nor for finding an optimal power transform to deal with the error variance being proportional to the expected value (see "When (and why) should you take the log of a distribution (of numbers)?").
8) The user has to pre-specify all possible lead and lag effects around events/holidays. For example, daily sales often start to increase in late November, reflecting a long-term effect of Christmas.
9) No concern that the resulting errors are free of structure, which would suggest ways to improve the model via diagnostic checking for sufficiency.
10) Apparently no concern with improving the model by deleting non-significant structure.
11) There is no facility to obtain a family of simulated forecasts, where confidence limits may not necessarily be symmetrical, via bootstrapping the model's errors with allowance for possible anomalies.
12) Letting the user make assumptions about trends (the number of trend breakpoints and the actual breakpoints) allows unwanted/unusable flexibility in the face of large-scale analysis, which by its name is designed for hands-free large-scale applications.
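To make the seasonality point concrete, here is a hedged sketch (illustrative plain Python, not Prophet's actual code) contrasting the two encodings: opaque Fourier sine/cosine terms versus directly interpretable day-of-week indicator variables.

```python
import math

def fourier_terms(t, period=7.0, K=3):
    # 2K sine/cosine features for a cycle of the given period (t in days).
    feats = []
    for k in range(1, K + 1):
        feats.append(math.sin(2.0 * math.pi * k * t / period))
        feats.append(math.cos(2.0 * math.pi * k * t / period))
    return feats

def weekday_indicators(t):
    # Six 0/1 dummies for day-of-week (day 0 held out as the baseline);
    # each fitted coefficient reads directly as "the Tuesday effect", etc.
    return [1.0 if t % 7 == d else 0.0 for d in range(1, 7)]

print(fourier_terms(3, period=7.0, K=3))  # six smooth basis values
print(weekday_indicators(3))              # single indicator for day 3
```

With the dummy encoding, a change in (say) the Monday effect shows up as one readable coefficient, which is the interpretability argument being made above; the Fourier coefficients have no such direct reading.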
18,241
Is Prophet from Facebook any different from a linear regression?
I have not used it, but this is their preprint's abstract (emphasis mine): Forecasting is a common data science task that helps organizations with capacity planning, goal setting, and anomaly detection. Despite its importance, there are serious challenges associated with producing reliable and high quality forecasts — especially when there are a variety of time series and analysts with expertise in time series modeling are relatively rare. To address these challenges, we describe a practical approach to forecasting “at scale” that combines configurable models with analyst-in-the-loop performance analysis. We propose a modular regression model with interpretable parameters that can be intuitively adjusted by analysts with domain knowledge about the time series. We describe performance analyses to compare and evaluate forecasting procedures, and automatically flag forecasts for manual review and adjustment. Tools that help analysts to use their expertise most effectively enable reliable, practical forecasting of business time series. In the introduction: We have observed two main themes in the practice of creating business forecasts. First, completely automatic forecasting techniques can be hard to tune and are often too inflexible to incorporate useful assumptions or heuristics. Second, the analysts responsible for data science tasks throughout an organization typically have deep domain expertise about the specific products or services that they support, but often do not have training in time series forecasting. So it seems to me that they are not claiming to have made a substantial statistical advance here (although it is capable of far more than the simple model you outline). Instead, they claim that their system makes it feasible for large numbers of people without expertise in time series analysis to generate forecasts while applying their own domain expertise and system-specific constraints. 
If you already have expertise in both time series analysis and in coding complex models, this may not be very helpful to you. But if their claims are true, this could be hugely useful! Science (and commerce) advances not just because of new ideas, but also because of new tools and their spread (see this short Freeman Dyson piece about the topic and this response). To take an example from statistics itself: R did not represent a statistical advance, but it has been massively influential because it made it easy for many more people to do statistical analysis. It has been the scaffolding on which a great deal of statistical understanding has been built. If we are lucky, Prophet may play a similar role. Dyson, Freeman J. "Is science mostly driven by ideas or by tools?" Science 338, no. 6113 (2012): 1426-1427.
18,242
Is Prophet from Facebook any different from a linear regression?
You are missing the change points: piecewise linear splines, which can be implemented in linear models. You are right that, at least in the limiting case, it's a linear regularised regression (L1 and L2 regularisation). Note that there is a separate Prophet model for logistic growth. Also, you are assuming the seasonal factors are additive, but Prophet also supports multiplicative seasonal effects, which seem more natural, at least for growth modelling.
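As a sketch of the first point (an assumed form for illustration, not Prophet's implementation), a piecewise-linear changepoint trend fits inside an ordinary linear model by giving the design matrix one hinge column max(0, t - cp) per changepoint:

```python
def trend_row(t, changepoints):
    # Design-matrix row: intercept, t, then one hinge per changepoint.
    return [1.0, t] + [max(0.0, t - cp) for cp in changepoints]

# With coefficients [intercept, base slope, slope change], the fitted
# trend has slope 1 before t = 5 and slope 1 - 1.5 = -0.5 after it:
coef = [0.0, 1.0, -1.5]

def trend(t):
    return sum(c * f for c, f in zip(coef, trend_row(t, [5.0])))

print(trend(4.0))  # 4.0 (still on the initial slope)
print(trend(7.0))  # 4.0 = 5 + (-0.5) * 2 (past the changepoint)
```

Because the hinge features are fixed once the changepoints are chosen, the whole trend remains linear in its coefficients and can be estimated (and regularised) like any other linear regression.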
18,243
Is Prophet from Facebook any different from a linear regression?
A lot can be done with a simple linear regression but not all that Prophet does. Just one example, you can specify your own change point candidate for a trend, and Prophet will use it as a prior.
18,244
Training error in KNN classifier when K=1
Training error here is the error you'll have when you input your training set to your KNN as a test set. When K = 1, you'll choose the closest training sample to your test sample. Since your test sample is in the training dataset, it'll choose itself as the closest and never make a mistake. For this reason, the training error will be zero when K = 1, irrespective of the dataset. There is one logical assumption here, by the way: your training set must not include identical training samples belonging to different classes, i.e. conflicting information. Some real-world datasets might have this property, though.
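This can be checked directly with a minimal 1-NN classifier (a plain-Python sketch with made-up labels):

```python
def predict_1nn(train_X, train_y, x):
    # Label of the training point with the smallest squared distance to x.
    dists = [sum((a - b) ** 2 for a, b in zip(p, x)) for p in train_X]
    return train_y[dists.index(min(dists))]

train_X = [[0, 0], [1, 0], [0, 1], [5, 5], [6, 5]]
train_y = ['red', 'red', 'red', 'blue', 'blue']

# Each training point is at distance 0 from itself, so it is always its
# own nearest neighbour and the prediction can never be wrong:
errors = sum(predict_1nn(train_X, train_y, x) != y
             for x, y in zip(train_X, train_y))
print(errors)  # 0
```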
18,245
Training error in KNN classifier when K=1
For a visual understanding, you can think of training KNNs as a process of coloring regions and drawing up boundaries around training data. We can first draw boundaries around each point in the training set with the intersection of perpendicular bisectors of every pair of points (a perpendicular bisector animation is shown below). gif source To find out how to color the regions within these boundaries, for each point we look to the neighbor's color. When $K=1$, for each data point $x$ in our training set, we want to find one other point, $x'$, that has the least distance from $x$. The shortest possible distance is always $0$, which means our "nearest neighbor" is actually the original data point itself, $x=x'$. To color the areas inside these boundaries, we look up the category corresponding to each $x$. Let's say our choices are blue and red. With $K=1$, we color regions surrounding red points with red, and regions surrounding blue points with blue. The result would look something like this: Notice how there are no red points in blue regions and vice versa. That tells us there's a training error of 0. Note that decision boundaries are usually drawn only between different categories (throw out all the blue-blue and red-red boundaries), so your decision boundary might look more like this: Again, all the blue points are within blue boundaries and all the red points are within red boundaries; we still have a training error of zero. On the other hand, if we increase $K$ to $K=20$, we have the diagram below. Notice that there are some red points in the blue areas and blue points in red areas. This is what a non-zero training error looks like. When $K = 20$, we color the regions around a point based on that point's category (color in this case) and the category of 19 of its closest neighbors. If most of the neighbors are blue, but the original point is red, the original point is considered an outlier and the region around it is colored blue. That's why you can have so many red data points in a blue area and vice versa. images source
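The same picture can be reproduced numerically with a tiny kNN sketch (illustrative made-up 1-D data): training error is zero at $K=1$, but becomes positive once $K$ is large enough for a point to be outvoted by differently-colored neighbours, as in the $K=20$ diagram.

```python
from collections import Counter

def predict_knn(X, y, x, k):
    # Majority vote among the k training points closest to x.
    order = sorted(range(len(X)), key=lambda i: abs(X[i] - x))
    votes = Counter(y[i] for i in order[:k])
    return votes.most_common(1)[0][0]

X = [0, 1, 2, 3, 4, 10]
y = ['blue', 'blue', 'red', 'blue', 'blue', 'red']

def training_error(k):
    return sum(predict_knn(X, y, X[i], k) != y[i] for i in range(len(X)))

print(training_error(1))  # 0: every point is its own nearest neighbour
print(training_error(3))  # 2: both red points are outvoted by blue ones
```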
18,246
Training error in KNN classifier when K=1
Sorry to be late to the party, but how does this state of affairs make any practical sense? In practice you often use the fit to the training data to select the best model from an algorithm. So we might use several values of k in kNN to decide which is the "best", and then retain that version of kNN to compare to the "best" models from other algorithms and choose an ultimate "best". But under this scheme k=1 will always fit the training data best; you don't even have to run it to know. This holds regardless of how terrible a choice k=1 might be for any other/future data you apply the model to. The obvious alternative, which I believe I have seen in some software, is to omit the data point being predicted from the training data while that point's prediction is made. So when it's time to predict point A, you leave point A out of the training data. I realize that is itself mathematically flawed. But isn't it more likely to produce a better metric of model quality?
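The proposed leave-one-out variant is easy to sketch (hypothetical data): excluding the query point from its own neighbour search turns the uninformative zero training error into a meaningful error estimate.

```python
def nn_label(X, y, x, exclude=None):
    # 1-NN label for x, optionally skipping one training index.
    best_i, best_d = None, None
    for i, p in enumerate(X):
        if i == exclude:
            continue  # leave the point being predicted out of the search
        d = abs(p - x)
        if best_d is None or d < best_d:
            best_i, best_d = i, d
    return y[best_i]

X = [0, 1, 2, 10]
y = ['a', 'a', 'b', 'b']

train_err = sum(nn_label(X, y, X[i]) != y[i] for i in range(len(X)))
loo_err = sum(nn_label(X, y, X[i], exclude=i) != y[i] for i in range(len(X)))
print(train_err, loo_err)  # 0 1: the 'b' at 2 sits closer to the 'a' cluster
```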
18,247
Training error in KNN classifier when K=1
In the KNN classifier with k = 1 and an infinite number of training samples, the minimum error is never higher than twice the Bayesian error. (Detecting Moldy Bread Using an E-Nose and the KNN Classifier, Hossein Rezaei Estakhroueiyeh and Esmat Rashedi, Department of Electrical Engineering, Graduate University of Advanced Technology, Kerman, Iran.)
18,248
What causes lasso to be unstable for feature selection?
UPDATE See this second post for McDonald's feedback on my answer, where the notion of risk consistency is related to stability.
1) Uniqueness vs Stability
Your question is difficult to answer because it mentions two very different topics: uniqueness and stability. Intuitively, a solution is unique if, given a fixed data set, the algorithm always produces the same results. Martin's answer covers this point in great detail. Stability, on the other hand, can be intuitively understood as a property whereby the prediction does not change much when the training data is modified slightly. Stability applies to your question because Lasso feature selection is (often) performed via Cross Validation, hence the Lasso algorithm is performed on different folds of data and may yield different results each time.
Stability and the No Free Lunch Theorem
Using the definition from here, if we define Uniform stability as: An algorithm has uniform stability $\beta$ with respect to the loss function $V$ if the following holds: $$\forall S \in Z^m, \ \ \forall i \in \{ 1,...,m\}, \ \ \sup_z \, | V(f_S,z) - V(f_{S^{|i}},z) | \leq \beta$$ Considered as a function of $m$, the term $\beta$ can be written as $\beta_m$. We say the algorithm is stable when $\beta_m$ decreases as $\frac{1}{m}$. Then the "No Free Lunch Theorem, Xu and Caramanis (2012)" states that: If an algorithm is sparse, in the sense that it identifies redundant features, then that algorithm is not stable (and the uniform stability bound $\beta$ does not go to zero). [...] If an algorithm is stable, then there is no hope that it will be sparse. (pages 3 and 4) For instance, $L_2$ regularized regression is stable and does not identify redundant features, while $L_1$ regularized regression (Lasso) is unstable.
An attempt at answering your question: "I think 'lasso favors a sparse solution' is not an answer to why use lasso for feature selection" — I disagree; the reason Lasso is used for feature selection is that it yields a sparse solution and can be shown to have the IRF property, i.e. it Identifies Redundant Features. "What is the most crucial reason that causes this instability?" — the No Free Lunch Theorem.
Going further: this is not to say that the combination of Cross Validation and Lasso doesn't work... in fact it has been shown experimentally (and with much supporting theory) to work very well under various conditions. The main keywords here are consistency, risk, oracle inequalities, etc. The following slides and paper by McDonald and Homrighausen (2013) describe some conditions under which Lasso feature selection works well: slides and paper: "The lasso, persistence, and cross-validation, McDonald and Homrighausen (2013)". Tibshirani himself also posted a great set of notes on sparsity and linear regression. The various conditions for consistency and their impact on Lasso are an active topic of research and definitely not a trivial question. I can point you towards some relevant research papers:
Video lectures on the No Free Lunch theorem, by Xu H.M.
Bøvelstad et al., A comparison of feature selection approaches for gene selection, (2007)
The lasso, persistence, and cross-validation, McDonald and Homrighausen (2013)
Huang and Bowick, Summary and discussion of: "Stability Selection"
Lim and Yu, Estimation Stability with Cross Validation, (2015)
A talk by Peter Bühlmann: Stability Selection for High-Dimensional Data, (2008), and the accompanying paper
Wang, Nan et al., Random Lasso, (2011)
Stackexchange post: Model stability when dealing with the large $p$, small $n$ problem
Roberts and Nowak, Stabilizing the lasso against cross-validation variability, (2014), which argues that "percentile-lasso can result in large reductions in both model-selection instability and model-selection error, compared to the lasso"
An awesome set of notes by Tibshirani and Wasserman on sparsity and linear regression
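As a toy illustration of this instability (my own stdlib-Python sketch, not taken from any of the papers above): a bare-bones coordinate-descent lasso on two nearly collinear features, where a small change in the response flips which feature gets selected.

```python
def soft_threshold(z, g):
    # Closed-form solution of the one-dimensional lasso problem.
    if z > g:
        return z - g
    if z < -g:
        return z + g
    return 0.0

def lasso_cd(X, y, lam, n_sweeps=2000):
    # Cyclic coordinate descent for 0.5*||y - X b||^2 + lam*||b||_1.
    n, p = len(X), len(X[0])
    beta = [0.0] * p
    for _ in range(n_sweeps):
        for j in range(p):
            # Partial residual: y minus the fit of every feature except j.
            r = [y[i] - sum(beta[k] * X[i][k] for k in range(p) if k != j)
                 for i in range(n)]
            rho = sum(X[i][j] * r[i] for i in range(n))
            z = sum(X[i][j] ** 2 for i in range(n))
            beta[j] = soft_threshold(rho, lam) / z
    return beta

def support(beta, tol=1e-4):
    return [j for j, b in enumerate(beta) if abs(b) > tol]

# Two nearly collinear features: x2 is x1 plus a small alternating wiggle.
x1 = [i - 9.5 for i in range(20)]
x2 = [x1[i] + 0.5 * (-1) ** i for i in range(20)]
X = [[x1[i], x2[i]] for i in range(20)]

# A small change in the response (from x1 to x2) flips the selected feature:
print(support(lasso_cd(X, x1, lam=10.0)))  # only the first feature survives
print(support(lasso_cd(X, x2, lam=10.0)))  # only the second feature survives
```

This is exactly the behaviour the No Free Lunch theorem above formalizes: the sparsity that makes lasso useful for feature selection is what makes the selected set sensitive to small perturbations of the data.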
What causes lasso to be unstable for feature selection?
UPDATE See this second post for McDonald's feedback on my answer where the notion of risk consistency is related to stability. 1) Uniqueness vs Stability Your question is difficult to answer because
What causes lasso to be unstable for feature selection? UPDATE See this second post for McDonald's feedback on my answer where the notion of risk consistency is related to stability. 1) Uniqueness vs Stability Your question is difficult to answer because it mentions two very different topics: uniqueness and stability. Intuitively, a solution is unique if given a fixed data set, the algorithm always produces the same results. Martin's answer cover's this point in great detail. Stability on the other hand can be intuitively understood as one for which the prediction does not change much when the training data is modified slightly. Stability applies to your question because Lasso feature selection is (often) performed via Cross Validation, hence the Lasso algorithm is performed on different folds of data and may yield different results each time. Stability and the No Free Lunch Theorem Using the definition from here if we define Uniform stability as: An algorithm has uniform stability $\beta$ with respect to the loss function $V$ if the following holds: $$\forall S \in Z^m \ \ \forall i \in \{ 1,...,m\}, \ \ \sup | > V(f_s,z) - V(f_{S^{|i},z}) |\ \ \leq \beta$$ Considered as a function of $m$, the term $\beta$ can be written as $\beta_m$. We say the algorithm is stable when $\beta_m$ decreases as $\frac{1}{m}$. then the "No Free Lunch Theorem, Xu and Caramis (2012)" states that If an algorithm is sparse, in the sense that it identifies redundant features, then that algorithm is not stable (and the uniform stability bound $\beta$ does not go to zero). [...] If an algorithm is stable, then there is no hope that it will be sparse. (pages 3 and 4) For instance, $L_2$ regularized regression is stable and does not identify redundant features, while $L_1$ regularized regression (Lasso) is unstable. 
An attempt at answering your question I think 'lasso favors a sparse solution' is not an answer to why use lasso for feature selection I disagree, the reason Lasso is used for feature selection is that it yields a sparse solution and can be shown to have the IRF property, i.e. Identifies Redundant Features. What is the most crucial reason that causes this instability The No Free Lunch Theorem Going further This is not to say that the combination of Cross Validation and Lasso doesn't work... in fact it has been shown experimentally (and with much supporting theory) to work very well under various conditions. The main keywords here are consistency, risk, oracle inequalities etc.. The following slides and paper by McDonald and Homrighausen (2013) describe some conditions under which Lasso feature selection works well: slides and paper: "The lasso, persistence, and cross-validation, McDonald and Homrighausen (2013)". Tibshirani himself also posted an great set of notes on sparcity, linear regression The various conditions for consistency and their impact on Lasso is an active topic of research and is definitely not a trivial question. I can point you towards some research papers which are relevant: Video lectures on the No free lunch theorem, by Xu H.M. 
Bøvelstad et all, A comparison of feature selection approaches for gene selection, (2007) The lasso, persistence, and cross-validation, McDonald and Homrighausen (2013) Huang and Bowick, Summary and discussion of: “Stability Selection” Lim and Yu, Estimation Stability with Cross Validation, (2015) A talk by Peter Buhlmann: Stability Selection for High-Dimensional Data, (2008) and the accompanying paper Wang, Nan et all, Random Lasso, (2011) Stackexchange post: Model stability when dealing with large $p$, small $n$ problem Roberts, Nowakm Stabilizing the lasso against cross-validation variability, (2014) which argue that "percentile-lasso, can result in large reductions in both model-selection instability and model-selection error, compared to the lasso" An awesome set of notes by Tibshirani and Wasserman on sparcity, linear regression
What causes lasso to be unstable for feature selection?
Comments from Daniel J. McDonald, assistant professor at Indiana University Bloomington and author of the two papers mentioned in the original response from Xavier Bourret Sicotte. Your explanation is, generally, quite correct. A few things I would point out: Our goal in the series of papers about CV and lasso was to prove that "Lasso + Cross Validation (CV)" does as well as "Lasso + optimal $\lambda$". In particular, we wanted to show that the predictions do as well (model-free). In order to make statements about correct recovery of coefficients (finding the right non-sparse ones), one needs to assume a sparse truth, which we didn’t want to do. Algorithmic stability implies risk consistency (first proved by Bousquet and Elisseeff, I believe). By risk consistency, I mean that $\|\hat{f}(X) - f(X)\|$ goes to zero, where $f$ is either $E[Y|X]$ or the best predictor within some class if the class is misspecified. This is only a sufficient condition, however. It is mentioned on the slides you linked to as, essentially, “a possible proof technique that won’t work, since lasso isn’t stable”. Stability is only sufficient, not necessary. We were able to show that, under some conditions, “lasso + CV” predicts as well as “lasso + optimal $\lambda$”. The paper you cite gives the weakest possible assumptions (those on slide 16, which allow $p>n$), but uses the constrained form of lasso rather than the more common Lagrangian version. Another paper (http://www3.stat.sinica.edu.tw/statistica/J27N3/J27N34/J27N34.html) uses the Lagrangian version. It also shows that, under much stronger conditions, model selection will also work. A more recent paper (https://arxiv.org/abs/1605.02214) by other people claims to improve on these results (I haven’t read it carefully). In general, because lasso (or any selection algorithm) is not stable, one needs more careful analysis and/or strong assumptions to show that “algorithm + CV” will select the correct model. 
I’m not aware of necessary conditions, though this would be extremely interesting generally. It’s not too hard to show that, for fixed lambda, the lasso predictor is locally Lipschitz in the vector $Y$ (I believe that one or more of Ryan Tibshirani’s papers does this). If one could also argue that this holds in $X_i$, that would be very interesting, and relevant here. The main takeaway that I would add to your response: “stability” implies “risk consistency” or “prediction accuracy”. It can also imply “parameter estimation consistency” under more assumptions. But the no free lunch theorem means “selection” $\rightarrow$ “not stable”. Lasso isn’t stable even with fixed lambda. It is therefore certainly unstable when combined with CV (of any type). However, despite the lack of stability, it is still risk-consistent and selection-consistent with or without CV. Uniqueness is immaterial here.
What causes lasso to be unstable for feature selection?
The Lasso, unlike Ridge regression (see e.g. Hoerl and Kennard, 1970; Hastie et al., 2009), does not always have a unique solution, although it typically does. Whether it does depends on the number of parameters in the model, on whether the variables are continuous or discrete, and on the rank of your design matrix. Conditions for uniqueness can be found in Tibshirani (2013). References: Hastie, T., Tibshirani, R., and Friedman, J. (2009). The elements of statistical learning. Springer series in statistics. Springer, New York, 11th printing, 2nd edition. Hoerl, A. E., and Kennard, R. W. (1970). Ridge regression: Biased estimation for nonorthogonal problems. Technometrics, 12(1), 55-67. Tibshirani, R. J. (2013). The lasso problem and uniqueness. Electronic Journal of Statistics, 7, 1456-1490.
What causes lasso to be unstable for feature selection?
What causes non-uniqueness. For the vectors $s_ix_i$ (where $s_i$ is a sign denoting whether a change of $c_i$ will increase or decrease $\Vert c \Vert_1$), whenever they are affinely dependent: $$\sum \alpha_i s_i x_i = 0 \qquad \text{and} \qquad \sum \alpha_i =0$$ there are an infinite number of combinations $c_i + \gamma\alpha_i$ that change neither the solution $Xc$ nor the norm $\Vert c\Vert_1$. For example: $$y = \begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2 & 1 & 0 \\ 1 & 1 & 1 \end{bmatrix} \cdot \begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}= Xc $$ has for $\Vert c \Vert_1 = 1$ the solutions: $$\begin{bmatrix} c_1 \\ c_2 \\ c_3 \end{bmatrix}= \begin{bmatrix} 0 \\ 1 \\ 0 \end{bmatrix} + \gamma \begin{bmatrix} 1 \\ -2 \\ 1 \end{bmatrix} $$ with $0\leq \gamma \leq \frac{1}{2}$. We can, in a sense, replace the vector $x_2$ by using $x_2 = 0.5 x_1 + 0.5 x_3$. Situations without this condition In the article by Tibshirani (from Phil's answer), three sufficient conditions are described for the lasso to have a unique solution. Linearly independent When the null space of $X$ is trivial, or equivalently when the rank of $X$ equals the number of columns ($M$). In that case you do not have linear combinations like the one above. Affinely independent When the columns $s_i x_i$ are in general position. That is, no $k$ columns represent points in a $(k-2)$-dimensional plane. A $(k-2)$-dimensional plane can be parameterized by any $k-1$ points as $\sum \alpha_i s_ix_i$ with $\sum \alpha_i = 1$. With a $k$-th point $s_jx_j$ in this same plane you would have the conditions $\sum \alpha_i s_ix_i = 0$ with $\sum \alpha_i = 0$. Note that in the example the columns $x_1$, $x_2$ and $x_3$ lie on a single line. (It is, however, a bit awkward here because the signs can be negative; e.g. 
the matrix $\left[ [2 \, 1] \, [1 \, 1] \, [-0 \, -1] \right]$ likewise has no unique solution.) When the columns of $X$ are drawn from a continuous distribution, it is unlikely (probability almost zero) that the columns of $X$ are not in general position. Contrasting with this, if the columns of $X$ come from categorical variables, this probability is not necessarily almost zero. The probability for a continuous variable to equal some particular set of numbers (i.e. the planes corresponding to the affine span of the other vectors) is 'almost' zero. But this is not the case for discrete variables.
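The worked example above can be checked numerically; this small sketch verifies that every $c$ on the segment $(0, 1, 0) + \gamma (1, -2, 1)$, $0 \leq \gamma \leq 1/2$, gives the same fit $Xc$ and the same $\ell_1$ norm.

```python
# Numeric check of the example: an entire segment of coefficient vectors
# achieves the same fit Xc = y and the same L1 norm, so the lasso
# solution is not unique here.
import numpy as np

X = np.array([[2.0, 1.0, 0.0],
              [1.0, 1.0, 1.0]])
y = np.array([1.0, 1.0])

for gamma in (0.0, 0.25, 0.5):
    c = np.array([0.0, 1.0, 0.0]) + gamma * np.array([1.0, -2.0, 1.0])
    assert np.allclose(X @ c, y)              # identical fit for every gamma
    assert np.isclose(np.abs(c).sum(), 1.0)   # identical L1 norm
print("all gammas give Xc = y with ||c||_1 = 1")
```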
Why does this regression NOT fail due to perfect multicollinearity, although one variable is a linear combination of others?
exporterFlorida * importerFlorida + exporterFlorida * importerTexas + exporterTexas * importerFlorida + exporterTexas * importerTexas This is not a linear combination of exporterFlorida, importerFlorida, importerTexas and exporterTexas. In a linear combination, the coefficients of the vectors must be constants. So something like 2*importerFlorida + 3*importerTexas - exporterFlorida - 2*exporterTexas is a linear combination. What you have could possibly be called a quadratic combination, but that's stretching terminology into "I'm making stuff up" land.
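A quick numeric check illustrates the point (the 0/1 indicator data below are made up, only loosely mimicking the dummies in the question): adding the column of elementwise products increases the rank of the design matrix, confirming it is not a linear combination of the original columns.

```python
# Made-up dummy data covering all four (exporter, importer) combinations:
# the elementwise product column is not in the span of the dummies plus
# an intercept, so no perfect multicollinearity arises.
import numpy as np

exporterFlorida = np.array([0, 0, 1, 1, 0, 1])
importerFlorida = np.array([0, 1, 0, 1, 1, 0])
base = np.column_stack([exporterFlorida, importerFlorida, np.ones(6)])
interaction = exporterFlorida * importerFlorida   # a *quadratic* term

rank_without = np.linalg.matrix_rank(base)
rank_with = np.linalg.matrix_rank(np.column_stack([base, interaction]))
print(rank_without, rank_with)  # → 3 4: the rank increases
```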
Number of bins when computing mutual information
There is no best number of bins to estimate mutual information (MI) with histograms. The best way is to choose it via cross-validation if you can, or to rely on a rule of thumb. This is the reason why many other estimators of MI, not based on histograms, have been proposed. The number of bins will depend on the total number of data points $n$. You should try to avoid too many bins, to limit estimation errors for the joint distribution between the two variables. You should also avoid too few bins, to be able to capture the relationship between the two variables. Given that np.histogram2d(x, y, D) generates a 2D histogram with D equal-width bins for both x and y, I would personally choose: $$ D = \lfloor \sqrt{n/5} \rfloor$$ In this case, for two uniformly distributed random variables, you will have on average at least $5$ points per cell of the histogram: $$ \frac{n}{D_X D_Y} \geq 5 \Rightarrow \frac{n}{D^2} \geq 5 \Rightarrow D^2 \leq n/5 \Rightarrow D \leq \sqrt{n/5} \Rightarrow D = \lfloor \sqrt{n/5} \rfloor$$ This is one possible choice that simulates the adaptive partitioning approach proposed in (Cellucci, 2005). The latter approach is often used to estimate MI to infer genetic networks: e.g. in MIDER. If you have lots of data points $n$ and no missing values, you should not worry too much about finding the best number of bins; e.g. if $n = 100,000$. If this is not the case, you might consider correcting MI for finite samples. (Steuer et al., 2002) discusses some corrections to MI for the task of genetic network inference. Estimating the number of bins for a histogram is an old problem. You might be interested in this talk by Lauritz Dieckman about estimating the number of bins for MI. This talk is based on a chapter in Mike X Cohen's book about neural time series. You might choose $D_X$ and $D_Y$ independently and use the rules of thumb for estimating the number of bins in 1D histograms. 
Freedman-Diaconis' rule (no assumption on the distribution): $$D_X = \lceil \frac{\max{X} - \min{X}}{2 \cdot \mbox{IQR} \cdot n^{-1/3}} \rceil$$ where $\mbox{IQR}$ is the difference between the 75-quantile and the 25-quantile. Look at this related question in SE. Scott's rule (normality assumption): $$D_X = \lceil \frac{\max{X} - \min{X}}{3.5 \cdot s_X \cdot n^{-1/3}} \rceil$$ where $s_X$ is the standard deviation for $X$. Sturges' rule (might underestimate the number of bins but good for large $n$): $$D_X = \lceil 1 + \log_2{n} \rceil$$ It is difficult to correctly estimate MI with histograms. You might then choose a different estimator: Kraskov's $k$NN estimator, which is a bit less sensitive to parameter choice: $k = 4$ or $k = 6$ nearest neighbours is often used as default. Paper: (Kraskov, 2003) Estimation of MI with Kernels (Moon, 1995). There are lots of packages for estimating MI: Non-Parametric Entropy Estimation Toolbox for Python. site. Information-dynamics toolkit in Java but available also for Python. site. ITE toolbox in Matlab. site.
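As a sketch of the rule of thumb above (the helper function below is made up for illustration): pick $D = \lfloor \sqrt{n/5} \rfloor$ bins per axis, build the joint histogram with np.histogram2d, and use the standard plug-in MI estimate.

```python
# Plug-in MI estimate from a 2D histogram with D = floor(sqrt(n/5)) bins
# per axis, as suggested above.
import numpy as np

def mutual_information(x, y):
    n = len(x)
    D = int(np.floor(np.sqrt(n / 5)))        # bins per axis (rule of thumb)
    pxy, _, _ = np.histogram2d(x, y, bins=D)
    pxy /= pxy.sum()                         # joint probabilities
    px = pxy.sum(axis=1, keepdims=True)      # marginal of x
    py = pxy.sum(axis=0, keepdims=True)      # marginal of y
    nz = pxy > 0                             # skip empty cells (log 0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

rng = np.random.default_rng(0)
x = rng.normal(size=5000)
mi_dep = mutual_information(x, x + 0.1 * rng.normal(size=5000))
mi_ind = mutual_information(x, rng.normal(size=5000))
print(mi_dep, mi_ind)   # the strongly dependent pair has much larger MI
```

Note the plug-in estimate is biased upward for independent variables at finite $n$, which is exactly the finite-sample correction issue mentioned above.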
Number of bins when computing mutual information
The following binning rules should be added to Simone's list, which have become even more commonplace: Given that mutual information is the sum of marginal entropies adjusted by their joint entropy, $$I(X,Y) = H(X) + H(Y) - H(X,Y) $$ The optimal binning rule for marginal entropy $H(X)$, as well as $H(Y)$, found by Hacine-Gharbi et al. (2012) is $$B_X = round\bigg(\frac{\xi}{6} + \frac{2}{3\xi} + \frac{1}{3} \bigg) $$ where $\xi = \big( 8 + 324N + 12 \sqrt{36N + 729N^2}\big)^{\frac{1}{3}} $ while the optimal binning rule for joint entropy $H(X,Y)$ according to Hacine-Gharbi and Ravier (2018) is $$B_X = B_Y = round\Bigg[ \frac{1}{\sqrt{2}} \Bigg(1 + \sqrt{1+\frac{24N}{1-\rho^2}} \Bigg)^{\frac{1}{2}} \Bigg] $$ Applying these binning rules when measuring the individual terms of $I(X,Y)=H(X)+H(Y)−H(X,Y)$, you should have an optimally binned low-bias estimator of mutual information. Hacine-Gharbi, A., and P. Ravier (2018): “A Binning Formula of Bi-histogram for Joint Entropy Estimation Using Mean Square Error Minimization.” Pattern Recognition Letters, Vol. 101, pp. 21–28. Hacine-Gharbi, A., P. Ravier, R. Harba, and T. Mohamadi (2012): “Low Bias Histogram-Based Estimation of Mutual Information for Feature Selection.” Pattern Recognition Letters, Vol. 33, pp. 1302–8.
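The two formulas transcribe directly into code; the sketch below is my own transcription (the function names are mine):

```python
# Direct transcription of the two binning formulas above.
import numpy as np

def marginal_bins(N):
    """Hacine-Gharbi et al. (2012): bins for a marginal entropy of N points."""
    xi = (8 + 324 * N + 12 * np.sqrt(36 * N + 729 * N**2)) ** (1 / 3)
    return int(round(xi / 6 + 2 / (3 * xi) + 1 / 3))

def joint_bins(N, rho):
    """Hacine-Gharbi and Ravier (2018): bins per axis for the joint entropy,
    given the correlation coefficient rho between X and Y."""
    return int(round((1 / np.sqrt(2))
                     * np.sqrt(1 + np.sqrt(1 + 24 * N / (1 - rho**2)))))

print(marginal_bins(1000), joint_bins(1000, rho=0.5))
```

Note that the joint rule uses more bins the stronger the correlation, since the probability mass then concentrates along the diagonal.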
Number of bins when computing mutual information
I prefer minepy to get an estimate of mutual information in python. You can see the implementation details of the package here, and an example code here. For the sake of easier reference, I copy-paste the example and its output here:

import numpy as np
from minepy import MINE

def print_stats(mine):
    print("MIC", mine.mic())
    print("MAS", mine.mas())
    print("MEV", mine.mev())
    print("MCN (eps=0)", mine.mcn(0))
    print("MCN (eps=1-MIC)", mine.mcn_general())

x = np.linspace(0, 1, 1000)
y = np.sin(10 * np.pi * x) + x
mine = MINE(alpha=0.6, c=15)
mine.compute_score(x, y)

print("Without noise:")
print_stats(mine)
print()

np.random.seed(0)
y += np.random.uniform(-1, 1, x.shape[0])  # add some noise
mine.compute_score(x, y)

print("With noise:")
print_stats(mine)

Which gives this as output:

Without noise:
MIC 1.0
MAS 0.726071574374
MEV 1.0
MCN (eps=0) 4.58496250072
MCN (eps=1-MIC) 4.58496250072

With noise:
MIC 0.505716693417
MAS 0.365399904262
MEV 0.505716693417
MCN (eps=0) 5.95419631039
MCN (eps=1-MIC) 3.80735492206

My experience is that the results are sensitive to alpha, and the default value 0.6 is a reasonable one. However, on my real data alpha=0.3 is much faster, and the estimated mutual informations have a really high correlation with the alpha=0.6 case. So if you are using MI to pick the features with the highest MI, you can simply use a smaller alpha and use the highest values as a replacement with good accuracy.
Does Support Vector Machine handle imbalanced Dataset?
For imbalanced data sets we typically change the misclassification penalty per class. This is called class-weighted SVM, which minimizes the following: $$ \begin{align} \min_{\alpha,b,\xi} &\quad \sum_{i=1}^N\sum_{j=1}^N \alpha_i \alpha_j y_i y_j \kappa(\mathbf{x}_i,\mathbf{x}_j) + C_{pos}\sum_{i\in \mathcal{P}} \xi_i + C_{neg}\sum_{i\in \mathcal{N}}\xi_i, \\ s.t. &\quad y_i\big(\sum_{j=1}^N \alpha_j y_j \kappa(\mathbf{x}_i, \mathbf{x}_j) + b\big) \geq 1-\xi_i,& i=1\ldots N \\ &\quad \xi_i \geq 0, & i=1\ldots N \end{align}$$ where $\mathcal{P}$ and $\mathcal{N}$ represent the positive and negative training instances. In the standard SVM we only have a single $C$ value, whereas now we have two. The misclassification penalty for the minority class is chosen to be larger than that of the majority class. This approach was introduced quite early; it is mentioned, for instance, in a 1997 paper: Edgar Osuna, Robert Freund, and Federico Girosi. Support Vector Machines: Training and Applications. Technical Report AIM-1602, 1997. (pdf) Essentially this is equivalent to oversampling the minority class: for instance, if $C_{pos} = 2 C_{neg}$ this is entirely equivalent to training a standard SVM with $C=C_{neg}$ after including every positive instance twice in the training set.
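In scikit-learn, for example, this idea corresponds to the class_weight argument of SVC, which scales $C$ per class (the data below are made up; the weights mirror the $C_{pos} = 2 C_{neg}$ example):

```python
# Sketch: class-weighted SVM via scikit-learn's class_weight, which sets
# a per-class C. Here the minority (positive) class gets twice the
# penalty, i.e. C_pos = 2 * C_neg.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=0.0, size=(200, 2)),   # majority class
               rng.normal(loc=2.0, size=(20, 2))])   # minority class
y = np.array([0] * 200 + [1] * 20)

clf = SVC(kernel="rbf", C=1.0, class_weight={0: 1.0, 1: 2.0}).fit(X, y)
print("training accuracy:", clf.score(X, y))
```

Passing class_weight="balanced" instead picks weights inversely proportional to the observed class frequencies.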
Does Support Vector Machine handle imbalanced Dataset?
SVMs are able to deal with datasets with imbalanced class frequencies. Many implementations allow you to have a different value of the slack penalty (C) for the positive and negative classes (which is asymptotically equivalent to changing the class frequencies). I would recommend setting the values of these parameters to maximize generalization performance on a test set where the class frequencies are those you expect to see in operational use. I was one of many people who wrote papers on this; here is mine, and I'll see if I can find something more recent/better. Try Veropoulos, Campbell and Cristianini (1999).
Clopper-Pearson for non mathematicians
When you say you're used to confidence intervals containing an expression for variance, you're thinking of the Gaussian case, in which information about the two parameters characterizing the population—one its mean & the other its variance—is summarized by the sample mean & sample variance. The sample mean estimates the population mean, but the precision with which it does so depends on the population variance, estimated in turn by the sample variance.

The binomial distribution, on the other hand, has just one parameter—the probability of success on each individual trial—& all the information given by the sample about this parameter is summarized in the total no. of successes out of so many independent trials. The population variance and mean are both determined by this parameter.

You can get a Clopper–Pearson 95% (say) confidence interval for the parameter $\pi$ by working directly with the binomial probability mass function. Suppose you observe $x$ successes out of $n$ trials. The p.m.f. is $$\Pr(X=x)= \binom{n}{x}\pi^x(1-\pi)^{n-x}$$ Increase $\pi$ until the probability of $x$ or fewer successes falls to 2.5%: that's your upper bound. Decrease $\pi$ until the probability of $x$ or more successes falls to 2.5%: that's your lower bound. (I suggest you actually try doing this if it's not clear from reading about it.)

What you're doing here is finding the values of $\pi$ that, when taken as a null hypothesis, would lead to its (only just) being rejected by a two-tailed test at a significance level of 5%. In the long run, bounds calculated this way cover the true value of $\pi$, whatever it is, at least 95% of the time.
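That search is easy to carry out numerically; below is a plain-Python sketch (the helper names and the 7-successes-in-20-trials example are my own) that finds each bound by bisection on the exact binomial tail probability:

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k + 1))

def _bisect(f, lo, hi, tol=1e-12):
    """Root of a decreasing function f on [lo, hi]."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(x, n, alpha=0.05):
    # upper bound: increase pi until P(X <= x) drops to alpha/2
    upper = 1.0 if x == n else _bisect(lambda p: binom_cdf(x, n, p) - alpha / 2, 0.0, 1.0)
    # lower bound: decrease pi until P(X >= x) drops to alpha/2
    lower = 0.0 if x == 0 else _bisect(lambda p: alpha / 2 - (1 - binom_cdf(x - 1, n, p)), 0.0, 1.0)
    return lower, upper

lo, hi = clopper_pearson(7, 20)   # 95% interval for 7 successes in 20 trials
```

By construction, at the returned bounds the relevant one-sided tail probability equals exactly 2.5%, which is the defining property described above.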
Smoothing time series data
First up, the requirements for compression and analysis/presentation are not necessarily the same -- indeed, for analysis you might want to keep all the raw data and have the ability to slice and dice it in various ways. And what works best for you will depend very much on what you want to get out of it. But there are a number of standard tricks that you could try:

- Use differences rather than raw data.
- Use thresholding to remove low-level noise. (Combine with differencing to ignore small changes.)
- Use variance over some time window rather than the average, to capture activity level rather than movement.
- Change the time base from fixed intervals to variable-length runs, and accumulate into a single data point sequences of changes for which some criterion holds (eg, differences in the same direction, up to some threshold).
- Transform data from real values to ordinal (eg low, medium, high); you could also do this on time bins rather than individual samples -- eg, activity level for each 5 minute stretch.
- Use an appropriate convolution kernel* to smooth more subtly than your moving average, or to pick out features of interest such as sharp changes.
- Use an FFT library to calculate a power spectrum.

The last may be a bit expensive for your purposes, but would probably give you some very useful presentation options, in terms of "sleep rhythms" and such. (I know next to nothing about Android, but it's conceivable that some/many/all handsets might have built-in DSP hardware that you can take advantage of.)

* Given how central convolution is to digital signal processing, it's surprisingly difficult to find an accessible intro online. Or at least in 3 minutes of googling. Suggestions welcome!
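A few of these tricks (differencing, thresholding, and windowed variance as an activity measure) can be sketched in a few lines; the trace, threshold, and window size below are arbitrary illustrative choices:

```python
def diffs(xs):
    """First differences rather than raw data."""
    return [b - a for a, b in zip(xs, xs[1:])]

def threshold(xs, eps):
    """Zero out changes smaller than eps (low-level noise)."""
    return [x if abs(x) >= eps else 0.0 for x in xs]

def window_variance(xs, w):
    """Sliding-window variance: an 'activity level' rather than a position."""
    out = []
    for i in range(len(xs) - w + 1):
        seg = xs[i:i + w]
        m = sum(seg) / w
        out.append(sum((v - m) ** 2 for v in seg) / w)
    return out

# Invented accelerometer-like trace: quiet, a jump up, then a jump back down
signal = [0.0, 0.02, -0.01, 0.03, 2.0, 2.1, 1.9, 2.05, 0.0, 0.01]
jumps = threshold(diffs(signal), eps=0.5)   # keeps only the two big transitions
activity = window_variance(signal, w=4)     # high where the trace is "busy"
```

Combining differencing with thresholding, as the answer suggests, turns the trace into a sparse list of "something happened here" events, which compresses very well.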
Smoothing time series data
There are many nonparametric smoothing algorithms including splines and loess. But they will smooth out the sudden changes too. So will low-pass filters. I think you might need a wavelet-based smoother which allows the sudden jumps but still smooths the noise. Check out Percival and Walden (2000) and the associated R package. Although you want a java solution, the algorithms in the R package are open-source and you might be able to translate them.
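As a minimal stand-in for the wavelet idea (nowhere near the methods in Percival and Walden, but enough to see why thresholding detail coefficients keeps sudden jumps while removing noise), here is a Haar-wavelet shrinkage sketch in plain Python; the step signal and threshold are invented:

```python
def haar_forward(xs):
    """Full Haar decomposition; returns (final average, detail levels, fine to coarse)."""
    details = []
    while len(xs) > 1:
        details.append([(a - b) / 2 for a, b in zip(xs[0::2], xs[1::2])])
        xs = [(a + b) / 2 for a, b in zip(xs[0::2], xs[1::2])]
    return xs[0], details

def haar_inverse(avg, details):
    xs = [avg]
    for det in reversed(details):          # coarsest level first
        nxt = []
        for a, d in zip(xs, det):
            nxt += [a + d, a - d]
        xs = nxt
    return xs

def haar_denoise(xs, thresh):
    """Hard-threshold the detail coefficients: small ones are treated as noise,
    large ones (the sudden jumps) are kept."""
    avg, details = haar_forward(list(xs))
    details = [[d if abs(d) >= thresh else 0.0 for d in lev] for lev in details]
    return haar_inverse(avg, details)

# Step signal plus small alternating "noise"; length must be a power of two here.
noisy = [0 + 0.1 * (-1) ** i for i in range(8)] + [4 + 0.1 * (-1) ** i for i in range(8)]
clean = haar_denoise(noisy, thresh=0.5)
```

The small detail coefficients carrying the alternating noise fall below the threshold and are zeroed, while the large coarse-level coefficient carrying the jump survives, so the step is reconstructed cleanly.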
Smoothing time series data
This is somewhat tangential to what you're asking, but it may be worth taking a look at the Kalman filter.
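For the simplest possible illustration, a scalar Kalman filter for a random-walk state observed with noise takes only a few lines; the variances `q` and `r` and the toy measurement stream below are arbitrary choices for this sketch:

```python
# Scalar Kalman filter: state x_k = x_{k-1} + w (var q), observation z_k = x_k + v (var r)
def kalman_1d(zs, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    x, p, out = x0, p0, []
    for z in zs:
        p = p + q               # predict: state variance grows by process noise
        k = p / (p + r)         # Kalman gain
        x = x + k * (z - x)     # correct with the measurement innovation
        p = (1 - k) * p
        out.append(x)
    return out

# Noisy measurements of a constant level 5.0 (deterministic +/-0.5 "noise")
zs = [5.0 + 0.5 * (-1) ** i for i in range(50)]
est = kalman_1d(zs)
```

With a small process variance the gain shrinks over time, so the estimate settles near the true level instead of chasing every noisy observation.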
Smoothing time series data
Savitzky-Golay smoothing could be a good answer. It's an extremely efficient implementation of least squares smoothing over a sliding time window (a convolution over that data) that comes down to just multiplying the data in each time window by fixed constants. You can fit values, derivatives, second derivatives, and higher. You choose how spiky you allow the results to be, based on the size of the sliding time window and the degree of the polynomial fit on that time window.

That was originally developed for chromatography, where peaks are an essential part of the results. One desirable property of SG smoothing is that the locations of the peaks are preserved. For instance, a 5 to 11 point window with a cubic curve fit cuts noise but still preserves peaks.

There's a good article in Wikipedia, although it's referred to as the Savitzky-Golay filter (doing slight violence to normal terminology from systems control theory and time series analysis, as well as the original paper, where it's correctly called smoothing). Also be aware that there is (an argument over) an error in the Wikipedia article for formulas for second derivative estimates -- see the Talk section for that article.

EDIT: The Wikipedia article was fixed.
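A minimal sketch of the idea, using the classic published 5-point cubic (equivalently quadratic) coefficients $(-3, 12, 17, 12, -3)/35$ and leaving the two points at each end unsmoothed for brevity:

```python
# 5-point cubic Savitzky-Golay smoothing: each output value is a fixed
# linear combination of the 5 surrounding inputs.
SG5 = [-3 / 35, 12 / 35, 17 / 35, 12 / 35, -3 / 35]

def savgol5(ys):
    out = list(ys)                      # endpoints left untouched here
    for i in range(2, len(ys) - 2):
        out[i] = sum(c * ys[i + k] for c, k in zip(SG5, range(-2, 3)))
    return out

# Because the local fit is cubic, any cubic polynomial passes through
# unchanged -- the "peaks are preserved" property in miniature.
cubic = [t ** 3 for t in range(-5, 6)]
smoothed = savgol5(cubic)
```

Pure high-frequency wiggle, by contrast, is attenuated, which is exactly the cut-noise-keep-shape behavior described above.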
LOESS that allows discontinuities
It sounds like you want to perform multiple changepoint detection followed by independent smoothing within each segment. (Detection can be online or not, but your application is not likely to be online.) There's a lot of literature on this; Internet searches are fruitful.

DA Stephens wrote a useful introduction to Bayesian changepoint detection in 1994 (App. Stat. 43 #1, pp. 159-178: JSTOR). More recently Paul Fearnhead has been doing nice work (e.g., Exact and efficient Bayesian inference for multiple changepoint problems, Stat Comput (2006) 16: 203-213: Free PDF). A recursive algorithm exists, based on a beautiful analysis by D Barry & JA Hartigan: Product Partition Models for Change Point Models, Ann. Stat. 20:260-279: JSTOR; A Bayesian Analysis for Change Point Problems, JASA 88:309-319: JSTOR. One implementation of the Barry & Hartigan algorithm is documented in O. Seidou & TBMJ Ouarda, Recursion-based Multiple Changepoint Detection in Multivariate Linear Regression and Application to River Streamflows, Water Res. Res., 2006: Free PDF.

I haven't looked hard for any R implementations (I had coded one in Mathematica a while ago) but would appreciate a reference if you do find one.
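None of the Bayesian machinery cited above, but the core building block of changepoint detection can be sketched as the simplest least-squares single-split search (binary segmentation applies this recursively to each segment); the toy series is invented:

```python
def sse(seg):
    """Within-segment sum of squared deviations from the segment mean."""
    m = sum(seg) / len(seg)
    return sum((v - m) ** 2 for v in seg)

def best_changepoint(xs):
    """Index k (start of second segment) minimizing total within-segment SSE."""
    costs = {k: sse(xs[:k]) + sse(xs[k:]) for k in range(1, len(xs))}
    return min(costs, key=costs.get)

# Level shift from ~1 to ~5 after the fifth observation
series = [1.0, 1.2, 0.8, 1.1, 0.9, 5.0, 5.2, 4.8, 5.1, 4.9]
split = best_changepoint(series)
```

After splitting, each segment can be smoothed independently, which is the two-stage scheme described at the top of the answer.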
LOESS that allows discontinuities
Do it with Koenker's broken line regression; see page 18 of this vignette: http://cran.r-project.org/web/packages/quantreg/vignettes/rq.pdf

In response to whuber's last comment: this estimator is defined as follows. Let $x\in\mathbb{R}$, $x_{(i)}\geq x_{(i-1)}\;\forall i$, $e_i:=y_{i}-\beta_{i}x_{(i)}-\beta_0$, $z^+=\max(z,0)$, $z^-=\max(-z,0)$, $\tau \in (0,1)$, $\lambda\geq 0$:

$$\min_{\beta\in\mathbb{R}^n}\; \sum_{i=1}^{n} \tau e_i^+ + \sum_{i=1}^{n}(1-\tau)e_i^- + \lambda\sum_{i=2}^{n}|\beta_{i}-\beta_{i-1}|$$

$\tau$ gives the desired quantile (i.e. in the example, $\tau=0.9$). $\lambda$ controls the number of breakpoints: for large $\lambda$ this estimator shrinks to no break point (corresponding to the classical linear quantile regression estimator).

Quantile Smoothing Splines. Roger Koenker, Pin Ng, Stephen Portnoy. Biometrika, Vol. 81, No. 4 (Dec., 1994), pp. 673-680.

PS: there is an open access working paper with the same name by the same authors, but it's not the same thing.
LOESS that allows discontinuities
Here are some methods and associated R packages to solve this problem:

- Wavelet thresholding estimation in regression allows for discontinuities. You may use the package wavethresh in R.
- A lot of tree-based methods (not far from the idea of wavelets) are useful when you have discontinuities. Hence package treethresh, package tree!
- In the family of "local maximum likelihood" methods, among others: the work of Polzehl and Spokoiny, Adaptive Weights Smoothing (package aws), and the work of Catherine Loader (package locfit).

I guess any kernel smoother with a locally varying bandwidth makes the point, but I don't know an R package for that.

Note: I don't really get what the difference is between LOESS and regression... is it the idea that in LOESS algorithms should be "on line"?
LOESS that allows discontinuities
It should be possible to code a solution in R using the non-linear regression function nls, B-splines (the bs function in the splines package, for example), and the ifelse function.
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?
I. A direct answer to the OP

Answer: It depends on what you mean by “heavy tails.” By some definitions of “heavy tails,” the answer is “no,” as pointed out here and elsewhere.

Why do we care about heavy tails? Because we care about outliers (substitute the phrase “rare, extreme observation” if you have a problem with the word “outlier”; however, I will use the term “outlier” throughout for brevity). Outliers are interesting from several points of view:

- In finance, outlier returns cause much more money to change hands than typical returns (see Taleb’s discussion of black swans).
- In hydrology, the outlier flood will cause enormous damage and needs to be planned for.
- In statistical process control, outliers indicate “out of control” conditions that warrant immediate investigation and rectification.
- In regression analysis, outliers have enormous effects on the least squares fit.
- In statistical inference, the degree to which distributions produce outliers has an enormous effect on standard t tests for mean values. Similarly, the degree to which a distribution produces outliers has an enormous effect on the accuracy of the usual estimate of the variance of that distribution.

So for various reasons, there is a great interest in outliers in data, and in the degree to which a distribution produces outliers. Notions of heavy-tailedness were therefore developed to characterize outlier-prone processes and data.

Unfortunately, the commonly-used definition of “heavy tails” involving exponential bounds and asymptotes is too limited in its characterization of outliers and outlier-prone data generating processes: it requires tails extending to infinity, so it rules out bounded distributions that produce outliers. Further, the standard definition does not even apply to a data set, since all empirical distributions are necessarily bounded.
Here is an alternative class of definitions of “heavy-tailedness,” which I will call “tail-leverage($m$)” to avoid confusion with existing definitions of heavy-tailedness, that addresses this concern.

Definition: Assume absolute moments up to order $m>2$ exist for random variables $X$ and $Y$. Let $U = |(X - \mu_X)/\sigma_X|^m$ and let $V =|(Y - \mu_Y)/\sigma_Y|^m$. If $E(V) > E(U)$, then $Y$ is said to have greater tail-leverage($m$) than $X$.

The mathematical rationale for the definition is as follows: Suppose $E(V) > E(U)$, and let $\mu_U = E(U)$. Draw the pdf (or pmf, in the discrete case, or in the case of an actual data set) of $V$, which is $p_V(v)$. Place a fulcrum at $\mu_U$ on the horizontal axis. Because of the well-known fact that the distribution balances at its mean, the distribution $p_V(v)$ “falls to the right” of the fulcrum at $\mu_U$. Now, what causes it to “fall to the right”? Is it the concentration of mass less than 1, corresponding to the observations of $Y$ that are within a standard deviation of the mean? Is it the shape of the distribution of $Y$ corresponding to observations that are within a standard deviation of the mean? No, these aspects are to the left of the fulcrum, not to the right. It is the extremes of the distribution (or data) of $Y$, in one or both tails, that produce high positive values of $V$, which cause the “falling to the right.”

To illustrate, consider the following two graphs of discrete distributions. The top distribution has kurtosis = 2.46, "platykurtic," and the bottom has kurtosis = 3.45, "leptokurtic." Notice that kurtosis is my tail leverage measure with $m=4$. Both distributions are scaled to a mean of 0.0 and variance of 1.0.

Now, consider the distributions of the data values raised to the fourth power, with the red vertical bar indicating the mean of the top distribution: the top distribution balances at the red bar, which locates the kurtosis of the original, untransformed data (2.46).
But the bottom distribution, having the larger mean (3.45, the kurtosis of the original, untransformed data), "falls to the right" of the red bar located at 2.46. What causes it to "fall to the right"? Is it greater peakedness? No, because the first distribution is more peaked. Is it greater concentration of mass near the mean? No, because this would make it "fall to the left." As is apparent from the graph, it is the extreme values that make it "fall to the right."

BTW, the term “leverage” should now be clear, given the physical representation involving the point of balance. But it is worth noting that, in the characterization of the distribution “falling to the right,” the “tail leverage” measures can legitimately be called measures of “tail weight.” I chose not to do that because the "leverage" term is more precise.

Much has been made of the fact that kurtosis does not correspond directly to the standard definition of “heavy tails.” Of course it doesn’t. Neither does it correspond to any but one of the infinitely many definitions of “tail leverage” I just gave. If you restrict your attention to the case where $m=4$, then an answer to the OP’s question is as follows: greater tail leverage (using $m=4$ in the definition) does indeed imply greater kurtosis (and conversely). They are identical.

Incidentally, the “leverage” definition applies equally to data as it does to distributions: when you apply the kurtosis formula to the empirical distribution, it gives you the estimate of kurtosis without all the so-called “bias corrections.” (This estimate has been compared to others and is reasonable, often better in terms of accuracy; see "Comparing Measures of Sample Skewness and Kurtosis," D. N. Joanes and C. A. Gill, Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 47, No. 1 (1998), pp. 183-189.)
My stated leverage definition also resolves many of the various comments and answers given in response to the OP: some beta distributions can be more greatly tail-leveraged (even if “thin-tailed” by other measures) than the normal distribution. This implies a greater outlier potential of such distributions than the normal, as described above regarding leverage and the fulcrum, despite the normal distribution having infinite tails and the beta being bounded. Further, uniforms mixed with classical “heavy-tailed” distributions are still "heavy-tailed," but can have less tail leverage than the normal distribution, provided the mixing probability on the “heavy-tailed” distribution is sufficiently low so that the extremes are very uncommon, and assuming finite moments.

Tail leverage is simply a measure of the extremes (or outliers). It differs from the classic definition of heavy-tailedness, even though it is arguably a viable competitor. It is not perfect; a notable flaw is that it requires finite moments, so quantile-based versions would be useful as well. Such alternative definitions are needed because the classic definition of “heavy tails” is far too limited to characterize the universe of outlier-prone data-generating processes and their resulting data.

II. My paper in The American Statistician

My purpose in writing the paper “Kurtosis as Peakedness, 1905-2014: R.I.P.” was to help people answer the question, “What does higher (or lower) kurtosis tell me about my distribution (or data)?” I suspected the common interpretations (still seen, by the way), “higher kurtosis implies more peaked, lower kurtosis implies more flat,” were wrong, but could not quite put my finger on the reason. And I even wondered whether maybe they had an element of truth, given that Pearson said it, and even more compelling, that R.A. Fisher repeated it in all revisions of his famous book.
However, I was not able to connect any math to the statement that higher (lower) kurtosis implied greater peakedness (flatness). All the inequalities went in the wrong direction. Then I hit on the main theorem of my paper.

Contrary to what has been stated or implied here and elsewhere, my article was not an “opinion” piece; rather, it was a discussion of three mathematical theorems. Yes, The American Statistician (TAS) does often require mathematical proofs. I would not have been able to publish the paper without them. The following three theorems were proven in my paper, although only the second was listed formally as a “Theorem.”

Main Theorem: Let $Z_X = (X - \mu_X)/\sigma_X$ and let $\kappa(X) = E(Z_X^4)$ denote the kurtosis of $X$. Then for any distribution (discrete, continuous or mixed, which includes actual data via their discrete empirical distribution), $$E\{Z_X^4 I(|Z_X| > 1)\}\le\kappa(X)\le E\{Z_X^4 I(|Z_X| > 1)\} +1 .$$

This is a rather trivial theorem to prove but has major consequences: it states that the shape of the distribution within a standard deviation of the mean (which ordinarily would be where the “peak” is thought to be located) contributes very little to the kurtosis. Instead, the theorem implies that for all data and distributions, kurtosis must lie within $\pm 0.5$ of $E\{Z_X^4 I(|Z_X| > 1)\} + 0.5$. A very nice visual image of this theorem by user "kjetil b Halvorsen" is given at https://stats.stackexchange.com/a/362745/102879; see my comment that follows as well.

The bound is sharpened in the Appendix of my TAS paper:

Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on $[0,1]$. Then the “+1” of the main theorem can be sharpened to “+0.5”.

This simply amplifies the point of the main theorem that kurtosis is mostly determined by the tails. More recently, @sextus-empiricus was able to reduce the "$+0.5$" bound to "$+1/3$"; see https://math.stackexchange.com/a/3781761 .
A third theorem proven in my TAS paper states that large kurtosis is mostly determined by (potential) data that are $b$ standard deviations away from the mean, for arbitrary $b$. Theorem 3: Consider a sequence of random variables $X_i$,$ i = 1,2,\dots$, for which $\kappa(X_i) \rightarrow \infty$. Then $E\{Z_i^4I(|Z_i| > b)\}/ \kappa(X_i) \rightarrow 1$, for each $b>0$. The third theorem states that high kurtosis is mostly determined by the most extreme outliers; i.e., those observations that are $b$ or more standard deviations from the mean. These are mathematical theorems, so there can be no argument with them. Supposed “counterexamples” given in this thread and in other online sources are not counterexamples; after all, a theorem is a theorem, not an opinion. So what of one suggested “counterexample,” where spiking the data with many values at the mean (which thereby increases “peakedness”) causes greater kurtosis? Actually, that example just makes the point of my theorems: When spiking the data in this way, the variance is reduced, thus the observations in the tails are more extreme, in terms of number of standard deviations from the mean. And it is observations with large standard deviation from the mean, according to the theorems in my TAS paper, that cause high kurtosis. It’s not the peakedness. Or to put it another way, the reason that the spike increases kurtosis is not because of the spike itself, it is because the spike causes a reduction in the standard deviation, which makes the tails more standard deviations from the mean (i.e., more extreme), which in turn increases the kurtosis. It simply cannot be stated that higher kurtosis implies greater peakedness, because you can have a distribution that is perfectly flat over an arbitrarily high percentage of the data (pick 99.99% for concreteness) with infinite kurtosis. 
(Just mix a uniform with a Cauchy suitably; there are some minor but trivial and unimportant technical details regarding how to make the peak absolutely flat.) By the same construction, high kurtosis can be associated with any shape whatsoever for 99.99% of the central distribution - U-shaped, flat, triangular, multi-modal, etc. There is also a suggestion in this thread that the center of the distribution is important, because throwing out the central data of the Cauchy example in my TAS paper makes the data have low kurtosis. But this is also due to outliers and extremes: In throwing out the central portion, one increases the variance so that the extremes are no longer extreme (in terms of $Z$ values), hence the kurtosis is low. Any supposed "counterexample" actually obeys my theorems. Theorems have no counterexamples; otherwise, they would not be theorems. A more interesting exercise than “spiking” or “deleting the middle” is this: Take the distribution of a random variable $X$ (discrete or continuous, so it includes the case of actual data), and replace the mass/density within one standard deviation of the mean arbitrarily, but keep the mean and standard deviation of the resulting distribution the same as that of $X$. Q: How much change can you make to the kurtosis statistic over all such possible replacements? A: The difference between the maximum and minimum kurtosis values over all such replacements is $\le 0.25. $ The above question and its answer comprise yet another theorem. Anyone want to publish it? I have its proof written down (it’s quite elegant, as well as constructive, identifying the max and min distributions explicitly), but I lack the incentive to submit it as I am now retired. I have also calculated the actual max differences for various distributions of $X$; for example, if $X$ is normal, then the difference between the largest and smallest kurtosis is over all replacements of the central portion is 0.141. 
Hardly a large effect of the center on the kurtosis statistic! On the other hand, if you keep the center fixed, but replace the tails, keeping the mean and standard deviation constant, you can make the kurtosis infinitely large. Thus, the effect on kurtosis of manipulating the center while keeping the tails constant, is $\le 0.25$. On the other hand, the effect on kurtosis of manipulating the tails, while keeping the center constant, is infinite. So, while yes, I agree that spiking a distribution at the mean does increase the kurtosis, I do not find this helpful to answer the question, “What does higher kurtosis tell me about my distribution?” There is a difference between “A implies B” and “B implies A.” Just because all bears are mammals does not imply that all mammals are bears. Just because spiking a distribution increases kurtosis does not imply that increasing kurtosis implies a spike; see the uniform/Cauchy example alluded to above in my answer. It is precisely this faulty logic that caused Pearson to make the peakedness/flatness interpretations in the first place. He saw a family of distributions for which the peakedness/flatness interpretations held, and wrongly generalized. In other words, he observed that a bear is a mammal, and then wrongly inferred that a mammal is a bear. Fisher followed suit forever, and here we are. A case in point: People see this picture of "standard symmetric PDFs" (on Wikipedia at https://en.wikipedia.org/wiki/File:Standard_symmetric_pdfs.svg) and think it generalizes to the “flatness/peakedness” conclusions. Yes, in that family of distributions, the flat distribution has the lower kurtosis and the peaked one has the higher kurtosis. But it is an error to conclude from that picture that high kurtosis implies peaked and low kurtosis implies flat. 
There are other examples of low kurtosis (less than the normal distribution) distributions that are infinitely peaked, and there are examples of infinite kurtosis distributions that are perfectly flat over an arbitrarily large proportion of the observable data. The bear/mammal conundrum also arises in the Finucan conditions, which state (oversimplified) that if tail probability and peak probability increase (losing some mass in between to maintain the standard deviation), then kurtosis increases. This is all fine and good, but you cannot turn the logic around and say that increasing kurtosis implies increasing tail and peak mass (and reducing what is in between). That is precisely the fatal flaw with the sometimes-given interpretation that kurtosis measures the “movement of mass simultaneously to the tails and peak but away from the shoulders." Again, all mammals are not bears. A good counterexample to that interpretation is given here https://math.stackexchange.com/a/2523606/472987 in “counterexample #1, which shows a family of distributions in which the kurtosis increases to infinity, while the mass inside the center stays constant. (There is also a counterexample #2 that has the mass in the center increasing to 1.0 yet the kurtosis decreases to its minimum, so the often-made assertion that kurtosis measures “concentration of mass in the center” is wrong as well.) Many people think that higher kurtosis implies “more probability in the tails.” This is not true; counterexample #1 shows that you can have higher kurtosis with less tail probability when the tails extend. So what does kurtosis measure? It precisely measures tail leverage (which can be called tail weight as well) as amplified through fourth powers, as I stated above with my definition of tail-leverage($m$). I would just like to reiterate that my TAS article was not an opinion piece. It was instead a discussion of mathematical theorems and their consequences. 
There is much additional supportive material in the current post that has come to my attention since writing the TAS article, and I hope readers find it to be helpful for understanding kurtosis.
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?

I. A direct answer to the OP

Answer: It depends on what you mean by “heavy tails.” By some definitions of “heavy tails,” the answer is “no,” as pointed out here and elsewhere.

Why do we care about heavy tails? Because we care about outliers. (Substitute the phrase “rare, extreme observation” if you have a problem with the word “outlier”; I will use the term “outlier” throughout for brevity.) Outliers are interesting from several points of view:

In finance, outlier returns cause much more money to change hands than typical returns (see Taleb’s discussion of black swans).

In hydrology, the outlier flood will cause enormous damage and needs to be planned for.

In statistical process control, outliers indicate “out of control” conditions that warrant immediate investigation and rectification.

In regression analysis, outliers have enormous effects on the least squares fit.

In statistical inference, the degree to which distributions produce outliers has an enormous effect on standard t tests for mean values. Similarly, the degree to which a distribution produces outliers has an enormous effect on the accuracy of the usual estimate of the variance of that distribution.

So for various reasons, there is great interest in outliers in data, and in the degree to which a distribution produces outliers. Notions of heavy-tailedness were therefore developed to characterize outlier-prone processes and data.

Unfortunately, the commonly used definition of “heavy tails,” involving exponential bounds and asymptotes, is too limited in its characterization of outliers and outlier-prone data-generating processes: it requires tails extending to infinity, so it rules out bounded distributions that produce outliers. Further, the standard definition does not even apply to a data set, since all empirical distributions are necessarily bounded.
Here is an alternative class of definitions of “heavy-tailedness,” which I will call “tail-leverage($m$)” to avoid confusion with existing definitions of heavy-tailedness, that addresses this concern.

Definition: Assume absolute moments up to order $m>2$ exist for random variables $X$ and $Y$. Let $U = |(X - \mu_X)/\sigma_X|^m$ and let $V =|(Y - \mu_Y)/\sigma_Y|^m$. If $E(V) > E(U)$, then $Y$ is said to have greater tail-leverage($m$) than $X$.

The mathematical rationale for the definition is as follows: Suppose $E(V) > E(U)$, and let $\mu_U = E(U)$. Draw the pdf (or pmf, in the discrete case, or in the case of an actual data set) of $V$, which is $p_V(v)$. Place a fulcrum at $\mu_U$ on the horizontal axis. Because of the well-known fact that the distribution balances at its mean, the distribution $p_V(v)$ “falls to the right” of the fulcrum at $\mu_U$.

Now, what causes it to “fall to the right”? Is it the concentration of mass less than 1, corresponding to the observations of $Y$ that are within a standard deviation of the mean? Is it the shape of the distribution of $Y$ corresponding to observations that are within a standard deviation of the mean? No, these aspects are to the left of the fulcrum, not to the right. It is the extremes of the distribution (or data) of $Y$, in one or both tails, that produce high positive values of $V$, which cause the “falling to the right.”

To illustrate, consider the following two graphs of discrete distributions. The top distribution has kurtosis = 2.46 (“platykurtic”) and the bottom has kurtosis = 3.45 (“leptokurtic”). Notice that kurtosis is my tail-leverage measure with $m=4$. Both distributions are scaled to a mean of 0.0 and variance of 1.0. Now, consider the distributions of the data values raised to the fourth power, with the red vertical bar indicating the mean of the top distribution: The top distribution balances at the red bar, which locates the kurtosis of the original, untransformed data (2.46).
But the bottom distribution, having larger mean (3.45, the kurtosis of the original, untransformed data), “falls to the right” of the red bar located at 2.46. What causes it to “fall to the right”? Is it greater peakedness? No, because the first distribution is more peaked. Is it greater concentration of mass near the mean? No, because this would make it “fall to the left.” As is apparent from the graph, it is the extreme values that make it “fall to the right.”

BTW, the term “leverage” should now be clear, given the physical representation involving the point of balance. But it is worth noting that, in the characterization of the distribution “falling to the right,” the “tail leverage” measures can legitimately be called measures of “tail weight.” I chose not to do that because the “leverage” term is more precise.

Much has been made of the fact that kurtosis does not correspond directly to the standard definition of “heavy tails.” Of course it doesn’t. Neither does it correspond to any but one of the infinitely many definitions of “tail leverage” I just gave. If you restrict your attention to the case where $m=4$, then an answer to the OP’s question is as follows: Greater tail leverage (using $m=4$ in the definition) does indeed imply greater kurtosis (and conversely). They are identical.

Incidentally, the “leverage” definition applies equally to data as it does to distributions: When you apply the kurtosis formula to the empirical distribution, it gives you the estimate of kurtosis without all the so-called “bias corrections.” (This estimate has been compared to others and is reasonable, often better in terms of accuracy; see “Comparing Measures of Sample Skewness and Kurtosis,” D. N. Joanes and C. A. Gill, Journal of the Royal Statistical Society, Series D (The Statistician), Vol. 47, No. 1 (1998), pp. 183-189.)
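The tail-leverage($m$) computation just described (apply the moment formula to the empirical distribution, with no bias corrections) is easy to carry out. Below is a small Python sketch of my own, not from the original article; the helper name `tail_leverage` is mine. It checks that a bounded beta sample out-leverages a normal sample at $m=4$:

```python
import numpy as np

def tail_leverage(x, m=4):
    # E|(X - mean)/sd|^m estimated from data; m = 4 is the plain
    # empirical kurtosis (no "bias corrections").
    z = (x - np.mean(x)) / np.std(x)
    return np.mean(np.abs(z) ** m)

rng = np.random.default_rng(0)
normal_sample = rng.normal(size=100_000)
beta_sample = rng.beta(1, 0.1, size=100_000)  # bounded support [0, 1]

# Despite its bounded tails, this beta has greater tail-leverage(4),
# i.e. higher kurtosis (about 9.57), than the normal (about 3).
assert tail_leverage(beta_sample) > tail_leverage(normal_sample)
```

Any other $m>2$ works the same way; only the fourth-power case coincides with kurtosis.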
My stated leverage definition also resolves many of the various comments and answers given in response to the OP: Some beta distributions can be more greatly tail-leveraged (even if “thin-tailed” by other measures) than the normal distribution. This implies a greater outlier potential of such distributions than the normal, as described above regarding leverage and the fulcrum, despite the normal distribution having infinite tails and the beta being bounded. Further, uniforms mixed with classical “heavy-tailed” distributions are still “heavy-tailed,” but can have less tail leverage than the normal distribution, provided the mixing probability on the “heavy-tailed” distribution is sufficiently low so that the extremes are very uncommon, and assuming finite moments.

Tail leverage is simply a measure of the extremes (or outliers). It differs from the classic definition of heavy-tailedness, even though it is arguably a viable competitor. It is not perfect; a notable flaw is that it requires finite moments, so quantile-based versions would be useful as well. Such alternative definitions are needed because the classic definition of “heavy tails” is far too limited to characterize the universe of outlier-prone data-generating processes and their resulting data.

II. My paper in The American Statistician

My purpose in writing the paper “Kurtosis as Peakedness, 1905-2014: R.I.P.” was to help people answer the question, “What does higher (or lower) kurtosis tell me about my distribution (or data)?” I suspected the common interpretations (still seen, by the way), “higher kurtosis implies more peaked, lower kurtosis implies more flat,” were wrong, but could not quite put my finger on the reason. And I even wondered whether they had an element of truth, given that Pearson said it, and, even more compelling, that R.A. Fisher repeated it in all revisions of his famous book.
However, I was not able to connect any math to the statement that higher (lower) kurtosis implied greater peakedness (flatness). All the inequalities went in the wrong direction. Then I hit on the main theorem of my paper. Contrary to what has been stated or implied here and elsewhere, my article was not an “opinion” piece; rather, it was a discussion of three mathematical theorems. Yes, The American Statistician (TAS) does often require mathematical proofs. I would not have been able to publish the paper without them. The following three theorems were proven in my paper, although only the second was listed formally as a “Theorem.”

Main Theorem: Let $Z_X = (X - \mu_X)/\sigma_X$ and let $\kappa(X) = E(Z_X^4)$ denote the kurtosis of $X$. Then for any distribution (discrete, continuous or mixed, which includes actual data via their discrete empirical distribution), $E\{Z_X^4 I(|Z_X| > 1)\}\le\kappa(X)\le E\{Z_X^4 I(|Z_X| > 1)\} +1$.

This is a rather trivial theorem to prove, but it has major consequences: It states that the shape of the distribution within a standard deviation of the mean (which ordinarily would be where the “peak” is thought to be located) contributes very little to the kurtosis. Instead, the theorem implies that for all data and distributions, kurtosis must lie within $\pm 0.5$ of $E\{Z_X^4 I(|Z_X| > 1)\} + 0.5$. A very nice visual image of this theorem by user “kjetil b Halvorsen” is given at https://stats.stackexchange.com/a/362745/102879; see my comment that follows as well.

The bound is sharpened in the Appendix of my TAS paper:

Refined Theorem: Assume $X$ is continuous and that the density of $Z_X^2$ is decreasing on [0,1]. Then the “+1” of the main theorem can be sharpened to “+0.5”.

This simply amplifies the point of the main theorem that kurtosis is mostly determined by the tails. More recently, @sextus-empiricus was able to reduce the “$+0.5$” bound to “$+1/3$”; see https://math.stackexchange.com/a/3781761 .
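Because the main theorem covers empirical distributions, it can be checked directly on any data set. A Python sketch of my own (not from the paper), verifying the bound on three quite different samples:

```python
import numpy as np

def kurt_and_tail_part(x):
    # kurtosis and E{Z^4 I(|Z| > 1)} for the empirical distribution of x
    z = (x - np.mean(x)) / np.std(x)
    return np.mean(z ** 4), np.mean(np.where(np.abs(z) > 1, z ** 4, 0.0))

rng = np.random.default_rng(1)
for sample in (rng.normal(size=50_000),
               rng.exponential(size=50_000),
               rng.uniform(size=50_000)):
    kurt, tail_part = kurt_and_tail_part(sample)
    # Main theorem: the region within one sd of the mean adds at most 1,
    # so tail_part <= kurtosis <= tail_part + 1 for any data whatsoever.
    assert tail_part <= kurt <= tail_part + 1
```

The inequality holds identically, not approximately: inside $|Z|\le 1$ we have $Z^4\le 1$, so that region's contribution to the mean of $Z^4$ is between 0 and 1.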
A third theorem proven in my TAS paper states that large kurtosis is mostly determined by (potential) data that are $b$ standard deviations away from the mean, for arbitrary $b$.

Theorem 3: Consider a sequence of random variables $X_i$, $i = 1,2,\dots$, for which $\kappa(X_i) \rightarrow \infty$. Then $E\{Z_i^4I(|Z_i| > b)\}/ \kappa(X_i) \rightarrow 1$, for each $b>0$.

The third theorem states that high kurtosis is mostly determined by the most extreme outliers; i.e., those observations that are $b$ or more standard deviations from the mean. These are mathematical theorems, so there can be no argument with them. Supposed “counterexamples” given in this thread and in other online sources are not counterexamples; after all, a theorem is a theorem, not an opinion.

So what of one suggested “counterexample,” where spiking the data with many values at the mean (which thereby increases “peakedness”) causes greater kurtosis? Actually, that example just makes the point of my theorems: When spiking the data in this way, the variance is reduced, so the observations in the tails are more extreme, in terms of number of standard deviations from the mean. And it is observations many standard deviations from the mean, according to the theorems in my TAS paper, that cause high kurtosis. It’s not the peakedness. Or to put it another way, the reason that the spike increases kurtosis is not the spike itself; it is that the spike causes a reduction in the standard deviation, which makes the tails more standard deviations from the mean (i.e., more extreme), which in turn increases the kurtosis.

It simply cannot be stated that higher kurtosis implies greater peakedness, because you can have a distribution that is perfectly flat over an arbitrarily high percentage of the data (pick 99.99% for concreteness) with infinite kurtosis.
(Just mix a uniform with a Cauchy suitably; there are some minor but trivial and unimportant technical details regarding how to make the peak absolutely flat.) By the same construction, high kurtosis can be associated with any shape whatsoever for 99.99% of the central distribution - U-shaped, flat, triangular, multi-modal, etc.

There is also a suggestion in this thread that the center of the distribution is important, because throwing out the central data of the Cauchy example in my TAS paper makes the data have low kurtosis. But this is also due to outliers and extremes: In throwing out the central portion, one increases the variance so that the extremes are no longer extreme (in terms of $Z$ values), hence the kurtosis is low. Any supposed “counterexample” actually obeys my theorems. Theorems have no counterexamples; otherwise, they would not be theorems.

A more interesting exercise than “spiking” or “deleting the middle” is this: Take the distribution of a random variable $X$ (discrete or continuous, so it includes the case of actual data), and replace the mass/density within one standard deviation of the mean arbitrarily, but keep the mean and standard deviation of the resulting distribution the same as that of $X$.

Q: How much change can you make to the kurtosis statistic over all such possible replacements?

A: The difference between the maximum and minimum kurtosis values over all such replacements is $\le 0.25.$

The above question and its answer comprise yet another theorem. Anyone want to publish it? I have its proof written down (it’s quite elegant, as well as constructive, identifying the max and min distributions explicitly), but I lack the incentive to submit it as I am now retired. I have also calculated the actual max differences for various distributions of $X$; for example, if $X$ is normal, then the difference between the largest and smallest kurtosis over all replacements of the central portion is 0.141.
Hardly a large effect of the center on the kurtosis statistic! On the other hand, if you keep the center fixed, but replace the tails, keeping the mean and standard deviation constant, you can make the kurtosis infinitely large. Thus, the effect on kurtosis of manipulating the center while keeping the tails constant is $\le 0.25$. On the other hand, the effect on kurtosis of manipulating the tails, while keeping the center constant, is infinite.

So, while yes, I agree that spiking a distribution at the mean does increase the kurtosis, I do not find this helpful to answer the question, “What does higher kurtosis tell me about my distribution?” There is a difference between “A implies B” and “B implies A.” Just because all bears are mammals does not imply that all mammals are bears. Just because spiking a distribution increases kurtosis does not imply that increasing kurtosis implies a spike; see the uniform/Cauchy example alluded to above in my answer.

It is precisely this faulty logic that caused Pearson to make the peakedness/flatness interpretations in the first place. He saw a family of distributions for which the peakedness/flatness interpretations held, and wrongly generalized. In other words, he observed that a bear is a mammal, and then wrongly inferred that a mammal is a bear. Fisher followed suit forever, and here we are.

A case in point: People see this picture of “standard symmetric PDFs” (on Wikipedia at https://en.wikipedia.org/wiki/File:Standard_symmetric_pdfs.svg) and think it generalizes to the “flatness/peakedness” conclusions. Yes, in that family of distributions, the flat distribution has the lower kurtosis and the peaked one has the higher kurtosis. But it is an error to conclude from that picture that high kurtosis implies peaked and low kurtosis implies flat.
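That a perfectly flat center is compatible with enormous kurtosis is easy to see in simulation, in the spirit of the uniform/Cauchy mixture mentioned above. A Python sketch of my own (the 99% uniform / 1% Cauchy split is an arbitrary choice for a quick demonstration):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
flat = rng.uniform(-1, 1, size=n)              # perfectly flat "center"
outliers = rng.standard_cauchy(size=n // 100)  # rare, extreme observations
mixed = np.concatenate([flat, outliers])

def kurtosis(x):
    z = (x - np.mean(x)) / np.std(x)
    return np.mean(z ** 4)

# The uniform alone is platykurtic (kurtosis 1.8); mixing in 1% heavy-tailed
# values leaves ~99% of the data perfectly flat yet sends kurtosis far past 3.
assert kurtosis(flat) < 3 < kurtosis(mixed)
```

The empirical kurtosis of `mixed` is driven almost entirely by the handful of extreme $Z$ values, exactly as the theorems predict; the flat 99% contributes almost nothing.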
There are other examples of low-kurtosis distributions (kurtosis less than that of the normal distribution) that are infinitely peaked, and there are examples of infinite-kurtosis distributions that are perfectly flat over an arbitrarily large proportion of the observable data.

The bear/mammal conundrum also arises in the Finucan conditions, which state (oversimplified) that if tail probability and peak probability increase (losing some mass in between to maintain the standard deviation), then kurtosis increases. This is all fine and good, but you cannot turn the logic around and say that increasing kurtosis implies increasing tail and peak mass (and reducing what is in between). That is precisely the fatal flaw with the sometimes-given interpretation that kurtosis measures the “movement of mass simultaneously to the tails and peak but away from the shoulders.” Again, all mammals are not bears.

A good counterexample to that interpretation is given at https://math.stackexchange.com/a/2523606/472987 in “counterexample #1,” which shows a family of distributions in which the kurtosis increases to infinity while the mass inside the center stays constant. (There is also a counterexample #2 that has the mass in the center increasing to 1.0 yet the kurtosis decreasing to its minimum, so the often-made assertion that kurtosis measures “concentration of mass in the center” is wrong as well.) Many people think that higher kurtosis implies “more probability in the tails.” This is not true; counterexample #1 shows that you can have higher kurtosis with less tail probability when the tails extend.

So what does kurtosis measure? It precisely measures tail leverage (which can be called tail weight as well) as amplified through fourth powers, as I stated above with my definition of tail-leverage($m$). I would just like to reiterate that my TAS article was not an opinion piece. It was instead a discussion of mathematical theorems and their consequences.
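The "higher kurtosis with less tail probability" point can be made with a minimal three-point family (my own construction, in the spirit of that counterexample): put mass $p$ split between $\pm 1/\sqrt{p}$ and the rest at 0, so the mean is 0 and the variance is 1 for every $p$. The kurtosis is then exactly $1/p$, which grows as the tail probability $p$ shrinks and the tails extend. A Python sketch:

```python
import numpy as np

def three_point_kurtosis(p):
    # Mass p/2 at each of +/- 1/sqrt(p), mass 1-p at 0:
    # mean 0 and variance 1 by construction, so kurtosis = E[X^4] = 1/p.
    a = 1.0 / np.sqrt(p)
    values = np.array([-a, 0.0, a])
    probs = np.array([p / 2, 1 - p, p / 2])
    var = np.sum(probs * values ** 2)          # equals 1
    return np.sum(probs * values ** 4) / var ** 2

# Tail probability drops from 10% to 0.1%, yet kurtosis rises from 10 to 1000.
assert three_point_kurtosis(0.10) < three_point_kurtosis(0.001)
```

Less probability in the tails, but what little there is sits much farther out in $Z$ units: more tail leverage, hence more kurtosis.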
There is much additional supportive material in the current post that has come to my attention since writing the TAS article, and I hope readers find it to be helpful for understanding kurtosis.
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?
Heavy Tails or "Peakedness"?

Kurtosis is usually thought of as denoting heavy tails; however, many decades ago, statistics students were taught that higher kurtosis implied more "peakedness" versus the normal distribution. The Wikipedia page (suggested in a comment) does note this in saying that higher kurtosis usually comes from (a) more data close to the mean with rare values very far from the mean, or (b) heavy tails in the distribution.

A Thin-Tailed High-Kurtosis Example

Usually, these two situations occur at the same time. However, a simple example shows a light-tailed distribution with high kurtosis. The beta distribution has very light tails: the tails are literally bounded in that they cannot extend past 0 or 1. However, the following $R$ code generates a beta distribution with high kurtosis:

    n.rv <- 10000
    rv <- rbeta(n.rv, 1, 0.1)
    z <- (rv - mean(rv))/sd(rv)   # standardized rv for kurtosis calculation
    kurt <- sum(z^4)/(n.rv-2)     # plenty of debate on the right df; not crucial here

Running this simulation gives a kurtosis of 9 to 10. (The exact value would be 9.566, to three decimal places.)

But What About a Heavy-Tailed Distribution?

You asked, however, about heavy-tailed distributions -- and for some intuition. In general, heavier-tailed distributions will have higher kurtoses.

The Intuition

To intuitively see this, consider two symmetric pdfs $f_X,f_Y$ that are standardized: $E(X)=E(Y)=0$ and ${\rm var}(X)={\rm var}(Y)=1$. Let's also say these densities have support on the whole real line, so $f_X,f_Y>0$ everywhere. Let's assume the contributions toward kurtosis from the centers of the densities are similar: $E(X^4|-k\leq X\leq k)\approx E(Y^4|-k\leq Y\leq k)$ for some finite $k$. Since these distributions both have probability density > 0 in their tails (getting out toward $\pm\infty$), we can see that their kurtoses ($E(X^4),E(Y^4)$) will likely be dominated by the contribution from $X,Y$ approaching $\pm\infty$.
This would not be true if the tails decayed very quickly: quicker than exponentially, and quicker than even $e^{-x^2}$. However, you said this is in comparison to a Gaussian pdf, so we know the Gaussian tails die off like $f_X\propto e^{-x^2}$. Since the heavier-tailed distribution has tails that are thicker (i.e., do not die off as quickly), we know those tails will contribute more to $E(Y^4)$.

Issues

As you can tell (if you read the comments), there are plenty of counterexamples to the general guidelines you are trying to get. Kurtosis is far less well understood than, say, variance. In fact, it is not even clear what is the best estimator for kurtosis.

What is the Correct Estimator?

For small samples, Cramér (1957) suggested replacing $\frac{1}{n-2}$ with $\frac{n^2-2n+3}{(n-1)(n-2)(n-3)}$ and subtracting $\frac{3(n-1)(2n-3)}{n(n-2)(n-3)}\hat\sigma^4$, and Fisher (1973) suggested replacing $\frac{1}{n-2}$ with $\frac{n(n+1)}{(n-1)(n-2)(n-3)}$. (Fisher's justification of unbiasedness under normality, however, is odd for a centered moment which is of most interest for non-normal distributions.)

Contributions from the Center of the Distribution

The center of the distribution can also have a large effect on the kurtosis. For example, consider a power-law variable: a variable having a density with tails decaying on the order of $|x|^{-p}$ (with $p>5$ so that the kurtosis is finite). These are clearly "fat-tailed" since the tails decay slower than $e^{-x^2}$ (and even $e^{-x}$). Despite that, mixtures of uniform and power-law random variables can have kurtoses less than 3 (i.e. negative excess kurtoses).

Variance of Variance?

More recently, I have heard people talk about kurtosis as the "variance of variance" (or "vol of vol" in mathematical finance). That idea makes more sense since many types of data exhibit heteroskedasticity or different regimes with different variances.
For a great example, just look at a historical plot of US unemployment: the numbers reported remained within a relatively tight range until they exploded due to a pandemic and stay-at-home orders. Are the very high unemployment observations something we would typically expect? Or, are they due to a change in the regime of the macroeconomy? Either way, the resulting series has very high kurtosis and the answer for why may affect what we consider to be reasonable modeling assumptions in the future.
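The "variance of variance" reading is easy to make concrete: if each observation is Gaussian but its variance is drawn at random (two regimes, in the spirit of the unemployment example), the marginal distribution has kurtosis well above 3. A Python sketch with a toy two-regime setup of my own (the 90/10 split and the regime standard deviations are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200_000
# 90% "calm" observations (sd 1) and 10% "crisis" observations (sd 5):
sd = np.where(rng.random(n) < 0.9, 1.0, 5.0)
x = rng.normal(scale=sd)  # conditionally normal, marginally heavy-tailed

z = (x - np.mean(x)) / np.std(x)
kurt = np.mean(z ** 4)
# Theoretical kurtosis of this mixture is about 16.5, far above the
# Gaussian value of 3 -- purely from the randomness in the variance.
assert kurt > 3
```

Every observation here is drawn from some normal distribution; the excess kurtosis comes entirely from the variance switching between regimes.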
In comparison with a standard gaussian random variable, does a distribution with heavy tails have hi
Heavy Tails or "Peakedness"? Kurtosis is usually thought of as denoting heavy tails; however, many decades ago, statistics students were taught that higher kurtosis implied more "peakedness" versus th
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis? Heavy Tails or "Peakedness"? Kurtosis is usually thought of as denoting heavy tails; however, many decades ago, statistics students were taught that higher kurtosis implied more "peakedness" versus the normal distribution. The Wikipedia page (suggested in a comment) does note this in saying that higher kurtosis usually comes from (a) more data close to the mean with rare values very far from the mean, or (b) heavy tails in the distribution. A Thin-Tailed High-Kurtosis Example Usually, these two situations occur at the same time. However, a simple example shows a light-tailed distribution with high kurtosis. The beta distribution has very light tails: the tails are literally bounded in that they cannot extend past 0 or 1. However, the following $R$ code generates a beta distribution with high kurtosis: n.rv <- 10000 rv <- rbeta(n.rv, 1, 0.1) z <- (rv - mean(rv))/sd(rv) # standardized rv for kurtosis calculation kurt <- sum(z^4)/(n.rv-2) # plenty of debate on the right df; not crucial here Running this simulation gives a kurtosis of 9 to 10. (The exact value would be 9.566, to three decimal places.) But What About a Heavy-Tailed Distribution? You asked, however, about heavy-tailed distributions -- and for some intuition. In general, heavier-tailed distributions will have higher kurtoses. The Intuition To intuitively see this, consider two symmetric pdfs $f_X,f_Y$ that are standardized: $E(X)=E(Y)=0$ and ${\rm var}(X)={\rm var}(Y)=1$. Let's also say these densities have support on the whole real line, so $f_X,f_Y>0$ everywhere. Let's assume the contributions toward kurtosis from the centers of the densities are similar: $E(X^4|-k\leq X\leq k)\approx E(Y^4|-k\leq Y\leq k)$ for some finite $k$. 
Since these distributions both have probability density > 0 in their tails (getting out toward $\pm\infty$), we can see that their kurtoses ($E(X^4),E(Y^4)$) will likely be dominated by the contribution from $X,Y$ approaching $\pm\infty$. This would not be true if the tails decayed very quickly: quicker than exponentially and quicker than even $e^{-x^2}$. However, you said this is in comparison to a Gaussian pdf, so we know the Gaussian tails die off like $f_X\propto e^{-x^2}$. Since the heavier-tailed distribution has tails that are thicker (i.e., do not die off as quickly), we know those tails will contribute more to $E(Y^4)$. Issues As you can tell (if you read the comments), there are plenty of counterexamples to the general guidelines you are trying to get. Kurtosis is far less well understood than, say, variance. In fact, it is not even clear what is the best estimator for kurtosis. What is the Correct Estimator? For small samples, Cramér (1957) suggested replacing $\frac{1}{n-2}$ with $\frac{n^2-2n+3}{(n-1)(n-2)(n-3)}$ and subtracting $\frac{3(n-1)(2n-3)}{n(n-2)(n-3)}\hat\sigma^4$ and Fisher (1973) suggested replacing $\frac{1}{n-2}$ with $\frac{n(n+1)}{(n-1)(n-2)(n-3)}$. (Fisher's justification of unbiasedness under normality, however, is odd for a centered moment which is of most interest for non-normal distributions.) Contributions from the Center of the Distribution The center of the distribution can also have a large effect on the kurtosis. For example, consider a power-law variable: a variable having a density with tails decaying on the order of $|x|^{-p}$. ($p>5$ so that the kurtosis is finite.) These are clearly "fat-tailed" since the tails decay slower than $e^{-x^2}$ (and even $e^{-x}$). Despite that, mixtures of uniform and power-law random variables can have kurtoses less than 3 (i.e. negative excess kurtoses). Variance of Variance?
More recently, I have heard people talk about kurtosis as the "variance of variance" (or "vol of vol" in mathematical finance). That idea makes more sense since many types of data exhibit heteroskedasticity or different regimes with different variances. For a great example, just look at a historical plot of US unemployment: the numbers reported remained within a relatively tight range until they exploded due to a pandemic and stay-at-home orders. Are the very high unemployment observations something we would typically expect? Or, are they due to a change in the regime of the macroeconomy? Either way, the resulting series has very high kurtosis and the answer for why may affect what we consider to be reasonable modeling assumptions in the future.
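The claim above about uniform/power-law mixtures can be checked with a quick moment calculation. The sketch below uses assumed, illustrative parameters (a Uniform(0,1) component mixed with a Pareto(alpha=6, xm=1) component at weight eps=0.001; these choices are not from the original answer):

```python
import numpy as np

# Mixture: with prob 1-eps draw Uniform(0,1); with prob eps draw Pareto(alpha, xm=1).
# The tails stay power-law (heavy), yet the kurtosis lands below the Gaussian's 3.
eps, alpha = 1e-3, 6.0

unif = np.array([1/2, 1/3, 1/4, 1/5])                         # E[U^k], k = 1..4
pareto = np.array([alpha/(alpha - k) for k in (1, 2, 3, 4)])  # E[P^k], finite since alpha > 4
M1, M2, M3, M4 = (1 - eps)*unif + eps*pareto                  # mixture raw moments

var = M2 - M1**2
mu4 = M4 - 4*M3*M1 + 6*M2*M1**2 - 3*M1**4                     # 4th central moment
kurt = mu4 / var**2
print(kurt)  # ≈ 1.88, well below 3, despite the power-law tail
```

Increasing `eps` feeds more weight into the Pareto component and pushes the kurtosis back up, which is the scaling point made in the text.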
18,269
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?
If you go with a formal definition, such as one in Wikipedia, then the tails must be heavier than exponential distribution. Exponential distribution's excess kurtosis is 6. Student t distribution's excess kurtosis goes from infinite to zero as the degrees of freedom go from 4 to infinity, and Student t converges to normal. Also, some people, myself included, use a much simpler definition: positive excess kurtosis. So, the answer is yes, excess kurtosis will be positive for heavy tailed distributions. I can't say whether it is possible to construct a distribution that would satisfy formal requirements of heavy tailed distribution and has negative excess kurtosis. If it is possible, I bet it would be a purely theoretical construct that nobody uses to model heavy tails anyway.
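The Student t statement can be verified from its closed-form raw moments, valid for df > 4: $E[T^2]=\nu/(\nu-2)$ and $E[T^4]=3\nu^2/((\nu-2)(\nu-4))$, which together give excess kurtosis $6/(\nu-4)$. A quick check:

```python
# Excess kurtosis of Student t from its raw moments (requires df > 4):
#   E[T^2] = df/(df-2),   E[T^4] = 3*df^2/((df-2)*(df-4))
for df in (5.0, 10.0, 30.0, 1000.0):
    m2 = df / (df - 2)
    m4 = 3 * df**2 / ((df - 2) * (df - 4))
    excess = m4 / m2**2 - 3
    print(df, excess, 6 / (df - 4))  # the two columns agree: 6, 1, ~0.23, ~0.006
```

As df grows the excess kurtosis falls to zero, matching the convergence to the normal mentioned above.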
18,270
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis?
In comparison with a standard gaussian random variable, does a distribution with heavy tails have higher kurtosis? A short and simple answer: It is not necessary for a distribution with heavy tails to have a higher kurtosis than a standard gaussian random variable. (One exception is when you define heavy tails as the distribution being leptokurtic.) Heavy tails defined in terms of the rate of decrease to infinity Many definitions of heavy tails relate to the rate at which the tails of a distribution (with infinite support) fall off to zero. For instance, Wikipedia: "heavy-tailed distributions are probability distributions whose tails are not exponentially bounded". For these types of definitions it is the case that: if you scale the weight of the tails, (e.g. by mixing with another distribution with less dominant tails), then the tails will still have the same rate and limiting behavior. If a distribution has finite kurtosis, then it can be any value independent of the type of tails (any value above 1, which is the limit for all distributions). Heavy or not, the type of tail does not dictate some minimum kurtosis (except when it is infinite or undefined). Say, if some heavy tail distribution has kurtosis x>3, then you can 'decrease it' by mixing it with a non-heavy tail distribution that has kurtosis<3 (but the tails still remain heavy, they are only scaled with a factor). Only when you have infinite kurtosis do these tails matter (i.e., you cannot remove the infinity by diluting the heavy tail distribution by mixing with another distribution). Heavy tails defined in terms of kurtosis or other moments Several other answers have mentioned a definition of tails in terms of moments. In that case the above reasoning does not apply. Some of those answers define a heavy tail in terms of 'kurtosis > 3' in which case the question becomes a tautology (as whuber noted in the comments). 
However, the question still remains whether a distribution with a heavy tail (when it is defined for another higher order moment instead of the kurtosis) must have a higher kurtosis as well. In this q&a here it is shown that a higher/lower kurtosis need not mean that the other moments are correspondingly higher/lower. A distribution similar to the one in that answer, with approximately $3.4<a<4.0$, will have a higher 6th standardized moment, but lower kurtosis, in comparison to the normal distribution. $$f(x,a) = \begin{cases} 0.0005 & \text{if} & x = -a \\ 0.2495 & \text{if} & x = -1 \\ 0.5000 & \text{if} & x = 0 \\ 0.2495 & \text{if} & x = 1 \\ 0.0005 & \text{if} & x = a \\ 0 & \text{otherwise} \end{cases}$$
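The standardized moments of this 5-point construction can be checked directly. (Note: for these exact probabilities, the window where the 6th standardized moment exceeds the normal's value of 15 while the kurtosis stays below 3 comes out near $3.4<a<4.0$; $a=3.7$ is used below.)

```python
import numpy as np

# Standardized 4th and 6th moments of the 5-point distribution f(x, a) above.
def std_moments(a):
    x = np.array([-a, -1.0, 0.0, 1.0, a])
    p = np.array([0.0005, 0.2495, 0.5, 0.2495, 0.0005])
    var = np.sum(p * x**2)              # mean is 0 by symmetry
    kurt = np.sum(p * x**4) / var**2    # normal value: 3
    m6 = np.sum(p * x**6) / var**3      # normal value: 15
    return kurt, m6

kurt, m6 = std_moments(3.7)
print(kurt, m6)  # ≈ 2.61 and ≈ 22.7: lower kurtosis, higher 6th moment
```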
18,271
Predictive models: statistics can't possibly beat machine learning? [closed]
Statistical modeling is different from machine learning. For example, a linear regression is both a statistical model and a machine learning model. So if you compare a linear regression to a random forest, you’re just comparing a simpler machine learning model to a more complicated one. You’re not comparing a statistical model to a machine learning model. Statistical modeling provides more than interpretation; it actually gives a model of some population parameter. It depends on a large framework of mathematics and theory, which allows for formulas for things like the variance of coefficients, variance of predictions, and hypothesis testing. The potential yield of statistical modeling is much greater than machine learning, because you can make strong statements about population parameters instead of just measuring error on holdout, but it’s considerably more difficult to approach a problem with a statistical model.
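As a concrete (made-up) illustration of that extra yield: the linear-model framework hands you closed-form standard errors for the coefficients, $\widehat{\operatorname{Var}}(\hat\beta)=\hat\sigma^2(X^\top X)^{-1}$, not just point predictions. A minimal numpy sketch on simulated data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one regressor
beta_true = np.array([1.0, 2.0])
y = X @ beta_true + rng.normal(scale=0.5, size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta_hat = XtX_inv @ X.T @ y                  # OLS point estimates
resid = y - X @ beta_hat
sigma2_hat = resid @ resid / (n - 2)          # unbiased error-variance estimate
se = np.sqrt(sigma2_hat * np.diag(XtX_inv))   # standard errors of the coefficients
print(beta_hat, se)  # estimates near (1, 2); standard errors around 0.02
```

Those standard errors are what feed hypothesis tests and confidence intervals about the population parameters, which a pure holdout-error comparison never gives you.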
18,272
Predictive models: statistics can't possibly beat machine learning? [closed]
It's wrong to state the question the way you worded it. For instance, a significant chunk of machine learning can be called statistical learning. So, your comparison is like apples vs. fruit tarts. However, I'll go with the way you framed it, and claim the following: when it comes to prediction nothing can be done without some form of statistics because prediction inherently has randomness (uncertainty) in it. Consider this: despite the huge success of machine learning in some applications, it has absolutely nothing to show off in asset price prediction. Nothing at all. Why? Because in most developed liquid markets asset prices are inherently stochastic. You can run machine learning all day long to observe and learn about radioactive decay of atoms, and it will never be able to predict the next atom's decay time, simply because it is random. As an aspiring statistician it would be foolish on your part to not master machine learning, because it's one of the hottest applications of statistics, unless, of course, you know for sure that you are going to academia. Anyone who's likely to go work in the industry needs to master ML. There is no animosity or competition between statistics and ML crowds at all. In fact, if you like programming you'll feel at home in the ML field.
18,273
Predictive models: statistics can't possibly beat machine learning? [closed]
Generally not, but potentially yes under misspecification. The issue you are looking for is called admissibility. A decision is admissible if there is no less risky way to calculate it. All Bayesian solutions are admissible and non-Bayesian solutions are admissible to the extent that they either match a Bayesian solution in every sample or at the limit. An admissible Frequentist or Bayesian solution will always beat an ML solution unless it is also admissible. With that said, there are some practical remarks that make this statement true but vacuous. First, the prior for the Bayesian option has to be your real prior and not some prior distribution used to make an editor at a journal happy. Second, many Frequentist solutions are inadmissible and a shrinkage estimator should have been used instead of the standard solution. A lot of people are unaware of Stein's lemma and its implications for out of sample error. Finally, ML can be a bit more robust, in many cases, to misspecification error. When you move into decision trees and their cousins the forests, you are not using a similar methodology unless you are also using something similar to a Bayes net. A graph solution contains a substantial amount of implicit information in it, particularly a directed graph. Whenever you add information to a probabilistic or statistical process you reduce the variability of the outcome and change what would be considered admissible. If you look at machine learning from a composition of functions perspective, it just becomes a statistical solution but using approximations to make the solution tractable. For Bayesian solutions, MCMC saves unbelievable amounts of time as does gradient descent for many ML problems. If you either had to construct an exact posterior to integrate or use brute force on many ML problems, the solar system would have died its heat death before you got an answer. 
My guess is that you have a misspecified model for those using statistics, or inappropriate statistics. I taught a lecture where I proved newborns will float out windows if not appropriately swaddled and where a Bayesian method so radically outperformed a Frequentist method on a multinomial choice that the Frequentist method broke even, in expectation, while the Bayesian method doubled the participants' money. Now I abused statistics in the former and took advantage of the inadmissibility of the Frequentist estimator in the latter, but a naive user of statistics could easily do what I did. I just made them extreme to make the examples obvious, but I used absolutely real data. Random forests are consistent estimators and they seem to resemble certain Bayesian processes. Because of the linkage to kernel estimators, they may be quite close. If you see a material difference in performance between solution types, then there is something in the underlying problem that you are misunderstanding and if the problem holds any importance, then you really need to look for the source of the difference as it may also be the case that all models are misspecified.
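The inadmissibility point can be seen numerically. Below is a sketch of Stein's phenomenon with made-up parameters: in dimension $p\ge3$, the (positive-part) James-Stein shrinkage estimator has lower total squared-error risk than the MLE $\hat\theta = X$:

```python
import numpy as np

rng = np.random.default_rng(1)
p, trials = 10, 5000
theta = np.full(p, 0.5)                            # arbitrary true mean vector

X = rng.normal(loc=theta, size=(trials, p))        # one draw X ~ N(theta, I_p) per trial
shrink = np.maximum(0.0, 1 - (p - 2) / np.sum(X**2, axis=1))
JS = shrink[:, None] * X                           # positive-part James-Stein estimator

mle_risk = np.mean(np.sum((X - theta)**2, axis=1))   # ≈ p = 10
js_risk = np.mean(np.sum((JS - theta)**2, axis=1))   # markedly smaller here
print(mle_risk, js_risk)
```

The MLE's risk is dimension-independent of the truth (always about p), while the shrinkage estimator's risk is strictly lower for every theta, which is exactly what inadmissibility means.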
18,274
Predictive models: statistics can't possibly beat machine learning? [closed]
A lot of machine learning might not be that different from p-hacking, for at least some purposes. If you test every possible model to find the one that has the highest prediction accuracy (historical prediction or out-group prediction) on the basis of historical data, this does not necessarily mean that the results will help to understand what's going on. However, possibly it will find possible relationships that may inform a hypothesis. Motivating specific hypotheses and then testing them using statistical methods can certainly be similarly p-hacked (or similar) as well. But the point is that if the criterion is "highest prediction accuracy based on historical data", then there is a high risk of being overconfident in some model that one does not understand, without actually having any idea of what drove those historical results and/or whether they may be informative for the future.
18,275
Can one (theoretically) train a neural network with fewer training samples than weights?
People do that all the time with large networks. For example, the famous AlexNet network has about 60 million parameters, while the ImageNet ILSVRC it was originally trained on has only 1.2 million images. The reason you don't fit a 5-parameter polynomial to 4 data points is that you can always find a function that exactly fits your data points but does nonsensical things elsewhere. Well, as was noted recently, AlexNet and similar networks can fit arbitrary random labels applied to ImageNet and simply memorize them all, presumably because they have so many more parameters than training points. But something about the priors of the network combined with the stochastic gradient descent optimization process means that, in practice, these models can still generalize to new data points well when you give them real labels. We still don't really understand why that happens.
18,276
Can one (theoretically) train a neural network with fewer training samples than weights?
Underdetermined systems are only underdetermined if you impose no other constraints than the data. Sticking with your example, fitting a degree-4 polynomial (5 coefficients) to 4 data points means you have one degree of freedom not constrained by the data, which leaves you with a line (in coefficient space) of equally good solutions. However, you can use various regularization techniques to make the problem tractable. For example, by imposing a penalty on the L2-norm (i.e. the sum of squares) of the coefficients, you ensure that there is always one unique solution with the highest fitness. Regularization techniques also exist for neural networks, so the short answer to your question is 'yes, you can'. Of particular interest is a technique called "dropout", in which, for each update of the weights, you randomly 'drop' a certain subset of nodes from the network. That is, for that particular iteration of the learning algorithm, you pretend these nodes don't exist. Without dropout, the net can learn very complex representations of the input that depend on all the nodes working together just right. Such representations are likely to 'memorize' the training data, rather than finding patterns that generalize. Dropout ensures that the network cannot use all nodes at once to fit the training data; it has to be able to represent the data well even when some nodes are missing, and so the representations it comes up with are more robust. Also note that when using dropout, the degrees of freedom at any given point during training can actually be smaller than the number of training samples, even though in total you're learning more weights than training samples.
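A minimal numpy sketch of the L2-penalty point, with invented data: four points, five polynomial coefficients, yet the ridge-regularized problem has a single well-defined minimizer that still fits the points essentially exactly:

```python
import numpy as np

x = np.array([-1.0, 0.0, 1.0, 2.0])
y = np.array([1.0, 0.0, 1.0, 5.0])
X = np.vander(x, 5)            # 4x5 design: degree-4 polynomial, 5 unknowns

lam = 1e-6                     # small L2 penalty on the coefficients
w = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)   # unique ridge solution
print(w, np.abs(X @ w - y).max())   # residuals are tiny despite p > n
```

Without the `lam * np.eye(5)` term the normal equations are singular (infinitely many exact fits); the penalty selects the one with smallest coefficient norm.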
Can one (theoretically) train a neural network with fewer training samples than weights?
Underdetermined systems are only underdetermined if you impose no other constraints than the data. Sticking with your example, fitting a 4-deg polynomial to 4 data points means you have one degree of freedom not constrained by the data, which leaves you with a line (in coefficient space) of equally good solutions. However, you can use various regularization techniques to make the problem tractable. For example, by imposing a penalty on the L2-norm (i.e. the sum of squares) of the coefficients, you ensure that there is always one unique solution with the highest fitness. Regularization techniques also exist for neural networks, so the short answer to your question is 'yes, you can'. Of particular interest is a technique called "dropout", in which, for each update of the weights, you randomly 'drop' a certain subset of nodes from the network. That is, for that particular iteration of the learning algorithm, you pretend these nodes don't exist. Without dropout, the net can learn very complex representations of the input that depend on all the nodes working together just right. Such representations are likely to 'memorize' the training data, rather than finding patterns that generalize. Dropout ensures that the network cannot use all nodes at once to fit the training data; it has to be able to represent the data well even when some nodes are missing, and so the representations it comes up with are more robust. Also note that when using dropout, the degrees of freedom at any given point during training can actually be smaller than the number of training samples, even though in total you're learning more weights than training samples.
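The 4-point / degree-4 example can be checked directly; a minimal numpy sketch (toy numbers, not from the original question) showing that an L2 penalty turns the underdetermined interpolation into a problem with a unique solution:

```python
import numpy as np

# Four data points, degree-4 polynomial => 5 coefficients: underdetermined.
x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([1.0, 2.0, 0.0, 5.0])
X = np.vander(x, N=5, increasing=True)     # 4x5 design matrix, rank 4

# X'X is singular, so plain least squares has a whole line of exact solutions.
# Adding lam * ||b||^2 makes the objective strictly convex => unique minimizer.
lam = 1e-6
b = np.linalg.solve(X.T @ X + lam * np.eye(5), X.T @ y)

# For small lam this picks (approximately) the minimum-norm interpolant:
b_mn = np.linalg.lstsq(X, y, rcond=None)[0]
print(np.max(np.abs(X @ b - y)))           # near-zero residual: still fits all 4 points
```

The penalty does not stop the polynomial from fitting the data; it just selects one of the equally good interpolants, which is the sense in which regularization makes the underdetermined problem tractable.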
18,277
Should I use an offset for my Poisson GLM?
There are several issues here: You need to use the observed counts as your response variable. You should not use the densities (g_den). If the observed counts are from differing areas, you need to take the log of those areas as a new variable: larea = log(area) You can control for the differing areas for the observations in two different ways: By using larea as an offset. This will make your response actually a rate (even though what is listed on the left hand side of your model is a count). By using larea as a covariate. This will control for the differing areas, but will not make your response equivalent to a rate. This is a more flexible approach that will let you assess if increases in larea have an increasing or decreasing effect on the count (i.e., whether the slope is less than or greater than 1). There is more information about these issues in the following CV threads: When to use an offset in a Poisson regression? In a Poisson model, what is the difference between using time as a covariate or an offset?
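To make the offset mechanics concrete, here is a small self-contained numpy sketch (simulated data with made-up coefficients, not the poster's fish counts) fitting a Poisson log-linear model with log(area) as an offset by Newton-Raphson:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
area = rng.uniform(0.5, 5.0, n)       # area surveyed per observation
depth = rng.normal(size=n)            # a made-up covariate
b_true = np.array([0.2, 0.7])

# Expected count = area * exp(b0 + b1*depth): doubling the area doubles the count.
y = rng.poisson(area * np.exp(b_true[0] + b_true[1] * depth))

X = np.column_stack([np.ones(n), depth])
off = np.log(area)                    # the offset enters the linear predictor with slope fixed at 1

# Newton-Raphson for the Poisson log-likelihood, started from a log-linearized fit
b = np.linalg.lstsq(X, np.log(y + 0.5) - off, rcond=None)[0]
for _ in range(25):
    mu = np.exp(X @ b + off)
    grad = X.T @ (y - mu)                       # score vector
    hess = X.T @ (X * mu[:, None])              # Fisher information
    b = b + np.linalg.solve(hess, grad)

print(b)   # close to b_true
```

Because the offset's coefficient is fixed at 1, the fitted exp(b0 + b1*depth) is a rate (count per unit area); entering log(area) as an ordinary covariate would instead estimate that coefficient freely.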
18,278
Should I use an offset for my Poisson GLM?
It looks like you divided the fish counts by the volume (or perhaps area) of water surveyed. In that case an offset is indeed appropriate; you should use the log of whatever you divided by. Perhaps model1 <- glm(g_den ~ method + site + depth + offset(log(area)), poisson) (edited from earlier incorrect version, missing the log) The reason for the error message is that the Poisson distribution is normally integer-valued but the response wasn't an integer. This changes once an offset is present; (response/offset) must be an integer (which of course it is, assuming the original counts were integers).
18,279
Should I use an offset for my Poisson GLM?
If you are going to model using the Poisson you have to have integer values for your response variable. You then have two options: Use area or some other suitable denominator as an offset; this would usually need to be logged first. Or include area (or similar) as a predictor variable; again this would usually be entered as a log, because you are modelling the log counts. If you use the offset approach you are saying that if you double the area you would expect to get double the count. If you use the predictor approach you are saying that multiplying the area multiplies the counts, but not necessarily by the same factor. It is your call.
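The offset-versus-covariate distinction can be checked numerically. In this hypothetical numpy sketch (simulated data, made-up coefficients), counts are generated exactly proportional to area, and fitting log(area) as a free predictor recovers a slope near 1, the value an offset would impose:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000
area = rng.uniform(0.5, 5.0, n)
depth = rng.normal(size=n)
y = rng.poisson(area * np.exp(0.2 + 0.7 * depth))   # counts truly proportional to area

# log(area) entered as an ordinary predictor with a free slope
X = np.column_stack([np.ones(n), depth, np.log(area)])
b = np.linalg.lstsq(X, np.log(y + 0.5), rcond=None)[0]   # rough starting values
for _ in range(25):                                      # Newton-Raphson
    mu = np.exp(X @ b)
    b = b + np.linalg.solve(X.T @ (X * mu[:, None]), X.T @ (y - mu))

print(b[2])   # slope on log(area) lands near 1, so fixing it with an offset is defensible
```

If the estimated slope on log(area) were clearly different from 1, that would argue for the covariate formulation over the offset.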
18,280
How to find regression coefficients $\beta$ in ridge regression?
There are two formulations for the ridge problem. The first one is $$\boldsymbol{\beta}_R = \operatorname*{argmin}_{\boldsymbol{\beta}} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)^{\prime} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)$$ subject to $$\sum_{j} \beta_j^2 \leq s. $$ This formulation shows the size constraint on the regression coefficients. Note what this constraint implies; we are forcing the coefficients to lie in a ball around the origin with radius $\sqrt{s}$. The second formulation is exactly your problem $$\boldsymbol{\beta}_R = \operatorname*{argmin}_{\boldsymbol{\beta}} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right)^{\prime} \left( \mathbf{y} - \mathbf{X} \boldsymbol{\beta} \right) + \lambda \sum\beta_j^2 $$ which may be viewed as the Lagrange multiplier formulation. Note that here $\lambda$ is a tuning parameter and larger values of it will lead to greater shrinkage. You may proceed to differentiate the expression with respect to $\boldsymbol{\beta}$ and obtain the well-known ridge estimator $$\boldsymbol{\beta}_{R} = \left( \mathbf{X}^{\prime} \mathbf{X} + \lambda \mathbf{I} \right)^{-1} \mathbf{X}^{\prime} \mathbf{y} \tag{1}$$ The two formulations are completely equivalent, since there is a one-to-one correspondence between $s$ and $\lambda$. Let me elaborate a bit on that. Imagine that you are in the ideal orthogonal case, $\mathbf{X}^{\prime} \mathbf{X} = \mathbf{I}$. This is a highly simplified and unrealistic situation but we can investigate the estimator a little more closely so bear with me. Consider what happens to equation (1). The ridge estimator reduces to $$\boldsymbol{\beta}_R = \left( \mathbf{I} + \lambda \mathbf{I} \right)^{-1} \mathbf{X}^{\prime} \mathbf{y} = \left( \mathbf{I} + \lambda \mathbf{I} \right)^{-1} \boldsymbol{\beta}_{OLS} $$ as in the orthogonal case the OLS estimator is given by $\boldsymbol{\beta}_{OLS} = \mathbf{X}^{\prime} \mathbf{y}$. 
Looking at this component-wise now we obtain $$\beta_R = \frac{\beta_{OLS}}{1+\lambda} \tag{2}$$ Notice then that now the shrinkage is constant for all coefficients. This might not hold in the general case and indeed it can be shown that the shrinkages will differ widely if there are degeneracies in the $\mathbf{X}^{\prime} \mathbf{X}$ matrix. But let's return to the constrained optimization problem. By the KKT theory, a necessary condition for optimality is $$\lambda \left( \sum \beta_{R,j} ^2 -s \right) = 0$$ so either $\lambda = 0$ or $\sum \beta_{R,j} ^2 -s = 0$ (in this case we say that the constraint is binding). If $\lambda = 0$ then there is no penalty and we are back in the regular OLS situation. Suppose then that the constraint is binding and we are in the second situation. Using the formula in (2), we then have $$ s = \sum \beta_{R,j}^2 = \frac{1}{\left(1 + \lambda \right)^2} \sum \beta_{OLS,j}^2$$ whence we obtain $$\lambda = \sqrt{\frac{\sum \beta_{OLS,j} ^2}{s}} - 1 $$ the one-to-one relationship previously claimed. I expect this is harder to establish in the non-orthogonal case but the result carries regardless. Look again at (2) though and you'll see we are still missing the $\lambda$. To get an optimal value for it, you may either use cross-validation or look at the ridge trace. The latter method involves constructing a sequence of $\lambda$ in (0,1) and looking how the estimates change. You then select the $\lambda$ that stabilizes them. This method was suggested in the second of the references below by the way and is the oldest one. References Hoerl, Arthur E., and Robert W. Kennard. "Ridge regression: Biased estimation for nonorthogonal problems." Technometrics 12.1 (1970): 55-67. Hoerl, Arthur E., and Robert W. Kennard. "Ridge regression: applications to nonorthogonal problems." Technometrics 12.1 (1970): 69-82.
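The closed forms above are easy to verify numerically. A brief numpy sketch (toy data) computing the ridge estimator of equation (1), checking the component-wise shrinkage of equation (2) in the orthogonal case, and recovering the one-to-one map between $\lambda$ and the constraint radius $s$:

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.normal(size=(100, 3)))   # X with orthonormal columns: X'X = I
X, y = Q, rng.normal(size=100)
lam = 2.0

b_ols = X.T @ y                                                # OLS when X'X = I
b_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)  # equation (1)

print(np.allclose(b_ridge, b_ols / (1 + lam)))                 # equation (2): True

# Constraint radius implied by lambda, and the inverse map lambda = sqrt(sum b_OLS^2 / s) - 1
s = np.sum(b_ridge ** 2)
lam_back = np.sqrt(np.sum(b_ols ** 2) / s) - 1
print(lam_back)                                                # recovers lam = 2.0
```

In the non-orthogonal case the shrinkage factors differ across the eigendirections of $\mathbf{X}^{\prime}\mathbf{X}$, as the answer notes, but equation (1) is computed the same way.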
18,281
How to find regression coefficients $\beta$ in ridge regression?
My book Regression Modeling Strategies delves into the use of effective AIC for choosing $\lambda$. This comes from the penalized log likelihood and the effective degrees of freedom, the latter being a function of how much variances of $\hat{\beta}$ are reduced by penalization. A presentation about this is here. The R rms package pentrace finds $\lambda$ that optimizes effective AIC, and also allows for multiple penalty parameters (e.g., one for linear main effects, one for nonlinear main effects, one for linear interaction effects, and one for nonlinear interaction effects).
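As a rough illustration of the effective-AIC idea (a hypothetical numpy sketch for a Gaussian ridge model, not the rms::pentrace implementation): the effective degrees of freedom are the trace of the ridge hat matrix, and the criterion can then be scanned over $\lambda$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 60, 8
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

def ridge_aic(lam):
    """AIC-like criterion for a ridge fit, using effective degrees of freedom."""
    H = X @ np.linalg.solve(X.T @ X + lam * np.eye(p), X.T)  # ridge hat matrix
    resid = y - H @ y
    edf = np.trace(H)            # shrinks from p toward 0 as lam grows
    return n * np.log(resid @ resid / n) + 2 * edf, edf

lams = np.exp(np.linspace(-4.0, 4.0, 50))
best = lams[int(np.argmin([ridge_aic(l)[0] for l in lams]))]
print(best)                      # the lambda minimizing the criterion on this grid
```

This uses a simple Gaussian log-likelihood; the book's version works with the penalized log likelihood, but the role of the effective degrees of freedom is the same.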
18,282
How to find regression coefficients $\beta$ in ridge regression?
I don't do it analytically, but rather numerically. I usually plot RMSE vs. λ as such: Figure 1. RMSE and the constant λ or alpha.
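A plot like that can be generated with a plain cross-validation loop; a minimal numpy sketch (simulated data) computing CV RMSE over a grid of λ for ridge regression:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 10
X = rng.normal(size=(n, p))
y = X @ rng.normal(size=p) + rng.normal(size=n)

def cv_rmse(lam, k=5):
    """k-fold cross-validated RMSE of a ridge fit with penalty lam."""
    folds = np.array_split(np.arange(n), k)
    mse = []
    for f in folds:
        tr = np.setdiff1d(np.arange(n), f)
        b = np.linalg.solve(X[tr].T @ X[tr] + lam * np.eye(p), X[tr].T @ y[tr])
        mse.append(np.mean((y[f] - X[f] @ b) ** 2))
    return float(np.sqrt(np.mean(mse)))

lams = np.exp(np.linspace(-6.0, 6.0, 25))
rmses = [cv_rmse(l) for l in lams]
print(lams[int(np.argmin(rmses))])   # the minimum you would read off the plot
```

Plotting `rmses` against `lams` (log scale) reproduces the curve described: flat near zero penalty, rising steeply once λ over-shrinks the coefficients.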
18,283
Steps done in factor analysis compared to steps done in PCA
This answer is to show concrete computational similarities and differences between PCA and Factor analysis. For general theoretical differences between them, see questions/answers 1, 2, 3, 4, 5. Below I will do, step by step, Principal Component analysis (PCA) of iris data ("setosa" species only) and then will do Factor analysis of the same data. Factor analysis (FA) will be done by Iterative principal axis (PAF) method which is based on PCA approach and thus makes one able to compare PCA and FA step-by-step. Iris data (setosa only): id SLength SWidth PLength PWidth species 1 5.1 3.5 1.4 .2 setosa 2 4.9 3.0 1.4 .2 setosa 3 4.7 3.2 1.3 .2 setosa 4 4.6 3.1 1.5 .2 setosa 5 5.0 3.6 1.4 .2 setosa 6 5.4 3.9 1.7 .4 setosa 7 4.6 3.4 1.4 .3 setosa 8 5.0 3.4 1.5 .2 setosa 9 4.4 2.9 1.4 .2 setosa 10 4.9 3.1 1.5 .1 setosa 11 5.4 3.7 1.5 .2 setosa 12 4.8 3.4 1.6 .2 setosa 13 4.8 3.0 1.4 .1 setosa 14 4.3 3.0 1.1 .1 setosa 15 5.8 4.0 1.2 .2 setosa 16 5.7 4.4 1.5 .4 setosa 17 5.4 3.9 1.3 .4 setosa 18 5.1 3.5 1.4 .3 setosa 19 5.7 3.8 1.7 .3 setosa 20 5.1 3.8 1.5 .3 setosa 21 5.4 3.4 1.7 .2 setosa 22 5.1 3.7 1.5 .4 setosa 23 4.6 3.6 1.0 .2 setosa 24 5.1 3.3 1.7 .5 setosa 25 4.8 3.4 1.9 .2 setosa 26 5.0 3.0 1.6 .2 setosa 27 5.0 3.4 1.6 .4 setosa 28 5.2 3.5 1.5 .2 setosa 29 5.2 3.4 1.4 .2 setosa 30 4.7 3.2 1.6 .2 setosa 31 4.8 3.1 1.6 .2 setosa 32 5.4 3.4 1.5 .4 setosa 33 5.2 4.1 1.5 .1 setosa 34 5.5 4.2 1.4 .2 setosa 35 4.9 3.1 1.5 .2 setosa 36 5.0 3.2 1.2 .2 setosa 37 5.5 3.5 1.3 .2 setosa 38 4.9 3.6 1.4 .1 setosa 39 4.4 3.0 1.3 .2 setosa 40 5.1 3.4 1.5 .2 setosa 41 5.0 3.5 1.3 .3 setosa 42 4.5 2.3 1.3 .3 setosa 43 4.4 3.2 1.3 .2 setosa 44 5.0 3.5 1.6 .6 setosa 45 5.1 3.8 1.9 .4 setosa 46 4.8 3.0 1.4 .3 setosa 47 5.1 3.8 1.6 .2 setosa 48 4.6 3.2 1.4 .2 setosa 49 5.3 3.7 1.5 .2 setosa 50 5.0 3.3 1.4 .2 setosa We have 4 numeric variables to include in our analyses: SLength SWidth PLength PWidth, and the analyses will be based on covariances, which is the same as to say that we analyse 
centered variables. (If we chose to analyse correlations that would be analysing standardized variables. Analysis based on correlations produce different results than analysis based on covariances.) I will not display the centered data. Let's call these data matrix X. PCA steps: Step 0. Compute centered variables X and covariance matrix S. Covariances S (= X'*X/(n-1) matrix: see https://stats.stackexchange.com/a/22520/3277) .12424898 .09921633 .01635510 .01033061 .09921633 .14368980 .01169796 .00929796 .01635510 .01169796 .03015918 .00606939 .01033061 .00929796 .00606939 .01110612 Step 1.1. Decompose data X or matrix S to get eigenvalues and right eigenvectors. You may use svd or eigen decomposition (see https://stats.stackexchange.com/q/79043/3277) Eigenvalues L (component variances) and the proportion of overall variance explained L Prop PC1 .2364556901 .7647237023 PC2 .0369187324 .1193992401 PC3 .0267963986 .0866624997 PC4 .0090332606 .0292145579 Eigenvectors V (cosines of rotation of variables into components) PC1 PC2 PC3 PC4 SLength .6690784044 .5978840102 -.4399627716 -.0360771206 SWidth .7341478283 -.6206734170 .2746074698 -.0195502716 PLength .0965438987 .4900555922 .8324494972 -.2399012853 PWidth .0635635941 .1309379098 .1950675055 .9699296890 Step 1.2. Decide on the number M of first PCs you want to retain. You may decide it now or later on - no difference, because in PCA values of components do not depend on M. Let's M=2. So, leave only 2 first eigenvalues and 2 first eigenvector columns. Step 2. Compute loadings A. May skip if you don't need to interpret PCs anyhow. Loadings are eigenvectors normalized to respective eigenvalues: A value = V value * sqrt(L value) Loadings are the covariances between variables and components. 
Loadings A PC1 PC2 SLength .32535081 .11487892 SWidth .35699193 -.11925773 PLength .04694612 .09416050 PWidth .03090888 .02515873 Sums of squares in columns of A are components' variances, the eigenvalues Standardized (rescaled) loadings. St. loading is Loading / sqrt(Variable's variance); these loadings are computed if you analyse covariances, and are suitable for interpretation of PCs (if you analyse correlations, A are already standardized). PC1 PC2 SLength .92300804 .32590717 SWidth .94177127 -.31461076 PLength .27032731 .54219930 PWidth .29329327 .23873031 Step 3. Compute component scores (values of PCs). Regression coefficients B to compute Standardized component scores are: B = A*diag(1/L) = inv(S)*A B PC1 PC2 SLength 1.375948338 3.111670112 SWidth 1.509762499 -3.230276923 PLength .198540883 2.550480216 PWidth .130717448 .681462580 Standardized component scores (having variances 1) = X*B PC1 PC2 .219719506 -.129560000 -.810351411 .863244439 -.803442667 -.660192989 -1.052305574 -.138236265 .233100923 -.763754703 1.322114762 .413266845 -.606159168 -1.294221106 -.048997489 .137348703 ... Raw component scores (having variances = eigenvalues) can of course be computed from standardized ones. In PCA, they are also computed directly as X*V PC1 PC2 .106842367 -.024893980 -.394047228 .165865927 -.390687734 -.126851118 -.511701577 -.026561059 .113349309 -.146749722 .642900908 .079406116 -.294755259 -.248674852 -.023825867 .026390520 ... FA (iterative principal axis extraction method) steps: Step 0.1. Compute centered variables X and covariance matrix S. Step 0.2. Decide on the number of factors M to extract. (There exist several well-known methods in help to decide, let's omit mentioning them. Most of them require that you do PCA first.) Note that you have to select M before you proceed further because, unlike in PCA, in FA loadings and factor values depend on M. Let's M=2. Step 0.3. Set initial communalities on the diagonal of S. 
Most often quantities called "images" are used as initial communalities (see https://stats.stackexchange.com/a/43224/3277). Images are diagonal elements of matrix S-D, where D is diagonal matrix with diagonal = 1 / diagonal of inv(S). (If S is correlation matrix, images are the squared multiple correlation coefficients.) With covariance matrix, image is the squared multiple correlation multiplied by the variable variance. S with images as initial communalities on the diagonal .07146025 .09921633 .01635510 .01033061 .09921633 .07946595 .01169796 .00929796 .01635510 .01169796 .00437017 .00606939 .01033061 .00929796 .00606939 .00167624 Step 1. Decompose that modified S to get eigenvalues and right eigenvectors. Use eigen decomposition, not svd. (Some last eigenvalues may be negative. This is because a reduced covariance matrix can be not positive semidefinite.) Eigenvalues L F1 .1782099114 F2 .0062074477 -.0030958623 -.0243488794 Eigenvectors V F1 F2 SLength .6875564132 .0145988554 .0466389510 .7244845480 SWidth .7122191394 .1808121121 -.0560070806 -.6759542030 PLength .1154657746 -.7640573143 .6203992617 -.1341224497 PWidth .0817173855 -.6191205651 -.7808922917 -.0148062006 Leave the first M=2 values in L and columns in V. Step 2.1. Compute loadings A. Loadings are eigenvectors normalized to respective eigenvalues: A value = V value * sqrt(L value) F1 F2 SLength .2902513607 .0011502052 SWidth .3006627098 .0142457085 PLength .0487437795 -.0601980567 PWidth .0344969255 -.0487788732 Step 2.2. Compute row sums of squared loadings. These are updated communalities. Reset the diagonal of S to them S with updated communalities on the diagonal .08424718 .09921633 .01635510 .01033061 .09921633 .09060101 .01169796 .00929796 .01635510 .01169796 .00599976 .00606939 .01033061 .00929796 .00606939 .00356942 REPEAT Steps 1-2 many times (iterations, say, 25) Extraction of factors is done. 
Let us look at the final eigenvalues of the reduced covariance matrix after iterations: Eigenvalues L F1 .2026316056 F2 .0137096989 .0005000572 -.0005882867 The eigenvalues are the factors' (F1 and F2) variances. The overall common variance is .2026316056 + .0137096989 = .2163413036, so F1, for example, explains .2026316056/.2163413036 = 93.7% of the common variance. That 93.7% of the common variance amounts to .2026316056/.3092040816 = 65.5% of the total variability (.3092040816, the total variance, is the trace of the initial, non-reduced covariance matrix). [Note. The .0005000572 + -.0005882867 do not count a common variance; these "dross" eigenvalues are nonzero due to the fact the 2-factor model does not predict the covariances without any error.] Final loadings A and communalities (row sums of squares in A). Loadings are the covariances between variables and factors. Communality is the degree to what the factors load a variable, it is the "common variance" in the variable. F1 F2 Comm SLength .3125767362 .0128306509 .0978688416 SWidth .3187577564 -.0323523347 .1026531808 PLength .0476237419 .1034495601 .0129698323 PWidth .0324478281 .0423861795 .0028494498 Sums of squares in columns of A are the factors' variances: .2026316056 and .0137096989. The main goal of factor analysis is to explain correlations or covariances by means of the loadings. A*t(A) is the restored covariances: .0978688416 .0992211576 .0162133990 .0106862785 .0992211576 .1026531808 .0118336023 .0089717050 .0162133990 .0118336023 .0129698323 .0059301186 .0106862785 .0089717050 .0059301186 .0028494498 See that off-diagonal elements above are quite close to those of the input covariance matrix: S .1242489796 .0992163265 .0163551020 .0103306122 .0992163265 .1436897959 .0116979592 .0092979592 .0163551020 .0116979592 .0301591837 .0060693878 .0103306122 .0092979592 .0060693878 .0111061224 Standardized (rescaled) loadings and communalities. St. 
loading is Loading / sqrt(Variable's variance); these loadings are computed if you analyse covariances, and are suitable for interpretation of Fs (if you analyse correlations, A are already standardized). F1 F2 Comm SLength .8867684574 .0364000747 .7876832626 SWidth .8409066701 -.0853478652 .7144082859 PLength .2742292179 .5956880078 .4300458666 PWidth .3078962532 .4022009053 .2565656710 Step 3. Compute factor scores (values of Fs). Unlike component scores in PCA, factor scores are not exact, they are reasonable approximations. Several methods of computation exist (https://stats.stackexchange.com/q/126885/3277). Here is regressional method which is the same as the one used in PCA. Regression coefficients B to compute Standardized factor scores are: B = inv(S)*A (original S is used) B F1 F2 SLength 1.597852081 -.023604439 SWidth 1.070410719 -.637149341 PLength .212220217 3.157497050 PWidth .423222047 2.646300951 Standardized factor scores = X*B These "Standardized factor scores" have variance not 1; the variance of a factor is SSregression of the factor by variables / (n-1). F1 F2 .194641800 -.365588231 -.660133976 -.042292672 -.786844270 -.480751358 -1.011226507 .216823430 .141897664 -.426942721 1.250472186 .848980006 -.669003108 -.025440982 -.050962459 .016236852 ... Factors are extracted as orthogonal. And they are. However, regressionally computed factor scores are not fully uncorrelated. Covariance matrix between computed factor scores. F1 F2 F1 .864 .026 F2 .026 .459 Factor variances are their squared loadings. You can easily recompute the above "standardized" factor scores to "raw" factor scores having those variances: raw score = st. score * sqrt(factor variance / st. scores variance). After the extraction (shown above), optional rotation may take place. Rotation is frequently done in FA. Sometimes it is done in PCA exactly the same way. 
Rotation rotates loading matrix A into some form of "simple structure" which facilitates interpretation of factors greatly (then rotated scores can be recomputed). Since rotation is not what differentiates FA from PCA mathematically and because it is a separate large topic, I won't touch it.
Let us look at the final eigenvalues of the reduced covariance matrix after iterations: Eigenvalues L F1 .2026316056 F2 .0137096989 .0005000572 -.0005882867 The eigenvalues are the factors' (F1 and F2) variances. The overall common variance is .2026316056 + .0137096989 = .2163413036, so F1, for example, explains .2026316056/.2163413036 = 93.7% of the common variance. That 93.7% of the common variance amounts to .2026316056/.3092040816 = 65.5% of the total variability (.3092040816, the total variance, is the trace of the initial, non-reduced covariance matrix). [Note. The .0005000572 and -.0005882867 do not count as common variance; these "dross" eigenvalues are nonzero due to the fact that the 2-factor model does not predict the covariances without any error.] Final loadings A and communalities (row sums of squares in A). Loadings are the covariances between variables and factors. Communality is the degree to which the factors load a variable; it is the "common variance" in the variable. F1 F2 Comm SLength .3125767362 .0128306509 .0978688416 SWidth .3187577564 -.0323523347 .1026531808 PLength .0476237419 .1034495601 .0129698323 PWidth .0324478281 .0423861795 .0028494498 Sums of squares in columns of A are the factors' variances: .2026316056 and .0137096989. The main goal of factor analysis is to explain correlations or covariances by means of the loadings. A*t(A) is the restored covariances: .0978688416 .0992211576 .0162133990 .0106862785 .0992211576 .1026531808 .0118336023 .0089717050 .0162133990 .0118336023 .0129698323 .0059301186 .0106862785 .0089717050 .0059301186 .0028494498 See that the off-diagonal elements above are quite close to those of the input covariance matrix: S .1242489796 .0992163265 .0163551020 .0103306122 .0992163265 .1436897959 .0116979592 .0092979592 .0163551020 .0116979592 .0301591837 .0060693878 .0103306122 .0092979592 .0060693878 .0111061224 Standardized (rescaled) loadings and communalities. St. 
loading is Loading / sqrt(Variable's variance); these loadings are computed if you analyse covariances, and are suitable for interpretation of Fs (if you analyse correlations, A are already standardized). F1 F2 Comm SLength .8867684574 .0364000747 .7876832626 SWidth .8409066701 -.0853478652 .7144082859 PLength .2742292179 .5956880078 .4300458666 PWidth .3078962532 .4022009053 .2565656710 Step 3. Compute factor scores (values of Fs). Unlike component scores in PCA, factor scores are not exact, they are reasonable approximations. Several methods of computation exist (https://stats.stackexchange.com/q/126885/3277). Here is regressional method which is the same as the one used in PCA. Regression coefficients B to compute Standardized factor scores are: B = inv(S)*A (original S is used) B F1 F2 SLength 1.597852081 -.023604439 SWidth 1.070410719 -.637149341 PLength .212220217 3.157497050 PWidth .423222047 2.646300951 Standardized factor scores = X*B These "Standardized factor scores" have variance not 1; the variance of a factor is SSregression of the factor by variables / (n-1). F1 F2 .194641800 -.365588231 -.660133976 -.042292672 -.786844270 -.480751358 -1.011226507 .216823430 .141897664 -.426942721 1.250472186 .848980006 -.669003108 -.025440982 -.050962459 .016236852 ... Factors are extracted as orthogonal. And they are. However, regressionally computed factor scores are not fully uncorrelated. Covariance matrix between computed factor scores. F1 F2 F1 .864 .026 F2 .026 .459 Factor variances are their squared loadings. You can easily recompute the above "standardized" factor scores to "raw" factor scores having those variances: raw score = st. score * sqrt(factor variance / st. scores variance). After the extraction (shown above), optional rotation may take place. Rotation is frequently done in FA. Sometimes it is done in PCA exactly the same way. 
Rotation rotates loading matrix A into some form of "simple structure" which facilitates interpretation of factors greatly (then rotated scores can be recomputed). Since rotation is not what differentiates FA from PCA mathematically and because it is a separate large topic, I won't touch it.
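The step-by-step recipes above can be sketched in numpy. This is a toy re-implementation on made-up correlated data (my own random numbers, not the iris values), following PCA Steps 0-2 and the FA iterated principal axis loop with "images" as initial communalities and 25 iterations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 4)) @ rng.normal(size=(4, 4))  # 4 correlated variables
X = X - X.mean(axis=0)                                  # Step 0: center
S = X.T @ X / (len(X) - 1)                              # covariance matrix
M = 2                                                   # components/factors kept

# --- PCA: eigendecompose S; loadings = eigenvectors * sqrt(eigenvalues) ---
L, V = np.linalg.eigh(S)
L, V = L[::-1], V[:, ::-1]                  # sort eigenvalues descending
A_pca = V[:, :M] * np.sqrt(L[:M])           # loadings of the first M PCs

# --- FA, iterated principal axis ---
# Step 0.3: initial communalities = "images" = diag(S) - 1/diag(inv(S))
h = np.diag(S) - 1.0 / np.diag(np.linalg.inv(S))
for _ in range(25):                         # repeat Steps 1-2
    S_red = S.copy()
    np.fill_diagonal(S_red, h)              # reduced covariance matrix
    L, V = np.linalg.eigh(S_red)            # eigen (not svd); some L may be < 0
    L, V = L[::-1], V[:, ::-1]
    A_fa = V[:, :M] * np.sqrt(np.maximum(L[:M], 0))  # loadings
    h = (A_fa ** 2).sum(axis=1)             # updated communalities
print(A_pca.shape, A_fa.shape)
```

After convergence, `A_fa @ A_fa.T` should approximate the off-diagonal of `S` — the stated goal of FA — while `A_pca` carries no such constraint.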
18,284
What are the definitions of semi-conjugate and conditional conjugate priors?
Using the definition in Bayesian Data Analysis (3rd ed), if $\mathcal{F}$ is a class of sampling distributions $p(y|\theta)$, and $\mathcal{P}$ is a class of prior distributions for $\theta$, then the class $\mathcal{P}$ is conjugate for $\mathcal{F}$ if $$p(\theta|y)\in \mathcal{P} \mbox{ for all }p(\cdot|\theta)\in \mathcal{F} \mbox{ and }p(\cdot)\in \mathcal{P}.$$ If $\mathcal{F}$ is a class of sampling distributions $p(y|\theta,\phi)$, and $\mathcal{P}$ is a class of prior distributions for $\theta$ conditional on $\phi$, then the class $\mathcal{P}$ is conditional conjugate for $\mathcal{F}$ if $$p(\theta|y,\phi)\in \mathcal{P} \mbox{ for all }p(\cdot|\theta,\phi)\in \mathcal{F} \mbox{ and }p(\cdot|\phi)\in \mathcal{P}.$$ Conditionally conjugate priors are convenient in constructing a Gibbs sampler since the full conditional will be a known family. I searched an electronic version of Bayesian Data Analysis (3rd ed.) and could not find a reference to semi-conjugate prior. I'm guessing it is synonymous with conditionally conjugate, but if you provide a reference to its use in the book, I should be able to provide a definition.
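The definition can be checked numerically for a classic case (my own toy numbers): the Beta class is conjugate for the Binomial sampling distribution, so a grid-computed posterior should coincide with Beta(a+y, b+n-y):

```python
import numpy as np
from math import comb

# Beta(a, b) prior with a Binomial(n, theta) likelihood and y successes:
# the posterior should again be in the Beta class -- Beta(a + y, b + n - y).
a, b, n, y = 2.0, 3.0, 10, 7
theta = np.linspace(1e-4, 1 - 1e-4, 9999)

def beta_shape(t, a, b):
    """Unnormalized Beta(a, b) density, normalized on the grid."""
    p = t ** (a - 1) * (1 - t) ** (b - 1)
    return p / p.sum()

prior = beta_shape(theta, a, b)
likelihood = comb(n, y) * theta ** y * (1 - theta) ** (n - y)
posterior = prior * likelihood
posterior /= posterior.sum()                     # normalize on the grid

claimed = beta_shape(theta, a + y, b + n - y)    # the conjugate-form answer
print(np.max(np.abs(posterior - claimed)))       # agreement up to float error
```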
18,285
What are the definitions of semi-conjugate and conditional conjugate priors?
I would like to use the multivariate normal as an example. Recall that the likelihood is given by $$ P(y_1,y_2,...,y_N|\mu,\Sigma) = (2\pi)^{-\frac{ND}{2}}\det(\Sigma)^{-\frac{N}{2}}\exp\left(-\frac{1}{2}\sum_{i=1}^N(y_i-\mu)^T\Sigma^{-1}(y_i-\mu)\right) $$ In order to find a prior for this likelihood, we may choose $$ P(\mu,\Sigma)=\text{Normal}(\mu;\mu_0,\Lambda_0)\text{InverseWishart}(\Sigma;\nu_0,S_0) $$ Don't worry about $\mu_0,\Lambda_0,\nu_0,S_0$ for now; they are simply parameters of the prior distribution. What is important, however, is that this is not conjugate to the likelihood. To see why, I would like to quote a reference I found online. note that $\mu$ and $\Sigma$ appear together in a non-factorized way in the likelihood; hence they will also be coupled together in the posterior The reference is "Machine Learning: A Probabilistic Perspective" by Kevin P. Murphy. Here is the link. You may find the quote in Section 4.6 (Inferring the parameters of an MVN) at the top of page 135. To continue the quote, The above prior is sometimes called semi-conjugate or conditionally conjugate, since both conditionals, $p(\mu|\Sigma)$ and $p(\Sigma|\mu)$, are individually conjugate. To create a full conjugate prior, we need to use a prior where $\mu$ and $\Sigma$ are dependent on each other. We will use a joint distribution of the form $$ p(\mu, \Sigma) = p(\Sigma)p(\mu|\Sigma) $$ The idea here is that the first prior distribution $$ P(\mu,\Sigma)=\text{Normal}(\mu;\mu_0,\Lambda_0)\text{InverseWishart}(\Sigma;\nu_0,S_0) $$ assumes that $\mu$ and $\Sigma$ are separable (or independent in a sense). Nevertheless, we observe that in the likelihood function $\mu$ and $\Sigma$ cannot be factorized out separately, which implies that they will not be separable in the posterior (recall that $\text{Posterior} \propto \text{Prior}\times\text{Likelihood}$). This shows that the "un-separable" posterior and the "separable" prior at the beginning are not conjugate. 
On the other hand, by rewriting $$ p(\mu, \Sigma) = p(\Sigma)p(\mu|\Sigma) $$ such that $\mu$ and $\Sigma$ depend on each other (through $p(\mu|\Sigma)$), you obtain a fully conjugate prior. The original independent prior, in contrast, is conjugate only conditionally, and that is what is called a semi-conjugate prior. This hopefully answers your question. p.s.: Another really helpful reference I have used is "A First Course in Bayesian Statistical Methods" by Peter D. Hoff. Here is a link to the book. You may find relevant content in Section 7 starting from page 105, and he has a very good explanation (and intuition) about the univariate normal distribution in Section 5 starting from page 67, which is reinforced again in Section 7 when he deals with the MVN.
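To see why conditional conjugacy is convenient, here is a hedged one-dimensional sketch (my own toy data and hyperparameters, not from either book): a Gibbs sampler for a normal model with an independent Normal prior on $\mu$ and Inverse-Gamma prior on $\sigma^2$ — the semi-conjugate setup — where each full conditional stays in a known family:

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(5.0, 2.0, size=200)          # toy data from N(mu=5, sd=2)
n, ybar = len(y), y.mean()

# semi-conjugate prior: mu ~ N(m0, t0sq), independently s2 ~ Inv-Gamma(a0, b0)
m0, t0sq, a0, b0 = 0.0, 100.0, 2.0, 2.0

mu, s2 = 0.0, 1.0                           # arbitrary starting values
draws = []
for _ in range(3000):
    # full conditional of mu given s2 is Normal (conjugate conditional)
    prec = 1 / t0sq + n / s2
    mean = (m0 / t0sq + n * ybar / s2) / prec
    mu = rng.normal(mean, np.sqrt(1 / prec))
    # full conditional of s2 given mu is Inverse-Gamma (conjugate conditional)
    a = a0 + n / 2
    b = b0 + 0.5 * np.sum((y - mu) ** 2)
    s2 = 1 / rng.gamma(a, 1 / b)            # draw Gamma(a, rate=b), invert
    draws.append((mu, s2))

mu_post = np.mean([d[0] for d in draws[500:]])
print(mu_post)                              # close to the sample mean
```

Because each conditional is a named distribution, each Gibbs step is a single exact draw; with a non-conjugate conditional, each step would need its own sampler (e.g. Metropolis).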
18,286
What are the definitions of semi-conjugate and conditional conjugate priors?
If $F$ is a class of sampling distributions $p(y|θ,ϕ)$, and $P$ is a class of prior distributions for $θ$, then the class $P$ is semiconjugate for $F$ if $p(θ|y,ϕ)∈P$ for all $p(⋅|θ,ϕ)∈F$ and $p(θ,ϕ)=p(θ)\times p(ϕ)$, where $p(θ)∈P$ and $p(ϕ)$ does not belong to class $P$.
18,287
Why is posterior density proportional to prior density times likelihood function?
$Pr(y)$, the marginal probability of $y$, is not "ignored." It is simply a constant. Dividing by $Pr(y)$ has the effect of rescaling the $Pr(y|\theta)Pr(\theta)$ computations so that they are measured as proper probabilities, i.e. on a $[0,1]$ interval. Without this scaling, they are still perfectly valid relative measures, but are not restricted to the $[0,1]$ interval. $Pr(y)$ is often "left out" because $Pr(y)=\int Pr(y|\theta)Pr(\theta)d\theta$ is often difficult to evaluate, and it is usually convenient enough to perform the integration indirectly via simulation.
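Both points — the unnormalized product as a valid relative measure, and division by $Pr(y)$ as a pure rescaling — can be seen on a grid (a coin-flip toy example of my own choosing):

```python
import numpy as np

# coin-flip toy example: flat prior on theta, 7 heads in 10 tosses
theta = np.linspace(0.001, 0.999, 999)
dtheta = theta[1] - theta[0]
prior = np.ones_like(theta)                 # Pr(theta), flat
likelihood = theta**7 * (1 - theta)**3      # Pr(y | theta)

unnormalized = likelihood * prior           # a valid *relative* measure
p_y = np.sum(unnormalized) * dtheta         # ~ integral of Pr(y|th)Pr(th) dth
posterior = unnormalized / p_y              # rescaled to a proper density

total = np.sum(posterior) * dtheta          # now integrates to 1
print(total)
```

Relative comparisons between any two $\theta$ values are identical before and after the rescaling; only the absolute scale changes.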
18,288
Why is posterior density proportional to prior density times likelihood function?
Notice that $$ P(\theta | y) = \frac{P(\theta, y)}{P(y)} = \frac{P(y | \theta) P(\theta)}{P(y)}. $$ Since you're interested in calculating the density of $\theta$, any function that does not depend on this parameter ― such as $P(y)$ ― can be discarded. This gives you $$ P(\theta | y) \propto P(y | \theta) P(\theta). $$ The consequence of discarding $P(y)$ is that now the density $P(\theta | y)$ has lost some properties like integration to 1 over the domain of $\theta$. This is not a big deal since one is usually not interested in integrating likelihood functions, but in maximizing them. And when you're maximizing a function, multiplying this function by some constant (remember that, in the Bayesian approach, the data $y$ is fixed), doesn't change the $\theta$ that corresponds to the maximum point. It does change the value of the maximum likelihood, but then again, one is usually interested in the relative positioning of each $\theta$.
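The invariance of the maximizing $\theta$ under multiplication by a constant can be sketched numerically (a toy normal-mean model with a flat prior, my own made-up data):

```python
import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(3.0, 1.0, size=25)           # toy data; flat prior on theta
theta = np.linspace(0.0, 6.0, 6001)

# P(y|theta) * P(theta), never divided by P(y); exp-shift is just a constant
log_unnorm = -0.5 * np.sum((y[:, None] - theta[None, :]) ** 2, axis=0)
unnorm = np.exp(log_unnorm - log_unnorm.max())
posterior = unnorm / (unnorm.sum() * (theta[1] - theta[0]))

# dividing by a constant changes the values but not the maximizing theta
map_est = theta[np.argmax(posterior)]
print(np.argmax(unnorm) == np.argmax(posterior), map_est)
```

With a flat prior the maximizer is the MLE, here the grid point nearest the sample mean.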
18,289
Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better?
$R^2$ measures the strength of the linear relationship between the independent variables and the dependent variable. It is defined as $1-\frac{SSE}{SSTO}$: one minus the sum of squared errors divided by the total sum of squares. Here $SSTO = SSE + SSR$, where $SSE$ and $SSR$ are the error and regression sums of squares. As independent variables are added, $SSR$ will continue to rise (and, since $SSTO$ is fixed, $SSE$ will go down), so $R^2$ will continually rise irrespective of how valuable the added variables are. The adjusted $R^2$, defined as $1-(1-R^2)\frac{n-1}{n-p-1}$ for $n$ observations and $p$ predictors, attempts to account for statistical shrinkage: models with tons of predictors tend to perform better in sample than when tested out of sample. It "penalizes" you for adding extra predictor variables that don't improve the existing model, which makes it helpful in model selection. Adjusted $R^2$ is never larger than $R^2$ (they coincide only when $R^2=1$), and the gap between them grows as you add variables.
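A small numpy sketch of the contrast (simulated data and ordinary least squares via `lstsq`; the adjusted formula $1-(1-R^2)\frac{n-1}{n-p-1}$ is the common textbook definition, assumed here rather than taken from the answer):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
x = rng.normal(size=(n, 1))
y = 2 * x[:, 0] + rng.normal(size=n)        # only x truly matters

def r2_and_adj(X, y):
    """Fit OLS with intercept; return (R^2, adjusted R^2)."""
    X1 = np.column_stack([np.ones(len(X)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    sse = np.sum((y - X1 @ beta) ** 2)
    ssto = np.sum((y - y.mean()) ** 2)
    r2 = 1 - sse / ssto
    p = X.shape[1]
    adj = 1 - (1 - r2) * (len(X) - 1) / (len(X) - p - 1)
    return r2, adj

r2_1, adj_1 = r2_and_adj(x, y)
noise = rng.normal(size=(n, 10))            # ten pure-noise predictors
r2_11, adj_11 = r2_and_adj(np.column_stack([x, noise]), y)
print(r2_1, adj_1)      # with the one real predictor
print(r2_11, adj_11)    # R^2 can only go up; adjusted R^2 is penalized
```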
18,290
Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better?
R^2 gives the proportion of the variation in your dependent variable (Y) explained by your independent variables (X) in a linear regression model, while adjusted R^2 gives that same proportion adjusted for the number of independent variables (X) in the model, which is what makes it the better summary when there is more than one.
18,291
Why is adjusted R-squared less than R-squared if adjusted R-squared predicts the model better?
R-squared increases even when you add variables that are not related to the dependent variable, but adjusted R-squared takes care of that: it decreases whenever you add variables that are not related to the dependent variable. After this adjustment it is therefore likely to be lower than R-squared.
18,292
Algebra of LDA. Fisher discrimination power of a variable and Linear Discriminant Analysis
Here is a short tale about Linear Discriminant Analysis (LDA) as a reply to the question. When we have one variable and $k$ groups (classes) to discriminate by it, this is ANOVA. The discrimination power of the variable is $SS_\text{between groups} / SS_\text{within groups}$, or $B/W$. When we have $p$ variables, this is MANOVA. If the variables are uncorrelated, both in the total sample and within groups, then the above discrimination power, $B/W$, is computed analogously and could be written as $trace(\bf{S_b})$$/trace(\bf{S_w})$, where $\bf{S_w}$ is the pooled within-group scatter matrix (i.e. the sum of $k$ p x p SSCP matrices of the variables, centered about the respective groups' centroid); $\bf{S_b}$ is the between-group scatter matrix $=\bf{S_t}-\bf{S_w}$, where $\bf{S_t}$ is the scatter matrix for the whole data (SSCP matrix of the variables centered about the grand centroid). (A "scatter matrix" is just a covariance matrix without division by sample_size-1.) When there is some correlation between the variables - and usually there is - the above $B/W$ is expressed by $\bf{S_w^{-1} S_b}$, which is not a scalar anymore but a matrix. This is simply because there are $p$ discriminative variables hidden behind this "overall" discrimination and partly sharing it. Now, we may want to submerge in MANOVA and decompose $\bf{S_w^{-1} S_b}$ into new and mutually orthogonal latent variables (their number is $min(p,k-1)$) called discriminant functions or discriminants - the 1st being the strongest discriminator, the 2nd being next behind, etc. Just like we do it in Principal component analysis. We replace the original correlated variables by uncorrelated discriminants without loss of discriminative power. Because each next discriminant is weaker and weaker, we may accept a small subset of the first $m$ discriminants without great loss of discriminative power (again, similar to how we use PCA). 
This is the essence of LDA as a dimensionality reduction technique (LDA is also a Bayes' classification technique, but this is an entirely separate topic). LDA thus resembles PCA: PCA decomposes "correlatedness", LDA decomposes "separatedness". In LDA, because the above matrix expressing "separatedness" isn't symmetric, a by-pass algebraic trick is used to find its eigenvalues and eigenvectors$^1$. The eigenvalue of each discriminant function (a latent variable) is its discriminative power, the $B/W$ I spoke about in the first paragraph. Also, it is worth mentioning that discriminants, albeit uncorrelated, are not geometrically orthogonal as axes drawn in the original variable space. Some potentially related topics that you might want to read: LDA is MANOVA "deepened" into analysing latent structure and is a particular case of Canonical correlation analysis (exact equivalence between them as such). How LDA classifies objects and what are Fisher's coefficients. (I link only to my own answers currently, as I remember them, but there are many good answers from other people on this site as well). $^1$ LDA extraction phase computations are as follows. Eigenvalues ($\bf L$) of $\bf{S_w^{-1} S_b}$ are the same as those of the symmetric matrix $\bf{(U^{-1})' S_b U^{-1}}$, where $\bf U$ is the Cholesky root of $\bf{S_w}$: an upper-triangular matrix whereby $\bf{U'U=S_w}$. As for the eigenvectors of $\bf{S_w^{-1} S_b}$, they are given by $\bf{V=U^{-1} E}$, where $\bf E$ are the eigenvectors of the above matrix $\bf{(U^{-1})' S_b U^{-1}}$. (Note: $\bf U$, being triangular, can be inverted - using a low-level language - faster than by a standard generic "inv" function of packages.) The described workaround eigendecomposition of $\bf{S_w^{-1} S_b}$ is implemented in some programs (in SPSS, for example), while other programs implement a "quasi zca-whitening" method which, being just a little slower, gives the same results and is described elsewhere. 
To summarize it here: obtain the ZCA-whitening matrix for $\bf{S_w}$ - the symmetric sq. root $\bf S_w^{-1/2}$ (which is obtained through eigendecomposition); then the eigendecomposition of $\bf S_w^{-1/2} S_b S_w^{-1/2}$ (which is a symmetric matrix) yields the discriminant eigenvalues $\bf L$ and eigenvectors $\bf A$, whereby the discriminant eigenvectors $\bf V= S_w^{-1/2} A$. The "quasi ZCA-whitening" method can be rewritten to work via a singular value decomposition of the casewise dataset instead of working with the $\bf S_w$ and $\bf S_b$ scatter matrices; that adds computational precision (which is important in near-singularity situations), but sacrifices speed. OK, let's turn to the statistics usually computed in LDA. The canonical correlations corresponding to the eigenvalues are $\bf \Gamma = \sqrt{L/(L+1)}$. Whereas the eigenvalue of a discriminant is the $B/W$ of the ANOVA of that discriminant, the canonical correlation squared is the $B/T$ (T = total sum-of-squares) of that ANOVA. If you normalize (to SS=1) the columns of the eigenvectors $\bf V$, then these values can be seen as the direction cosines of the rotation of the axes-variables into the axes-discriminants; so with their help one can plot the discriminants as axes on the scatterplot defined by the original variables (the eigenvectors, as axes in that variables' space, are not orthogonal). The unstandardized discriminant coefficients or weights are simply the scaled eigenvectors $\bf {C}= \it \sqrt{N-k} ~\bf V$. These are the coefficients of linear prediction of the discriminants by the centered original variables. The values of the discriminant functions themselves (discriminant scores) are $\bf XC$, where $\bf X$ is the matrix of centered original variables (the input multivariate data with each column centered). The discriminants are uncorrelated, and when computed by the formula just above they also have the property that their pooled within-class covariance matrix is the identity matrix. 
Optional constant terms accompanying the unstandardized coefficients and allowing one to un-center the discriminants if the input variables had nonzero means are $\bf {C_0} \it = -\sum^p diag(\bar{X}) \bf C$, where $diag(\bar{X})$ is the diagonal matrix of the p variables' means and $\sum^p$ is the sum across the variables. In the standardized discriminant coefficients, the contribution of the variables into a discriminant is adjusted for the fact that the variables have different variances and might be measured in different units: $\bf {K} \it = \sqrt{diag \bf (S_w)} \bf V$ (where $diag \bf (S_w)$ is the diagonal matrix with the diagonal of $\bf S_w$). Despite being "standardized", these coefficients may occasionally exceed 1 (so don't be confused). If the input variables were z-standardized within each class separately, the standardized coefficients equal the unstandardized ones. The coefficients may be used to interpret the discriminants. Pooled within-group correlations ("structure matrix", sometimes called loadings) between the variables and the discriminants are given by $\bf R= {\it \sqrt{diag \bf (S_w)}} ^{-1} \bf S_w V$. Correlations are insensitive to collinearity problems and constitute an alternative (to the coefficients) guide in assessing the variables' contributions and in interpreting the discriminants. See the complete output of the extraction phase of the discriminant analysis of iris data here. Read this nice later answer which explains the same things as I did here, a bit more formally and in more detail. This question deals with the issue of standardizing the data before doing LDA.
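As a numerical sanity check (not part of the original answer), here is a Python/NumPy sketch on hypothetical toy data. It runs the extraction both ways - the Cholesky trick from footnote 1 and the symmetric-root ("quasi ZCA-whitening") route - confirms that they yield the same eigenvalues, and verifies that the discriminant scores $\bf XC$ have an identity pooled within-class covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
k, n_per, p = 3, 40, 4            # hypothetical toy data: 3 classes, 4 variables
N = k * n_per
X = np.vstack([rng.normal(mu, 1.0, size=(n_per, p)) for mu in (0.0, 0.7, 1.5)])
y = np.repeat(np.arange(k), n_per)

# scatter (SSCP) matrices -- covariance matrices without division by N-1
Xc = X - X.mean(axis=0)
St = Xc.T @ Xc
Sw = sum((X[y == g] - X[y == g].mean(axis=0)).T
         @ (X[y == g] - X[y == g].mean(axis=0)) for g in range(k))
Sb = St - Sw

# route 1: Cholesky trick, U'U = Sw; eigendecompose (U^-1)' Sb U^-1
U = np.linalg.cholesky(Sw).T                      # upper triangular
Ui = np.linalg.inv(U)
L1, E = np.linalg.eigh(Ui.T @ Sb @ Ui)
order = np.argsort(L1)[::-1]                      # strongest discriminant first
L1, V = L1[order], (Ui @ E)[:, order]

# route 2: symmetric square root Sw^(-1/2) (ZCA whitening)
w, Q = np.linalg.eigh(Sw)
Swmh = Q @ np.diag(w ** -0.5) @ Q.T
L2 = np.linalg.eigh(Swmh @ Sb @ Swmh)[0][::-1]

# unstandardized coefficients C and discriminant scores XC
C = np.sqrt(N - k) * V
S = Xc @ C
Sw_scores = sum((S[y == g] - S[y == g].mean(axis=0)).T
                @ (S[y == g] - S[y == g].mean(axis=0)) for g in range(k))

print(np.allclose(L1, L2))                          # True: both routes agree
print(np.allclose(Sw_scores / (N - k), np.eye(p)))  # True: pooled within-class cov = I
print(L1[:min(p, k - 1)])                           # discriminative powers B/W
```

Only the first $min(p,k-1)=2$ eigenvalues are substantially nonzero here, matching the count of discriminants stated above.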
Algebra of LDA. Fisher discrimination power of a variable and Linear Discriminant Analysis
18,293
Can I use Kolmogorov-Smirnov test and estimate distribution parameters?
The better approach is to compute your critical value or p-value by simulation. The problem is that when you estimate the parameters from the data rather than using hypothesized values, the distribution of the KS statistic does not follow the null distribution. You can instead ignore the p-values from the KS test and simulate a bunch of datasets from the candidate distribution (with a meaningful set of parameters) of the same size as your real data. Then for each set estimate the parameters and do the KS test using the estimated parameters. Your p-value will be the proportion of test statistics from the simulated sets that are more extreme than for your original data.

Added Example

Here is an example using R (hopefully readable/understandable for people who use other programs). A simple example using the Normal distribution as the null hypothesis:

tmpfun <- function(x, m=0, s=1, sim=TRUE) {
  if(sim) {
    tmp.x <- rnorm(length(x), m, s)
  } else {
    tmp.x <- x
  }
  obs.mean <- mean(tmp.x)
  obs.sd <- sd(tmp.x)
  ks.test(tmp.x, 'pnorm', mean=obs.mean, sd=obs.sd)$statistic
}

set.seed(20200319)
x <- rnorm(25, 100, 5)
out <- replicate(1000, tmpfun(x))
hist(out)
abline(v=tmpfun(x, sim=FALSE))
mean(out >= tmpfun(x, sim=FALSE))

The function will either compute the KS test statistic from the actual data (sim=FALSE) or simulate a new dataset of the same size from a normal distribution with the specified mean and sd. Then in either case it will compute the test statistic comparing to a normal distribution with the same mean and sd as the sample (original or simulated). The code then runs 1,000 simulations (feel free to change and rerun) to get/approximate the distribution of the test statistic under the NULL (but with estimated parameters), then finally compares the test statistic for the original data to this NULL distribution. 
We can simulate the whole process (simulations within simulations) to see how it compares to the default p-values:

tmpfun2 <- function(B=1000) {
  x <- rnorm(25, 100, 5)
  out <- replicate(B, tmpfun(x))
  p1 <- mean(out >= tmpfun(x, sim=FALSE))
  p2 <- ks.test(x, 'pnorm', mean=mean(x), sd=sd(x))$p.value
  return(c(p1=p1, p2=p2))
}

out <- replicate(1000, tmpfun2())
par(mfrow=c(2,1))
hist(out[1,])
hist(out[2,])

For my simulation, the histogram of the simulation-based p-values is fairly uniform (which it should be, since the NULL is true), but the p-values from the ks.test function are bunched up much more against 1.0. You can change anything in the simulations to estimate power by having the original data come from a different distribution, or using a different Null distribution, etc. The normal is probably the simplest since the mean and variance are independent; more tuning may be needed for other distributions.
18,294
Can I use Kolmogorov-Smirnov test and estimate distribution parameters?
Sample splitting might perhaps reduce the problem with the distribution of the statistic, but it doesn't remove it. Your idea avoids the issue that the estimates will be 'too close' relative to the population values because they're based on the same sample. You aren't avoiding the problem that they're still estimates. The distribution of the test statistic is not the tabulated one. In this case it increases the rejection rate under the null*, instead of dramatically reducing it. A better choice is to use a test where the parameters aren't assumed known, such as the Shapiro-Wilk. If you're wedded to a Kolmogorov-Smirnov type of test, you can take the approach of Lilliefors' test. That is, to use the KS statistic but have the distribution of the test statistic reflect the effect of parameter estimation - simulate the distribution of the test statistic under parameter estimation. (It's no longer distribution-free, so you need new tables for each distribution.) http://en.wikipedia.org/wiki/Lilliefors_test Lilliefors used simulation for the normal and the exponential case, but you can easily do it for any specific distribution; in something like R it's a matter of moments to simulate 10,000 or 100,000 samples and get a distribution of the test statistic under the null. [An alternative might be to consider the Anderson-Darling, which has the same issue, but which - judging from the book by D'Agostino and Stephens (Goodness-of-Fit Techniques) - seems to be less sensitive to it. You could adapt the Lilliefors idea, but they suggest a relatively simple adjustment that seems to work fairly well.] But there are other approaches still; there are families of smooth tests of goodness of fit, for example (e.g. see the book by Rayner and Best), that in a number of specific cases can deal with parameter estimation. * the effect can still be pretty large - perhaps bigger than would normally be regarded as acceptable; Momo is right to express concern about it. 
If a higher type I error rate (and a flatter power curve) is a problem, then this may not be an improvement!
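The Lilliefors-style simulation described above takes only a few lines. Here is an illustrative Python/SciPy sketch (not from the original answer; the R code in the other answer follows the same logic): it builds the null distribution of the KS statistic with parameters re-estimated from each simulated sample, and compares the resulting p-value with the tabulated one that wrongly treats the parameters as known:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 50

def ks_stat(x):
    # KS statistic against a normal whose mean/sd are estimated from x itself
    return stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).statistic

x = rng.normal(10, 2, size=n)            # data actually drawn from the null

# Lilliefors-style null: under estimation the statistic is location/scale
# invariant, so simulating standard normal samples of the same size suffices
null = np.array([ks_stat(rng.normal(size=n)) for _ in range(2000)])
p_sim = (null >= ks_stat(x)).mean()

# tabulated p-value, which treats the estimated parameters as known
p_tab = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1))).pvalue
print(p_sim, p_tab)
```

With the null true, the simulated p-value behaves properly, while the tabulated one tends to be much larger - the conservativeness that makes the naive test nearly powerless.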
18,295
Can I use Kolmogorov-Smirnov test and estimate distribution parameters?
I'm afraid that wouldn't solve the problem. I believe the problem is not that the parameters are estimated from the same sample, but that they are estimated from any sample at all. The derivation of the usual null distribution of the KS test does not account for any estimation error in the parameters of the reference distribution, but rather sees them as given. See also Durbin 1973, who discusses these issues at length and offers solutions.
18,296
Kullback-Leibler divergence: negative values? [duplicate]
KL-divergence is the sum of $q(i)\log\frac{q(i)}{p(i)}$ across all values of $i$. You've only got one instance ($i$) in your equation, and a single term of the sum can well be negative - it is only the full sum that cannot. For example, if your model was binomial (only two possible words occurred in your document) and $Pr(word1)$ was 0.005 in document 1 and 0.01 in document 2, then you would have: \begin{equation} KL = 0.005*\log\frac{0.005}{0.01} + 0.995*\log\frac{0.995}{0.99} = 0.001547 \geq 0. \end{equation} This sum (or integral in the case of continuous random variables) will always be non-negative, by the Gibbs inequality (see http://en.wikipedia.org/wiki/Gibbs%27_inequality).
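To make the point concrete, here is a small illustrative Python sketch (not from the original answer): the word-1 term alone is negative, but the full sum over both outcomes is positive:

```python
import numpy as np

def kl_div(q, p):
    # full KL divergence: the sum runs over ALL outcomes i
    q, p = np.asarray(q, dtype=float), np.asarray(p, dtype=float)
    return float(np.sum(q * np.log(q / p)))

q = [0.005, 0.995]   # document 1: Pr(word1), Pr(word2)
p = [0.010, 0.990]   # document 2

single_term = q[0] * np.log(q[0] / p[0])
print(round(single_term, 6))    # -0.003466: an individual term CAN be negative
print(round(kl_div(q, p), 6))   # 0.001547: the full sum cannot (Gibbs' inequality)
```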
18,297
Interpreting the regression output from a mixed model when interactions between categorical variables are included
Using the given regression table, we can compute the table of expected values of the dependent variable, DV, for each combination of the two factors, which might make this more clear (note I've used the ordinary estimates, not the MCMC estimates): $$ \begin{array}{c|cc} \phantom{} & {\rm GroupA} & {\rm GroupB} \\ \hline {\rm Condition1} & 6.1372 & 6.0758 \\ {\rm Condition2} & 6.2522 & 6.0853 \\ {\rm Condition3} & 6.2372 & 6.1149 \\ \end{array} $$ I'll answer your question by responding to your interpretations, referencing this table.

"No overall differences between the groups (hence groupB having a p of >.05)"

The $p$-value you're referring to restricts its focus to the reference level of the variable Condition, so it's only testing the difference between the groups when Condition=1 (the first row of the table), i.e. it's only testing whether $6.1372$ is significantly different from $6.0758$. It's not testing whether there is an overall difference between the groups. To do that test, you'd have to leave Condition out of the model entirely and test the significance of Group.

"Overall differences between condition 1 and condition 2, and between condition 1 and condition 3."

Similarly to the first interpretation, this is only comparing Condition2 and Condition3 to the reference level (Condition1) when Group=A. That is, this is only testing whether the second and third entries in the first column are significantly different from $6.1372$. To test for overall differences in the condition variable, you'd need to leave Group out of the model and test Condition alone.

"Differences between groupA, condition 1 versus groupB, condition 2 and also between groupA, condition 1 versus group B, condition 3."

The interaction terms test whether the effect of one variable depends on the level of the other variable. For example, significance of the groupB:condition2 term tells you that the difference between Condition1 and Condition2 is different when Group=A vs. Group=B. 
Referencing the table, this means that $$6.2522-6.1372=.115$$ is significantly different from $$6.0853-6.0758=.0095$$ In this particular case it looks like Condition2 is different from Condition1 in GroupA but much less so in GroupB, and that's how I'd interpret this. It appears a similar dynamic is occurring, to a lesser extent, with regard to Condition3.
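To see how the cell-mean table connects to the regression output, here is a small Python sketch. The coefficient values are back-computed from the (rounded) cell means above, so they are illustrative rather than the model's actual output; with treatment coding, each cell mean is just the sum of the applicable terms:

```python
# coefficients back-computed from the cell-mean table (treatment coding,
# reference cell Group A / Condition 1) -- illustrative values only
b = {
    "(Intercept)": 6.1372,
    "groupB": -0.0614,
    "condition2": 0.1150,
    "condition3": 0.1000,
    "groupB:condition2": -0.1055,
    "groupB:condition3": -0.0609,
}

def cell_mean(group, cond):
    m = b["(Intercept)"]
    if group == "B":
        m += b["groupB"]
    if cond != 1:
        m += b[f"condition{cond}"]
        if group == "B":                 # interaction applies only off-reference
            m += b[f"groupB:condition{cond}"]
    return round(m, 4)

for cond in (1, 2, 3):
    print(cond, cell_mean("A", cond), cell_mean("B", cond))
# 1 6.1372 6.0758
# 2 6.2522 6.0853
# 3 6.2372 6.1149
```

The interaction coefficients are exactly the difference-of-differences discussed above: groupB:condition2 = 0.0095 − 0.1150 = −0.1055.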
18,298
What are the standard statistical tests to see if data follows exponential or normal distributions?
It seems that you're trying to decide whether to model your data using the normal or the exponential distribution. This seems somewhat strange to me, as these distributions are very different from each other. The normal distribution is symmetric whereas the exponential distribution is heavily skewed to the right, with no negative values. Typically a sample from the exponential distribution will contain many observations relatively close to $0$ and a few observations that deviate far to the right from $0$. This difference is often easy to see graphically. Here is an example where I've simulated $n=100$ observations from a normal distribution with mean $2$ and variance $4$ and an exponential distribution with mean $2$ and variance $4$: The symmetry of the normal distribution and the skewness of the exponential can be seen using histograms, boxplots and scatterplots, as illustrated in the figure above. Another very useful tool is a Q-Q-plot. In the example below, the points should approximately follow the line if the sample comes from a normal distribution. As you can see, this is the case for the normal data, but not for the exponential data. If graphical examination for some reason isn't enough for you, you can still use a test to determine whether your distribution is normal or exponential. Since the normal distribution is a scale and location family, you'll want to use a test that is invariant under changes in scale and location (i.e. the result of the test should not change if you change your measurements from inches to centimetres or add $+1$ to all your observations). When the null hypothesis is that the distribution is normal and the alternative hypothesis is that it is exponential, the most powerful location and scale invariant test is given by the statistic $$T_{E,N}=\frac{\bar{x}-x_{(1)}}{s}$$ where $\bar{x}$ is the sample mean, $x_{(1)}$ is the smallest observation in the sample and $s$ is the sample standard deviation. 
Normality is rejected in favour of exponentiality if $T_{E,N}$ is too small: under the exponential alternative the smallest observation lies close to the mean relative to $s$, so the statistic concentrates near $1$, well below its typical values under normality. This test is actually a one-sided version of Grubbs' test for outliers. You'll find this implemented in most statistical software (but make sure that you use the right version - there are several alternative test statistics used for the outlier test!). Reference for $T_{E,N}$ being the most powerful test: Section 4.2.4 of Testing for Normality by H.C. Thode.
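As a minimal numeric sketch (numpy only, with made-up simulated data): because the statistic is location and scale invariant, a critical value under the normal null can be obtained by Monte Carlo from standard normal samples. Exponential-looking data pull $T_{E,N}$ down toward $1$, since the sample minimum then sits within roughly one standard deviation of the mean, so the rejection region is the lower tail:

```python
import numpy as np

def t_en(x):
    """T_EN = (mean - min) / sd: location and scale invariant."""
    x = np.asarray(x, dtype=float)
    return (x.mean() - x.min()) / x.std(ddof=1)

rng = np.random.default_rng(0)
normal_sample = rng.normal(loc=2, scale=2, size=100)   # mean 2, variance 4
expon_sample = rng.exponential(scale=2, size=100)      # mean 2, variance 4

# Monte Carlo null distribution: invariance lets us simulate from the
# standard normal. For exponential data the minimum is close to the mean,
# so T_EN is small and we reject normality in the lower tail.
null_sims = [t_en(rng.standard_normal(100)) for _ in range(5000)]
crit_5pct = np.quantile(null_sims, 0.05)

reject_normal_for_expon = t_en(expon_sample) < crit_5pct
reject_normal_for_normal = t_en(normal_sample) < crit_5pct
```

For the exponential sample $T_{E,N}$ lands near $1$, far below the 5% critical value (around $2$ for $n=100$), while the normal sample typically stays above it.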
What are the standard statistical tests to see if data follows exponential or normal distributions?
For the exponential distribution, you can use a test called Moran's or Bartlett's test. The test statistic $B_n$ involves the sample mean $\overline{Y}$ as well as the sample mean $\overline{\log Y}$ of the logged $Y_i$: $$ B_n = b_n \times \left\{\log \overline{Y} - \overline{\log Y} \right\}, \qquad b_n = 2n \times \left\{1+ (n+1)/(6n) \right\}^{-1}. $$ Under the null hypothesis of exponentiality we have approximately $B_n \sim \chi^2(n-1)$, so a two-sided test works. This test is designed against gamma alternatives. See K.C. Kapur and L.R. Lamberson, Reliability in Engineering Design, Wiley, 1977.
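A sketch of how this could be computed (using numpy/scipy on simulated data; the two-sided p-value comes from the chi-square approximation described above):

```python
import numpy as np
from scipy import stats

def bartlett_exponentiality(y):
    """Moran/Bartlett test of the exponential null, designed against gamma
    alternatives. Returns (B_n, two-sided p-value) using the approximation
    B_n ~ chi2(n - 1) under H0."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    b_n = 2 * n / (1 + (n + 1) / (6 * n))
    B = b_n * (np.log(y.mean()) - np.mean(np.log(y)))
    cdf = stats.chi2.cdf(B, df=n - 1)
    return B, 2 * min(cdf, 1 - cdf)

rng = np.random.default_rng(42)
B_exp, p_exp = bartlett_exponentiality(rng.exponential(scale=2, size=100))
B_gam, p_gam = bartlett_exponentiality(rng.gamma(shape=3, scale=1, size=100))
```

For the exponential sample $B_n$ should land near its null expectation of $n-1$ and the test should not reject; for the gamma sample (shape $3$) the statistic falls far into the lower tail and the test rejects.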
What are the standard statistical tests to see if data follows exponential or normal distributions?
Have you considered graphical methods to see how the data behave? Probability plotting techniques usually involve ranking the data, applying the inverse CDF and then plotting the results on the Cartesian plane. This allows you to see whether several values deviate from the hypothesized distribution and possibly account for the reason for the deviation.
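For instance, `scipy.stats.probplot` performs exactly this ranking and inverse-CDF step, and also returns the correlation coefficient $r$ of the least-squares line through the plot, which quantifies how straight it is (a sketch on simulated exponential data):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
sample = rng.exponential(scale=2, size=100)

# probplot ranks the data, applies the inverse CDF of the hypothesized
# distribution and returns the plotting positions plus the least-squares
# fit (slope, intercept, r). Pass plot=matplotlib.pyplot to draw it.
(osm_n, osr_n), (slope_n, icpt_n, r_norm) = stats.probplot(sample, dist="norm")
(osm_e, osr_e), (slope_e, icpt_e, r_expon) = stats.probplot(sample, dist="expon")
```

Since the sample here is exponential, the exponential probability plot is noticeably straighter, i.e. `r_expon` is closer to $1$ than `r_norm`.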