Linear regression with strongly non-normal response variable
The distribution of the response is irrelevant. Inference based on small samples requires the errors to be approximately normal (better to look at the QQ-plot of the residuals than at their density, because the tails are what matter). If you are only interested in descriptive results, or if the sample size is not too small, you therefore do not need to worry about normality. Much more important are the other assumptions of linear regression: correct model structure, no large outliers in the predictors and, if you are interested in inference, homoscedastic and uncorrelated errors.
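A quick simulation makes the point concrete (a Python/NumPy sketch, not part of the original answer; the bimodal predictor and all numbers are invented): the marginal response can be strongly non-normal even though the errors, and hence the residuals, are exactly normal, and OLS recovers the slope without trouble.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# A bimodal predictor makes the marginal response strongly non-normal,
# even though the errors are exactly normal
x = np.concatenate([rng.normal(-3, 0.5, 500), rng.normal(3, 0.5, 500)])
y = 2.0 + 1.5 * x + rng.normal(0, 1, 1000)

slope, intercept = np.polyfit(x, y, 1)
resid = y - (intercept + slope * x)

# Excess kurtosis: strongly negative for the bimodal response,
# near zero for the normal-looking residuals; slope is close to 1.5
print(stats.kurtosis(y), stats.kurtosis(resid), slope)
```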
Linear regression with strongly non-normal response variable
Your distribution is not beta if your density plot is to be taken at face value: a beta distribution cannot have two modes within (0, 1). However, no density plot for a bounded variable (at a guess, here from some kernel density estimation procedure) can be taken at face value unless the estimation includes adjustments for boundary artefacts, which is not typical. But, as it were, we see what you mean.

To focus on the major issues: a regression is first and foremost a model for the mean of a variable as it varies with the predictors. Even if an assumption of normal errors is made, that is not an assumption about the marginal distribution of the response, and it is the least important assumption being made. So it is not surprising that your regression behaves fairly well, as far as can be inferred from the distribution of residuals, if the functional form catches the way that the conditional means behave. The assertion of normality would be more convincing if you showed us a normal probability plot. That distribution looks to me to have higher kurtosis than a normal, although that is likely to be a minor issue.

You need to check that your model is predicting values within [0, 1]. Some of your residuals are about 0.7 in magnitude, so it seems possible that some of the predictions are qualitatively wrong. At the same time, you should be able to do better with a regression that respects the bounded nature of the response. You could try beta regression or a generalised linear model with binomial family and logit link. The latter sounds wrong but often works well in practice. For a concise introductory review, see http://www.stata-journal.com/sjpdf.html?articlenum=st0147. Beta regression is supported in R and Stata (and likely in other software), and generalised linear models are widely supported, although watch for routines that reject non-binary responses if a logit link is requested.
Note: The exact form of your density plot for the response is a side issue, so I will make this an added note. The density for a variable bounded by 0 and 1 must average 1 over that interval, and your graph has a useful reference line at density 1. Visually comparing the bump above 1 on the left with the area to its right underlines that some of the density has been smoothed by the procedure beyond the support and discarded. That is, the graph truncates the display: the smoothed distribution has positive density below 0 or above 1, which is not shown. There are known ways to smooth a bounded variable more respectfully, in this case including (a) smoothing the logit of the variable and back-transforming the density (a little problematic if observed values include 0 or 1), or (b) reflecting the density inwards at the extremes. Naturally, there is scope for disagreement about whether this is trivial or secondary on the one hand, or incorrect on the other. (I'd rather see a quantile plot of the data, but I'll not expand on that.)
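Option (b), boundary reflection, can be sketched in a few lines (a Python/SciPy illustration, not from the original answer; the beta-distributed sample is invented for the demonstration): mirror the kernel estimate at both boundaries so the mass a naive estimate leaks outside [0, 1] is folded back in.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.beta(1, 3, 2000)          # a bounded variable with mass piled near 0

grid = np.linspace(0, 1, 1001)
kde = stats.gaussian_kde(x)

# Naive estimate: the kernel spills mass below 0, so the density
# integrates to noticeably less than 1 on [0, 1]
naive = kde(grid)

# Reflection: fold the spilled mass back in at both boundaries
reflected = kde(grid) + kde(-grid) + kde(2 - grid)

# Riemann sums over the unit grid approximate the integrals:
# the naive one falls short of 1, the reflected one is close to 1
print(naive.mean(), reflected.mean())
```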
Linear regression with strongly non-normal response variable
Strictly speaking, the normality-of-residuals assumption is not needed for OLS to work; it becomes an issue mainly in hypothesis testing. Since your residuals actually seem to be normally distributed, you are fine even in that area. Additionally, OLS does not assume anything about the marginal distribution of the variables, so you do not have to worry about that.
Linear regression with strongly non-normal response variable
Although the other answers have already addressed the question, I would like to add another powerful option that avoids most of the problems related to distribution assumptions: quantile regression. Depending on the research interests, this method can be extremely powerful.

As someone has already said, if you are merely interested in estimating the marginal mean (or any quantile) of your outcome, then you don't need to worry about any assumption at all, as both quantile and ordinary regression methods estimate it perfectly well. If you are interested in inference, ordinary regression has a couple of problems with its distribution assumptions, whereas quantile regression doesn't, because it is distribution free. It's true that you can use mean regression with robust estimators, but personally I prefer quantile regression, which is moreover even more informative, because you can estimate the whole conditional distribution of the outcome instead of just one of its summary measures, the mean.

If you are interested in both prediction and inference, then the quantile's equivariance property is quite handy. For example, suppose you are working with probabilities or rates (or any other bounded outcome). With quantile regression you can transform the outcome Y so that its transformation is not bounded (for example, using a logit or probit function), model logit(Y), and use the same model for predictions and inference. With ordinary methods it's not so easy, because of Jensen's inequality: E(g(Y)) is generally not equal to g(E(Y)). Therefore, you either use two models (one for prediction, one for the association) or you must use other methods (beta regression, logit-normal regression) that have problems with, respectively, parameter interpretation and distribution assumptions.

Finally, there can always be problems related to the linearity assumption or to non-independent data. For the former, we can add splines (which, though, complicate the interpretation of parameters). For the latter, mixed-effects regression models can help (if we have hierarchical or longitudinal data).
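The equivariance argument can be checked numerically (a Python/NumPy sketch, not part of the original answer; the beta-distributed outcome is invented): the median commutes with the logit transform, while the mean does not.

```python
import numpy as np

rng = np.random.default_rng(2)
y = rng.beta(2, 5, 1001)        # bounded outcome in (0, 1); odd n, so the
                                # sample median is an actual observation

def logit(p):
    return np.log(p / (1 - p))

def inv_logit(z):
    return 1 / (1 + np.exp(-z))

# Quantiles are equivariant under monotone transforms: modelling
# logit(Y) and back-transforming recovers the median of Y exactly
print(inv_logit(np.median(logit(y))) - np.median(y))   # essentially 0

# The mean is not (Jensen's inequality): back-transforming the mean
# of logit(Y) does NOT give the mean of Y
print(inv_logit(np.mean(logit(y))) - np.mean(y))       # clearly nonzero
```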
Mean has lower standard error than 5% trimmed mean?
If the underlying population is normally distributed without contamination, then the sample mean is the best unbiased estimator (in the sense of lowest mean squared error) of the centre of the population distribution. This is not always the case for other distributions, including contaminated ones. So your observation depends on the particular distribution and contamination.
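A small simulation makes this concrete (a Python/SciPy sketch, not from the original answer; the sample sizes and the heavy-tailed t(2) alternative are arbitrary choices): under a clean normal the plain mean has the smaller standard error, while under a heavy-tailed distribution the trimmed mean wins.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
nsim, n = 5000, 50

normal = rng.normal(size=(nsim, n))        # the clean, uncontaminated case
heavy = rng.standard_t(2, size=(nsim, n))  # a heavy-tailed alternative

def se(samples, cut):
    """Empirical standard error of the (trimmed) mean over the simulations."""
    est = samples.mean(axis=1) if cut == 0 else stats.trim_mean(samples, cut, axis=1)
    return est.std()

print(se(normal, 0), se(normal, 0.20))  # the plain mean wins under normality
print(se(heavy, 0), se(heavy, 0.20))    # trimming wins under heavy tails
```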
Mean has lower standard error than 5% trimmed mean?
This does seem surprising at first sight, but here is a guess at what is happening. Focus on what a bootstrap sample is, namely a sample with replacement. So, every now and again some of these samples will include repetitions of the outliers or wild values. Those samples will be trimmed, but in some cases the trimming will not be enough to exclude all the repeated wild values. As the degree of trimming increases, this pathology is less likely to be seen.

To spell it out, let's imagine a sample of the 20 values 1, 2, ..., 19, 2000. Trimming 5% is always enough to deal with the outlier in the original data. But trimming 5% won't be enough to deal with bootstrap samples containing 2000, 2000 or 2000, 2000, 2000, and so on. There will be plenty of samples with no occurrences of 2000, but they (evidently) don't balance the others.

Bootstrapping is of course not white magic that works regardless. With enigmatic output you need to look beyond printed summaries and see what the entire distribution of results from all your bootstrap samples looks like. My guess is that you have a tail of really wild results at 5% trimming and this is widening the standard errors. In fact you will have tails of really wild results at all trimming proportions, but less marked as the trimming proportion increases. Otherwise put, part of the problem is that the standard error is inevitably influenced by all values, here all the trimmed means. I'd look at percentile-based confidence intervals too.
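The arithmetic can be checked directly (a Python/SciPy sketch, not part of the original answer): with 20 observations, 5% trimming removes exactly one value from each tail, which handles one 2000 but not two.

```python
import numpy as np
from scipy import stats

# The sample from the answer: 1, 2, ..., 19 plus one wild value
data = np.array(list(range(1, 20)) + [2000])

# 5% of 20 observations = 1 value trimmed from each tail,
# enough to remove the single outlier
print(stats.trim_mean(data, 0.05))   # 10.5

# A bootstrap resample can easily contain 2000 twice;
# trimming one value per tail now leaves one 2000 in
boot = np.array(list(range(1, 19)) + [2000, 2000])
print(stats.trim_mean(boot, 0.05))   # about 120.6: the trimmed mean explodes
```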
Mean has lower standard error than 5% trimmed mean?
The efficiency of the trimmed mean depends on the shape of the distribution. If the underlying distribution is very asymmetric -- say, exponential -- then trimming will bias your mean in the negative direction. Or, if the distribution is a mixture of two distributions with different means, trimming could remove more of one, again biasing the estimate. For instance, if 90% of your data are $N(0,1)$ and the remainder are $N(1,10)$, then trimming will remove most of the latter points, giving you an estimate closer to $0$ than to the true value $0.1$. So it's reasonable that the mean should sometimes do better than the trimmed mean, even outside the standard normal case.

What seems more surprising is that the accuracy is not monotonic in the amount of trimming -- you list 20%, 10%, 0%, 5% from most to least accurate. This might happen if, say, you had a mixture again, this time 85% $N(0,1)$ and 15% $N(0,20)$: trimming the 5% tails would greatly reduce the effective sample size of your $N(0,20)$ points, leading to a high standard error, but trimming enough more would remove them entirely; since the two components have the same mean, you get a better estimate.
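This mixture scenario is easy to simulate (a Python/SciPy sketch, not from the original answer; the simulation sizes are arbitrary): with 15% contamination by $N(0,20)$, light 5% trimming leaves many wide points in, while 20% trimming removes essentially all of them.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
nsim, n = 4000, 100

# 85% N(0,1) contaminated by 15% N(0,20): same mean, very different spread
contaminated = rng.random((nsim, n)) < 0.15
s = np.where(contaminated,
             rng.normal(scale=20, size=(nsim, n)),
             rng.normal(size=(nsim, n)))

# Empirical standard error of the estimator at each trimming proportion
sd = {cut: (s.mean(axis=1) if cut == 0
            else stats.trim_mean(s, cut, axis=1)).std()
      for cut in (0.0, 0.05, 0.20)}
print(sd)   # the standard error shrinks as trimming removes the contamination
```

Note that this particular setup gives monotone improvement with trimming; the non-monotonic ordering in the question would depend on finer details of the contamination and sample size.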
Excel, Heatmap & Data Visualization without add-ins
The short answer is no, there is no easy way to create most of the graphics you mention. But in any graphics environment where you can draw line segments (such as the pen plotter drivers from the 60's, 70's, and 80's), you can construct workable visualizations. So one method is to focus on joined scatterplots (which is the principal mechanism for creating line segments in Excel). Writing macros can help, if that's allowed.

I haven't gone far in this direction, but some years ago I created spreadsheets with side-by-side box-and-whisker plots, showing this approach is feasible. This graphic, summarizing individual batting averages in baseball teams, was created by copying and arranging summaries of the team batting averages as needed to allow them to be plotted as scatterplots. For this to happen, you need to work out the $(x,y)$ coordinates of the endpoints of each line segment you want to appear in the plot, arrange those in pairs of columns, and add them as new series to the graphic. Here, to illustrate, is a portion of the worksheet that drives this graphic. (Original data are shown in blue; everything else is calculated.) For instance, the left side of the "Red Sox" boxplot (at the far right) is given by the coordinates in columns U:V, the right side in W:X, the middle bar (showing the median) in M3:M5 and O3:O5, etc. In all, this graphic displays $98$ series of data: seven series per boxplot.

As I recall (this is from a few years ago), some manual editing was required to format the names of the outlying players, but otherwise the boxplots were produced automatically using a (very crude) macro. This macro copied the summary data (seen in columns I:L) into the requisite columns. Another macro systematically set the graphics styles for the series, and so on.
Little expertise in VBA is needed to write such macros: you just "record" what you're doing in order to create one basic element of your graphic and then edit the resulting macro to make its specific cell references into relative cell references. I don't recommend any of this and anticipate never doing it again, but I can attest that the process of creating statistical graphics in such a primitive environment is educational.
Excel, Heatmap & Data Visualization without add-ins
As the way I've found to plot heatmaps in graphs is very simple and can be used to plot almost everything, I think it will interest some people (at least the curious ones, and those who have time to lose). My idea is to create a "screen" pixel by pixel:

1. Create a table with the same size as what you want to plot, filled with ones.
2. Plot it as stacked columns.
3. Reduce the gap between columns to 0 (right-click on the plot area, Format, Gap Width to 0).
4. Use a macro to change the colour of each "pixel":

Sub Macro6()
    Application.ScreenUpdating = False
    ActiveSheet.ChartObjects("Your table").Activate
    For i = 1 To row
        For j = 1 To col
            '' Positive values shown in red, negative values in blue;
            '' colours are based on the cells of a table starting at row0, col0
            red = Int(WorksheetFunction.Max(ActiveSheet.Cells(j + col0, row0 + i), 0) * 255)
            blue = Int(-WorksheetFunction.Min(ActiveSheet.Cells(j + col0, row0 + i), 0) * 255)
            ActiveChart.SeriesCollection(i).Points(j).Interior.Color = _
                RGB(255 - blue, 255 - red - blue, 255 - red)
        Next
    Next
    Application.ScreenUpdating = True
End Sub

(Here row, col, row0 and col0 must be set to the dimensions and top-left position of your table, and the values are assumed to lie in [-1, 1].) And obtain: Now, do it again with bigger data (33 * 2681): TADAAA! Try to enjoy, and save it before the crash. It's useful to plot an image and works well with little data (no problems with 33*33), but it will slow down your computer with too much data. As you can see, I didn't have the patience to work on the details with such a slow computer. If someone can test the method with a good computer, it would be interesting to know whether it can be used on real-life data. And now, dendrograms!
Excel, Heatmap & Data Visualization without add-ins
Jon Peltier has a really great site with a lot of guides on how to bend Excel to make some more exotic charts, for example box plots and heat maps. You don't need his plug-in to create a lot of them. Excel can't do a lot of things, and sometimes the things you can make it do are hard to maintain and update, but, as in your case, you sometimes just don't have a choice.
38,111
Excel, Heatmap & Data Visualization without add-ins
There are a few tools out there to create treemaps in Excel. Try Treemap or Sparklines
38,112
Are misses in my data distributed completely at random?
As @Dirk Eddelbuettel already mentioned, your question is not very clear. In fact, I think you are asking two questions. The first question is related to your M(C)AR assumption. The second question is about (an) appropriate R package(s). (1) "Testing" for MAR To test if age has an effect on the missingness of your score variable, you could run a simple logistic regression model with age as a predictor variable. Your response variable is 0: score is not missing, 1: score is missing (see also @mbq's answer and @Macro's comment). Given the assumption that younger children are more likely to not report math scores, we expect to see a significant negative effect of age. ## Make up some data set.seed(2) ## Younger children are more likely to not report math scores, ## so I use a Poisson distribution to model that behaviour missData <- rpois(10000, 10) dfr <- data.frame(score=rnorm(100), age=sample(6:15, 100, replace=TRUE)) dfr <- dfr[order(dfr$age), ] dfr$agemiss <- sort(sample(missData, 100, replace=TRUE)) dfr$miss <- ifelse(dfr$agemiss == dfr$age, 1, 0) ## Run the logistic regression with age as predictor > summary(glm(miss ~ age, data=dfr, family=binomial)) [...] Coefficients: Estimate Std. Error z value Pr(>|z|) (Intercept) 5.9729 1.4946 3.996 6.43e-05 *** age -0.7997 0.1760 -4.544 5.53e-06 *** --- [...] (2) (Some) Missing data related R packages Some of these packages also have functions to explore patterns of missingness (e.g., missing.pattern.plot() in the mi package). Amelia II: A Program for Missing Data Hmisc: Harrell Miscellaneous mi: Missing Data Imputation and Model Checking mitools: Tools for multiple imputation of missing data
38,113
Are misses in my data distributed completely at random?
As far as I understand your question, you want to investigate if missing values in your data appear due to some pattern. In this case, you don't need any "missing value analysis" -- this is the same problem as checking whether the score is bigger than 0.7 or whatever. Just convert your dataset into two-class factor (missing, not-missing) and look for correlations.
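A minimal sketch of that idea in Python. The score/age records below are made up, and a plain correlation between the 0/1 missingness indicator and age stands in for whatever association measure you prefer (a logistic regression, as in the other answer, would be the more formal route):

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation; with a 0/1 variable this is
    the point-biserial correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical records: (age, score or None when missing).
records = [(6, None), (7, None), (8, 0.4), (9, None), (10, 0.9),
           (11, 1.1), (12, 0.7), (13, 1.3), (14, 0.8), (15, 1.0)]
age = [a for a, _ in records]
miss = [1 if s is None else 0 for _, s in records]  # the two-class factor

r = pearson_r(age, miss)
print(round(r, 2))  # negative: younger children are missing more often here
```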
38,114
Are misses in my data distributed completely at random?
Your question is a little difficult to decipher. One approach for dealing with missing data is imputation -- and there is a substantial literature on this and an already large and growing set of packages at CRAN so you may want to start there.
38,115
What does Φ mean? [closed]
The uppercase Greek letter Φ is used to describe the CDF (cumulative distribution function) of the standard normal distribution. The lowercase Greek letter φ is used to describe the PDF (probability density function) of the standard normal distribution. And as noted in the other answer as well (I copy it to make this answer more complete): Φ⁻¹ is the inverse of the CDF, which is also known as the quantile function of the standard normal.
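For readers who want to compute these, Φ can be written in terms of the error function, Φ(x) = (1 + erf(x/√2))/2, and Φ⁻¹ recovered numerically since Φ is monotone. The bisection inverse below is only an illustration, not how statistical packages actually implement the quantile function:

```python
from math import erf, sqrt

def Phi(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def Phi_inv(p, lo=-10.0, hi=10.0):
    """Quantile function by bisection (Phi is monotone increasing)."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if Phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

print(round(Phi(0.0), 4))        # 0.5 by symmetry
print(round(Phi_inv(0.975), 2))  # the familiar 1.96
```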
38,116
What does Φ mean? [closed]
As the previous answer noticed, $\Phi$ is just a short notation to denote the CDF of a random variable which has the standard normal distribution. But as I see, in the METAL documentation you need $\Phi^{-1}$ (the normal quantile function), so these links to Wiki can help you. https://en.wikipedia.org/wiki/Error_function#Related_functions https://en.wikipedia.org/wiki/Probit
38,117
When to use Mean(X/Y) versus Mean(X)/Mean(Y)?
If $x_i =$ number of items consumed on active days (for person $i$) and $y_i=$ number of active days (for person $i$), then... $\text{Mean}(x/y)$ is the average number of items consumed per person on a day in which that person is active. This gives equal weight to each person's data. $\frac{\text{Mean}(x)}{\text{Mean}(y)} = \frac{\text{Sum}(x)}{\text{Sum}(y)}$ is the total number of items consumed, divided by the total number of active person-days (1 person-day = 1 person active for one day), or the average number of items consumed per active person-day. This gives more weight to more active people, since they contribute more active person-days. Which one you want depends on your goals, but I would imagine you're almost certainly more interested in people than in person-days.
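A small numeric sketch (the x and y values are invented) showing the two summaries, and that Sum(x)/Sum(y) is exactly the person-day-weighted mean of the per-person ratios:

```python
# x_i = items consumed, y_i = active days, one entry per person (made up).
x = [10, 3, 40]
y = [5, 2, 20]

ratios = [xi / yi for xi, yi in zip(x, y)]
per_person = sum(ratios) / len(ratios)  # Mean(x/y): equal weight per person
per_day = sum(x) / sum(y)               # Mean(x)/Mean(y) = Sum(x)/Sum(y)

# Sum(x)/Sum(y) equals the mean of the ratios weighted by active days:
weighted = sum(yi * r for yi, r in zip(y, ratios)) / sum(y)

print(per_person, per_day, weighted)  # per_day and weighted coincide
```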
38,118
When to use Mean(X/Y) versus Mean(X)/Mean(Y)?
I first consider $mean(x/y)$ versus $mean(x)/mean(y)$. As Eoin and dariober suggest, the more practically relevant quantity should be used, which is often the former, although the latter is not that outlandish. To illustrate both, let's modify the OP's example and imagine opening a bakery with bakers A, B and C. A use case for $mean(x/y)$. On the initial (test) day, A, B and C worked $X =(3, 1, 7)$ hrs and made $Y=(1, 3, 6)$ cakes, respectively; later on, each baker will work the same hours per month. The hourly productivity of worker $i$ is $z_i=y_i/x_i$, and the expected productivity $\sum z_i/n \approx 1.4$ cakes per hour per worker. If, say, A worked for 6 hrs and made 2 cakes on the test day, this expectation won't change. A use case for $mean(x)/mean(y)$. Suppose a single baker can work at a time, and the bakers do $X =(3, 1, 7)$ hrs per day. Then the bakery's expected hourly output, $\sum y_i/\sum x_i \approx 0.91$, is meaningful. Since $\sum y_i/\sum x_i= \sum w_i z_i$, where $w_1 = x_1/\sum x_i$ is the "timeshare" of worker 1, this is like the previous case, but weighted. Similar weighting could be based for example on customer expenditure. Using $mean(x)/mean(y)$ when X and Y are not (fully) matched. Suppose we pay $X =(3, 1, 7)$ pounds per hour to $(A, B, C)$ and want to assess competing wages. This assessment is easy if the bakers moonlight for $Y=(1, 3, 6)$ pounds per hour at a competitor's. It is less so if only A and C do; then we have unequal samples or "missing" data. The sizes are equal but samples are still unmatched if $Y=(1, 3, 6)$ are randomly drawn IT wages. (Appropriate matching could be possible, but not always.) The second ("robustness") aspect, implied by dariober and several commenters, is whether means are suitable for ratios in the first place. As suggested by @Nick Cox under the question, the geometric mean(s) might be better; see also. Alternatives include removal of outliers; the use of trimmed means; the use of $median(x)/median(y)$ or $median(x/y)$.
Also, bootstrap estimation could be appropriate for ratios.
38,119
When to use Mean(X/Y) versus Mean(X)/Mean(Y)?
In my opinion mean(X/Y) is more meaningful because your experimental unit (not sure this is the correct term) is the individual, not the aggregate. Let's try to see this with a contrived example. Consider this data set: individual cons act ratio A 5 2 2.5 B 5 2 2.5 C 5 2 2.5 D 5 2 2.5 E 5 2 2.5 F 5 2 2.5 G 5 2 2.5 H 5 2 2.5 I 5 2 2.5 J 5 2 2.5 K 1000 2000 0.5 I would say that in this dataset the average ratio is just below 2.5 (~2.3 in fact) because you observe 10 individuals with ratio 2.5 and only one individual with ratio 0.5. However, if you calculate mean(cons) / mean(act) you get an overall average ratio of ~0.52 because individual K dominates the dataset by having much higher values, but individual K is just one out of 11 individuals. On the other hand, you may want to give more weight to K if you think its values are more reliable than the other individuals'.
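The two summaries for this table can be checked directly; a quick Python sketch:

```python
# The contrived dataset: ten individuals at 5/2 and one at 1000/2000.
cons = [5] * 10 + [1000]
act = [2] * 10 + [2000]

mean_ratio = sum(c / a for c, a in zip(cons, act)) / len(cons)
ratio_of_means = sum(cons) / sum(act)  # same as mean(cons) / mean(act)

print(round(mean_ratio, 2), round(ratio_of_means, 2))  # ~2.3 versus ~0.52
```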
38,120
Multiply, add, or condition on probability?
Let's follow up on GlenB's advice and make those Venn diagrams. We do this below with the heterosexual stereotype colours, representing mother sick with red/pink and dad sick with blue. With the two variables mother and father you can create 4 different disjoint situations. father sick and not mom sick not father sick and mom sick father sick and mom sick not father sick and not mom sick It is with those 4 situations that you can perform additive computations. Intuitively you want to figure out how much the two situations mom sick and father sick overlap (those two need not be disjoint). Your formula, where the answer is P(at least 1 catches it) = P(F) + P(M) - P(F AND M) and you solve for P(F AND M), stems from the following algebra. You can compare it to a situation with 4 unknowns (the areas/probabilities of the 4 disjoint pieces) and you try to figure out the values by means of 4 equations. You know mom sick 0.09 = P(mom sick & not dad sick) + P(mom sick & dad sick) dad sick 0.10 = P(mom sick & dad sick) + P(not mom sick & dad sick) one or more sick 0.15 = P(mom sick & not dad sick) + P(not mom sick & dad sick) + P(mom sick & dad sick) total probability must be one 1.00 = P(mom sick & not dad sick) + P(not mom sick & dad sick) + P(mom sick & dad sick) + P(not mom sick & not dad sick) One final figure to explain the product and sum rule: When events are disjoint then you can use summation $$P(A \text{ or } B) = P(A) + P(B)$$ Note that 'father sick' and 'mom sick' do not need to be disjoint events. You still get a sum of those events in your solution, but that is due to the algebra where we combine multiple equations. When events are independent then you can use the product $$P(A \text{ and } B) = P(A) \cdot P(B)$$ Independence means that the ratios of the areas/probabilities are unaffected by the other variable. In the image you see the ratios of 'mom sick' for different states of 'dad sick'; whether or not dad is sick, the ratio remains the same.
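The algebra above boils down to a few subtractions; a quick sketch with the question's numbers (0.09, 0.10, 0.15):

```python
# Given: P(mom sick) = 0.09, P(dad sick) = 0.10, P(at least one sick) = 0.15.
p_mom, p_dad, p_either = 0.09, 0.10, 0.15

p_both = p_mom + p_dad - p_either   # P(mom sick & dad sick)
p_mom_only = p_mom - p_both         # P(mom sick & not dad sick)
p_dad_only = p_dad - p_both         # P(not mom sick & dad sick)
p_neither = 1.0 - p_either          # P(not mom sick & not dad sick)

print(round(p_both, 2), round(p_neither, 2))  # 0.04 0.85
```

The four disjoint pieces (0.05, 0.06, 0.04, 0.85) sum to one, as the last equation requires.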
38,121
Multiply, add, or condition on probability?
My first question is that: I find it particularly difficult to differentiate between addition or multiplication rule when it comes to probabilities from independent events. That's not a question (you don't ask anything), but the answer to what I assume is your implied question is simple: there isn't an addition rule for independent events. The "addition rule" $P(A \text{ or } B) = P(A)+P(B)$ is for mutually exclusive events. Draw a Venn diagram, from which it's obvious why there's another term there for non-mutually exclusive events (representing the overlap which gets counted twice, once in A and once in B, whereupon you must then subtract one of the overlaps back off again). My third question, even when I calculate P(at least 1 catch it) = 1-P(both not catching it) = 1-P(NOT F)*P(NOT M), P(at least 1 catch it) does not equal to .15 given in the question. What's wrong with my calculation? Note that the multiplication rule requires independence. Did you make sure the events whose probability you multiplied were independent? Rules for union ("OR") and intersection ("AND") are: (i) $P(A \text{ or } B) = P(A) + P(B) - P(A \text{ and } B)$ (ii) $P(A \text{ and } B) = P(A)\times P(B|A)$ $\:$ (General product rule) If you have mutually exclusive events, the third term on the RHS in (i) is $0$, whence "addition rule for mutually exclusive events". If you have independent events, the second term on the RHS in (ii) is equal to $P(B)$, whence "multiplication rule for independent events".
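A sketch with the question's numbers (0.09, 0.10, 0.15, as quoted in the other answer) showing why the multiplication attempt fails: the overlap implied by the data is far from what independence would give, so these events are not independent:

```python
# P(F) = dad catches it, P(M) = mom catches it, P(F or M) = at least one.
p_f, p_m, p_or = 0.10, 0.09, 0.15

# General addition rule gives the actual overlap:
p_and = p_f + p_m - p_or            # 0.04

# Independence would instead require P(F and M) = P(F) * P(M):
p_and_if_indep = p_f * p_m          # 0.009, far from 0.04

# Hence 1 - P(not F) * P(not M) does not reproduce the given 0.15:
wrong = 1 - (1 - p_f) * (1 - p_m)
print(round(p_and, 3), round(p_and_if_indep, 3), round(wrong, 3))
```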
38,122
Can I remove sample outliers using standard deviation?
The use of a standard deviation-based threshold for "outlier" detection is generally not a good idea. I think there may be some confusion as to what "normal data" means, as this refers to the statistical distribution of the data, not a qualitative description of the population from which they derive. Your post suggests to me that there may have been a confusion that being from a disease-free population meant the data were "normal" when, in fact, the normality discussed in the post you linked is that of the standard normal distribution in statistics. Perhaps the link was not the correct one, as I also did not see anywhere there where the recommendation was made to omit cases as outliers when they occur beyond some number of standard deviations from the mean. This criterion doesn't make sense for outlier detection because we expect there to be values of certain extremes as a function of the normal distribution. In other words, by the definition of the normal distribution, we expect ~5% of the sample data to fall outside of 1.96 standard deviations from the mean. This does not make them outliers, it just makes them rarer "extremes" in the distribution. This is before considering the issue raised by @whuber wherein the presence of outliers will increase the standard deviation anyway. Now, to the issue of your noted model performance change when omitting the "outliers." The general gist of linear regression models is to predict some kind of a conditional mean (with some caveats with respect to simplification, obviously). When extreme cases are omitted, we are left with cases whose central tendencies are all relatively alike, with reduced variation. You mention that the MSE improves when omitting those cases beyond certain standard deviations, which is almost guaranteed because you are selectively omitting cases that will have large deviations from the mean.
Thinking about the equation for MSE, residuals that are very large get squared (to make them positive) and thus get even larger, and these very large residuals are more likely in cases where the raw data are far from the mean to begin with. The MSE thus is a biased indicator of model performance (in this case), and I'd recommend looking at things like predictive distribution plots to see whether the model actually makes realistic predictions of the data rather than just how large residuals are on average. To the question of outliers, you may consider thinking about identifying influential cases on the model and formal outlier detection methods. There are many univariate and multivariate outlier tests, but the overall identification of outliers is sometimes questionable, as it may be better to think about outliers as arising from unique data generation processes rather than as providing irrelevant information about the model. When outliers represent clearly incorrect data (e.g., data entry error, experimenter issue, out-of-range value), then it is more justifiable to remove those observations. It sounds like you may be concerned specifically with outliers caused by differences in your sites. If that's the case, then you may transition to multilevel models where each site is a grouping variable that can have random intercepts and slopes. This gets back to, ultimately, choosing a model that reflects your beliefs about what is causing the data you've observed.
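A small simulation sketch (simulated standard normal data, not the poster's) illustrating both points: roughly 5% of normal draws sit beyond 1.96 standard deviations without being outliers in any meaningful sense, and trimming beyond a standard-deviation threshold mechanically shrinks the mean squared deviation:

```python
import random

random.seed(0)
n = 200_000
sample = [random.gauss(0.0, 1.0) for _ in range(n)]

mean = sum(sample) / n
sd = (sum((v - mean) ** 2 for v in sample) / n) ** 0.5

# Expected "extremes": about 5% of draws lie beyond 1.96 SD from the mean.
frac_extreme = sum(abs(v - mean) > 1.96 * sd for v in sample) / n
print(round(frac_extreme, 3))  # close to 0.05

# Dropping everything beyond 2 SD necessarily shrinks the mean squared
# deviation, since exactly the largest squared deviations were removed.
kept = [v for v in sample if abs(v - mean) <= 2 * sd]
mse_full = sum((v - mean) ** 2 for v in sample) / n
mse_trim = sum((v - mean) ** 2 for v in kept) / len(kept)
print(mse_trim < mse_full)  # prints True
```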
Can I remove sample outliers using standard deviation?
The use of a standard deviation-based threshold for "outlier" detection is generally not a good idea. I think there may be some confusion as to what "normal data" mean as this refers to the statistica
Can I remove sample outliers using standard deviation? The use of a standard deviation-based threshold for "outlier" detection is generally not a good idea. I think there may be some confusion as to what "normal data" mean as this refers to the statistical distribution of the data, not a qualitative description of the population from which they derive. Your post suggests to me that there may have been a confusion that being from a disease-free population meant the data were "normal" when, in fact, the normality discussed in the post you linked is that of the standard normal distribution in statistics. Perhaps the link was not the correct one as I also did not see anywhere there where the recommendation was made to omit cases as outliers when they occur beyond some number of standard deviations from the mean. This criterion doesn't make sense for outlier detection because we expect there to be values of certain extremes as a function of the normal distribution. In other words, by the definition of the normal distribution, we expect ~5% of the sample data to fall outside of 1.96 standard deviations from the mean. This does not make them outliers, they just make them rarer "extremes" in the distribution. This is before considering the issue raised by @whuber wherein the presence of outliers will increase the standard deviation anyway. Now, to the issue of your noted model performance change when omitting the "outliers." The general gist of linear regression models is to predict some kind of a conditional mean (with some caveats with respect to simplification obviously). When extreme cases are omitted, then we are left with cases whose central tendencies are all relatively alike with reduced variation. You mention that the MSE improves when omitting those cases beyond certain standard deviations, which is an almost guaranteed because you are selectively omitting cases that will have large deviations from the mean. 
Thinking about the equation for MSE, residuals that are very large get squared (to make them positive) and thus get even larger, and these very large residuals are more likely in cases where the raw data are far from the mean to begin with. The MSE thus is a biased indicator of model performance (in this case), and I'd recommend looking at things like predictive distribution plots to see whether the model actually makes realistic predictions of the data, rather than just how large residuals are on average. To the question of outliers, you may consider identifying influential cases on the model and using formal outlier detection methods. There are many univariate and multivariate outlier tests, but the overall identification of outliers is sometimes questionable, as it may be better to think about outliers as arising from unique data generation processes rather than as providing irrelevant information about the model. When outliers represent clearly incorrect data (e.g., data entry error, experimenter issue, out-of-range value), then it is more justifiable to remove those observations. It sounds like you may be concerned specifically with outliers caused by differences in your sites. If that's the case, then you may transition to multilevel models where each site is a grouping variable that can have random intercepts and slopes. This gets back to, ultimately, choosing a model that reflects your beliefs about what is causing the data you've observed.
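Both points above (about 5% of a clean normal sample falls beyond 1.96 standard deviations, and trimming those cases mechanically shrinks an MSE-style metric) can be checked with a quick simulation; the data here are purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)

# By construction of the normal distribution, about 5% of a clean sample
# lies beyond 1.96 standard deviations from the mean -- these are rare
# "extremes", not outliers.
frac_flagged = np.mean(np.abs(x - x.mean()) > 1.96 * x.std())
print(f"flagged as 'outliers': {frac_flagged:.3f}")

# Dropping them mechanically shrinks an MSE-style metric, because the
# cases with the largest deviations from the mean are removed by design.
kept = x[np.abs(x - x.mean()) <= 1.96 * x.std()]
mse_full = np.mean((x - x.mean()) ** 2)
mse_trimmed = np.mean((kept - kept.mean()) ** 2)
print(mse_full > mse_trimmed)  # True
```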
38,123
Can I remove sample outliers using standard deviation?
The answer from @Billy (+1) gets to the critical points of the question you posed. These are just a few further thoughts on your modeling strategy that are too extensive to fit into comments. First, from what you describe it's not clear what you will gain with elastic net. With 6000 cases and what seems to be an outcome that takes on continuous values, you have a lot of flexibility in fitting your model without the variable omission and coefficient penalization involved in elastic net. By usual rules of thumb for biomedical studies, you could evaluate 300 or more predictors in a regression model without much risk of overfitting the model (a case/predictor ratio of 20). If you have thousands of predictors, like with RNA sequencing (RNAseq) data, elastic net might make sense--depending on how you want to apply your model in the future. Second, it's not clear what you mean precisely by a "non-linear model" in this context. Some models that appear to be non-linear, like fitting outcomes to polynomial functions of predictors, are still "linear models" insofar as the models are linear in the regression coefficients. Sometimes you need a truly non-linear model, but linear modeling can cover a remarkably wide range of applications. You can use regression splines to model predictors flexibly, do non-linear transformations of variables before linear regression (like the log transform often used for RNAseq data), or use generalized linear models to have a nonlinear mapping between a linear-model predictor function and outcome. Those are all still considered linear models in an important technical sense. Consider whether you really need a non-linear model for your application. If you can perform your "non-linear" modeling in the context of generalized linear models and you do need to use elastic net, standard tools allow you to do that together instead of separately. 
Third, remember that extreme values aren't necessarily "outliers" if the values of the associated predictor variables are also appropriately extreme. What is of concern is when differences between the observed and the model-predicted values (the residuals) are large or vary systematically. You certainly don't want to be removing extreme values as "outliers" at an early stage of analysis unless you know the values to have some technical error. Fourth, do be sure to include your sites as predictors in the model. Even if the biochemical assays were all performed at the same central location, it's possible for differences among sites in sample handling, patient characteristics, etc. to be important in a way that requires some form of statistical control. The search function on this site can lead you to much information about these issues. If you don't find an answer that helps with future questions, ask further focused questions. See this help page for ways to write questions that can help both you and other visitors to the site.
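To illustrate the second point, that models which look non-linear can still be linear models, here is a minimal sketch on simulated data: a cubic polynomial fit is solved by ordinary least squares, because the model is linear in its coefficients:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(-2, 2, size=200)
y = 1.0 + 2.0 * x - 0.5 * x**3 + rng.normal(scale=0.1, size=200)

# A cubic fit looks "non-linear", but the model is linear in its
# coefficients: y = b0 + b1*x + b2*x^2 + b3*x^3. Ordinary least
# squares on the polynomial design matrix is all that is needed.
X = np.column_stack([np.ones_like(x), x, x**2, x**3])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.round(beta, 2))  # approximately [1., 2., 0., -0.5]
```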
38,124
Is there any classification algorithm that doesn't give probability?
TLDR: probabilities are not required to build a ROC curve, only a numerical scale supporting the decision. "I'm studying the ROC Curve, and I was wondering if there is any classification algorithm that doesn't return the output class as a result of a certain threshold from the probabilities of the algo?" I previously let this question slip because I focused on the actual problem. Many algorithms do not output probabilities at all (it's one of their main selling points, actually). SVMs and K-NNs, for example. Below I'll explain why this is not a problem for building a ROC curve. "Because if there is one, how could you have a ROC Curve if you can't use thresholds to draw it, as it gives the output class as a certainty?" If your algorithm does not give you any other numerical scale of support for the decision, then your ROC curve has only one point. It's not a wrong ROC, per se, but its usefulness is dubious. So I'd say that if you don't have this scale (continuous or not), then you can't draw a ROC curve. Luckily, most algorithms do have this scale. In SVMs it's the distance to the margin, in logistic regression it's the output probability, in decision trees it's the leaf probability, in K-NNs it's the neighborhood voting proportions, etc.
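As a sketch of the TLDR (scikit-learn assumed available; the data are synthetic): a ROC curve built entirely from SVM margin distances, with no probabilities involved:

```python
from sklearn.datasets import make_classification
from sklearn.metrics import roc_auc_score, roc_curve
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)

# With probability=False (the default), SVC outputs no probabilities,
# but decision_function provides a numerical score (signed distance to
# the margin) -- which is all that thresholding for a ROC curve needs.
clf = SVC(kernel="linear").fit(X, y)
scores = clf.decision_function(X)

fpr, tpr, thresholds = roc_curve(y, scores)
auc = roc_auc_score(y, scores)
print(f"AUC from margin distances: {auc:.3f}")
```

(Setting `probability=True` would bolt probabilities onto the SVM via Platt scaling, but as the ROC computation shows, the margin score alone suffices.)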
38,125
Is there any classification algorithm that doesn't give probability?
Support Vector Machines and $k$-Nearest Neighbors come to mind. (See here for a motivation for short answers. Longer answers are always welcome.)
38,126
Independence of $X+Y$ and $X-Y$
They're not: If $X+Y=12$ then both rolls were sixes, so $X-Y=0$. So you have: $$1 = \mathbb{P}(X-Y =0|X+Y=12) \neq \mathbb{P}(X-Y =0) = \frac{1}{6}.$$
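Both probabilities can be verified by brute-force enumeration of the 36 equally likely outcomes of two fair dice:

```python
from fractions import Fraction
from itertools import product

# All 36 equally likely outcomes of two fair dice.
outcomes = list(product(range(1, 7), repeat=2))

# Unconditional probability that X - Y = 0.
p_diff0 = Fraction(sum(1 for x, y in outcomes if x - y == 0), len(outcomes))

# Conditional probability that X - Y = 0 given X + Y = 12:
# the only outcome with sum 12 is (6, 6).
cond = [(x, y) for x, y in outcomes if x + y == 12]
p_diff0_given_12 = Fraction(sum(1 for x, y in cond if x - y == 0), len(cond))

print(p_diff0)           # 1/6
print(p_diff0_given_12)  # 1
```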
38,127
Independence of $X+Y$ and $X-Y$
When checking whether random variables are independent, the first thing to check is the range of the random variables. If the range of one random variable varies according to the values of the other, then they are not independent, and you can stop. Otherwise, you need to check further. In this problem, when X+Y = 12, X-Y can take only one value, 0; if X+Y = 11, X-Y can be -1 or 1. So they are not independent. This trick is very useful in probability and statistics tests/exams, because it can save you a lot of time.
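The range check itself can be confirmed by enumeration:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))

# The set of attainable values of X - Y changes with X + Y: when the
# range of one variable depends on the other's value, the variables
# cannot be independent, and no further calculation is needed.
diffs_given_12 = {x - y for x, y in outcomes if x + y == 12}
diffs_given_11 = {x - y for x, y in outcomes if x + y == 11}
print(diffs_given_12)  # {0}
print(diffs_given_11)  # the set {-1, 1}
```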
38,128
How to handle predictors that are highly correlated to the response
It sounds like what you have is a powerfully predictive variable, and there is no reason to remove it. What you have to watch out for in situations like this is what is called leakage. Leakage is when you have a predictor that is just some version of your response in disguise. For example, suppose that you have a system at your company that, when fraud is detected, first switches the account into "investigation" status, and then, when the investigation is complete, cancels it due to fraud. The "investigation status" will look like a very powerful variable, but it is caused by the response (fraud). If you went to implement your model, attempting to detect fraud, then the "investigation status" variable would be useless: if an account is in investigation status, you already know it's fraudulent. You can see why this is called leakage: the response has "leaked into" the predictors. So, think carefully about whether this could be the case with your account status, but I suspect not. In that case, you just have a really good predictive variable. "Most people trying to commit fraud have chosen checks, and a third of people who have chosen checks are frauds, so my model tends to classify as fraud any observation with check as the payment type." You shouldn't evaluate your model by classifying records as fraud or non-fraud. Instead, you should get your model to assign probabilities of fraud to each evaluation record, and work directly with those probabilities. In that setting, your issue here goes away, as you will simply observe that using a check gives a high probability of fraud, which does not mean that all check users are fraudulent.
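A minimal sketch of working with probabilities rather than hard labels (the data and the check/fraud rates are invented for illustration; scikit-learn assumed):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
# Hypothetical payment-type flag: 1 = paid by check, 0 = otherwise.
check = rng.integers(0, 2, size=n)
# Fraud is far more likely for check users (~1/3) than for others (~1%).
fraud = rng.random(n) < np.where(check == 1, 0.33, 0.01)

clf = LogisticRegression().fit(check.reshape(-1, 1), fraud)

# Working with probabilities rather than hard labels: paying by check
# raises the fraud probability without branding every check user a fraud.
p_check, p_other = clf.predict_proba(np.array([[1], [0]]))[:, 1]
print(f"P(fraud | check) = {p_check:.2f}, P(fraud | no check) = {p_other:.2f}")
```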
38,129
How to handle predictors that are highly correlated to the response
In addition to @Matthew Drury's answer, you could train different models for different transaction methods (cheques, non-cheques). This way the features of people using cheques would be highlighted and the column will also remain in the data. See if the tool you are using allows grouping when fitting models. This could save additional work.
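One way to sketch the per-group idea with pandas and scikit-learn (column names and data are invented for illustration):

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "method": rng.choice(["check", "card"], size=1000),
    "amount": rng.exponential(scale=100.0, size=1000),
})
# Fraud is far more common among check payments in this toy data.
df["fraud"] = rng.random(len(df)) < np.where(df["method"] == "check", 0.3, 0.02)

# One model per transaction method: each model learns how "amount"
# relates to fraud within its own group.
models = {
    method: LogisticRegression().fit(grp[["amount"]], grp["fraud"])
    for method, grp in df.groupby("method")
}
print(sorted(models))  # ['card', 'check']
```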
38,130
How to handle predictors that are highly correlated to the response
Imbalance By far the cleanest* way to (50%-50%) balance your training set is to use observation weights. Say 1% of cases is fraudulent and 99% is not; then give each non-fraudulent case weight 1 and each fraudulent case weight 99. This is equivalent to adding 98 copies of each fraudulent case back into the training set. For example, in R's rpart or lm use the argument weights; in randomForest set classwt = c(0.5, 0.5). In Python's sklearn.tree.DecisionTreeClassifier set class_weight = "balanced". Doing so will make the classifier "work harder" on the minority outcome class (fraud), as misclassifying it now carries a highly increased cost. *I write "cleanest" because (randomly) oversampling creates unnecessary sampling noise, as certain cases are sampled more or fewer than 99 times. Weighting, instead, treats all cases equally. Decision Trees Given that you are using a classification tree, you could force the node check == yes to split, so that you do not have to classify all cases that end up in that node as fraudulent (of course I'm speculating -- I don't know your data). Lastly, classification trees naturally output probabilities. Say a new case falls into an end node (a leaf) that consists of 72% fraudulent cases and 28% non-frauds; then the estimated probability that the new case is fraudulent equals 72%.
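A sketch of both points with scikit-learn (data invented for illustration):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 2))
# Imbalanced outcome: roughly 2% positives, more likely when X[:, 0] is large.
y = rng.random(n) < np.where(X[:, 0] > 1.5, 0.3, 0.005)

# class_weight="balanced" applies the weighting described above: each
# minority-class case counts as much as (n_majority / n_minority) copies.
clf = DecisionTreeClassifier(max_depth=3, class_weight="balanced",
                             random_state=0).fit(X, y)

# Trees naturally output probabilities: predict_proba returns the
# (weighted) class proportions of the leaf each case falls into.
proba = clf.predict_proba(X)[:, 1]
print(0.0 <= proba.min() and proba.max() <= 1.0)  # True
```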
38,131
If we disbelieve $H_0$, why quote a p value calculated assuming $H_0$ was true?
(Started as a comment, but it's much too long) Let's consider this a different way. A more general version of the question is --- can we use reasoning involving conditional probabilities when the thing we condition on is false? It's not simply permissible -- it's necessary. Consider this in the context of Bayes theorem: $$P(A_i|B) = \frac{P(B|A_i)\,P(A_i)}{\sum\limits_j P(B|A_j)\,P(A_j)}$$ Note that the $A_j$ are mutually exclusive (and exhaustive). All but one of the conditionals in the denominator must pertain to a condition that doesn't hold - but that doesn't imply that reasoning involving those conditional probabilities will be invalid -- Bayes' theorem is true as a result of us reasoning using conditionals that condition on events that we know don't hold. The conditional probability $P(B|A_j)$ is a perfectly valid conditional probability, whether or not $A_j$ actually obtains. It's perfectly okay to reason via conditional probabilities that relate to conditions that don't hold; the results are logically valid. [Indeed, I bet you do it constantly without any concern.] For example, if I say "Alison would have her umbrella if it were raining" and use this plus some data to support a conclusion: "She doesn't have her umbrella, so it's not raining", my conclusion doesn't become invalid because the conditional was untrue (The fact that "it's not raining" doesn't endanger the truth of the conditional that reasoning was based on: "if it were raining").
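The umbrella example can be put into numbers (all probabilities are invented for illustration). Note how the denominator of Bayes' theorem sums over both conditions, even though exactly one of "raining" / "not raining" is false:

```python
from fractions import Fraction

# "Alison would have her umbrella if it were raining" -- numbers invented:
p_rain = Fraction(3, 10)            # prior P(raining)
p_umb_given_rain = Fraction(9, 10)  # P(umbrella | raining)
p_umb_given_dry = Fraction(1, 10)   # P(umbrella | not raining)

# Bayes' theorem: both conditionals appear in the denominator, whether
# or not the condition they refer to actually holds.
p_no_umb = ((1 - p_umb_given_rain) * p_rain
            + (1 - p_umb_given_dry) * (1 - p_rain))
p_rain_given_no_umb = (1 - p_umb_given_rain) * p_rain / p_no_umb
print(p_rain_given_no_umb)  # 1/22
```

Seeing Alison without her umbrella drops the probability of rain from 3/10 to 1/22, exactly the "she doesn't have her umbrella, so it's (probably) not raining" reasoning.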
38,132
If we disbelieve $H_0$, why quote a p value calculated assuming $H_0$ was true?
You have a misunderstanding. The 'alternative' hypothesis ($H_1$) is simply the negation of the null hypothesis. When conducting, say, a power analysis, we will specify a specific sampling distribution around a point estimate (for example, a mean treatment effect) that we believe in, but rejecting the null does not make that point estimate true. Based on the logic of hypothesis testing, the alternative hypothesis is not that point estimate; it is just the negation of the null. There is no particular sampling distribution of a test statistic that is associated with the negation of the null. In addition, the meaning of the $p$-value is predicated on what may well be a counterfactual premise. The $p$-value is the probability of getting a test statistic as far away from your null point value for your parameter (or further) if that point value were true, whether it is actually true or not. Even if the null isn't true, it can be true that the test statistic would have the specified distribution under the null. You are striking on an important insight, though. Once you no longer believe the null obtains, it is no longer clear what meaning the $p$-value has to offer.
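For concreteness, here is that definition of the $p$-value applied to a z statistic (a standard normal null sampling distribution is assumed); the calculation uses only the null's distribution, regardless of whether the null actually holds:

```python
from statistics import NormalDist

# Two-sided p-value for a z statistic: the probability, computed entirely
# under the null's sampling distribution, of a statistic at least this far
# from the null value -- whether or not the null is actually true.
z = 1.96
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(round(p_value, 3))  # 0.05
```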
38,133
If we disbelieve $H_0$, why quote a p value calculated assuming $H_0$ was true?
The principle is like a "fuzzy" version of the contraposition principle (or reductio ad absurdum principle, I'm not sure). Consider that every dog has four legs. Then if you sample an animal with two legs, you are sure it is not a dog. Now only consider that every dog has a high probability of having four legs (in other words, a high majority of dogs have four legs). Then if you sample an animal with two legs, you conclude it is unlikely to be a dog. This is the principle of hypothesis testing (but in practice it requires a sensible choice of the event having high probability under $H_0$).
38,134
Is kNN best for classification?
There is no such thing as the best classifier; it always depends on the context and on what kind of data/problem is at hand. As you mention, kNN is slow when you have a lot of observations, since it does not generalize over the data in advance; it scans the historical database each time a prediction is needed. With kNN you also need to think carefully about the distance measure. For instance, if one feature is measured in 1000s of kilometers and another feature in 0.001 grams, the first feature will dominate the distance measure. You can normalize the features, or give them importance weights based on domain knowledge. Also, in a very high dimensional space the distance to all neighbors becomes more or less the same, and the notion of nearest and far neighbors becomes blurred.
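A quick numeric illustration of the scaling point (a Python sketch with made-up numbers): before normalization the kilometer feature decides which point is the nearest neighbor; after min-max normalization the ordering flips.

```python
from math import dist  # Euclidean distance (Python 3.8+)

# Feature 1 in 1000s of km, feature 2 in 0.001s of grams (made-up values).
a = (1000.0, 0.001)
b = (1001.0, 0.009)   # near in km, far in grams (relative to the gram range)
c = (1500.0, 0.001)   # far in km, identical grams

# Raw distances: the km feature dominates, so b looks closest to a.
raw_nearest = min(('b', dist(a, b)), ('c', dist(a, c)), key=lambda t: t[1])[0]

def min_max(points):
    """Rescale each feature to [0, 1] so no single unit dominates."""
    cols = list(zip(*points))
    lo = [min(col) for col in cols]
    rng = [max(col) - min(col) for col in cols]
    return [tuple((v - l) / r for v, l, r in zip(p, lo, rng)) for p in points]

na, nb, nc = min_max([a, b, c])
norm_nearest = min(('b', dist(na, nb)), ('c', dist(na, nc)), key=lambda t: t[1])[0]
# raw_nearest is 'b' but norm_nearest is 'c': normalization changed the answer.
```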
38,135
Is kNN best for classification?
What you're referring to is called Bias. Since kNN is not model based, it has low Bias, but that also means it can have high Variance. This is called the Bias-Variance tradeoff. Basically, there's no guarantee that just because it has low Bias it will have a good "testing performance". Quite the contrary, it could easily overfit the data and have very low testing performance. There's a really great book by Hastie, Tibshirani and Friedman called The Elements of Statistical Learning that briefly discusses the topic. It's (legally) available for free online here. On page 37 they discuss the Bias-Variance tradeoff in the context of kNN, so it should be particularly useful for you.
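A tiny hand-rolled sketch (Python, illustrative only, with made-up data) of how low bias fails to guarantee test performance: with k = 1 the training accuracy is trivially 100%, noise included, while a larger k lets the neighbors outvote a noisy label.

```python
def knn_predict(train, query, k):
    """Classify `query` by majority vote among the k nearest 1-D training points."""
    neighbors = sorted(train, key=lambda p: abs(p[0] - query))[:k]
    labels = [lab for _, lab in neighbors]
    return max(set(labels), key=labels.count)

# (0.5, 'b') is a deliberately mislabeled (noisy) point sitting among the 'a's.
train = [(0.0, 'a'), (1.0, 'a'), (2.0, 'b'), (3.0, 'b'), (0.5, 'b')]

train_acc_k1 = sum(knn_predict(train, x, 1) == y for x, y in train) / len(train)
pred_k1 = knn_predict(train, 0.5, 1)   # 'b': k=1 memorizes the noise
pred_k3 = knn_predict(train, 0.5, 3)   # 'a': the noisy label is outvoted
```

With k = 1 every training point is its own nearest neighbor, so train_acc_k1 is 1.0 regardless of how noisy the labels are; that low-bias fit says nothing about how the classifier generalizes.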
38,136
Is kNN best for classification?
Do you know $k$? If $k$ is unknown all bets are off. How do you define 'best'? In a statistical sense, best implies minimizing the risk with a squared error loss function. If this isn't the case (and even if it is), how are you going to compare methods? As addressed by inzl, there is no best classifier. If you know that your data take a spherical form, you might want to try a k-means based approach, and under that condition alone the k-means based approach would be more statistically efficient (not to mention more computationally efficient). It should also be noted that for large data sets kNN falls apart even for moderate dimensions, which is why we use approximate nearest neighbors (an active area of research).
38,137
Is kNN best for classification?
Given infinite data, k-NN is guaranteed to approach the Bayes error rate under ideal conditions. You probably don't have infinite data, and your k is probably not large enough (it has to approach infinity). In practice, there's no reason k-NN should be the best classifier given finite data!
38,138
Is kNN best for classification?
I would at least consider Naive Bayes along with kNN. You can do cross-validation with both kNN and Naive Bayes on your training data and select the best one.
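The suggestion can be sketched as a generic k-fold loop (Python, illustrative; `fit` and `accuracy` are placeholders for whichever classifier, kNN or Naive Bayes, is being compared):

```python
def k_fold_cv(data, k, fit, accuracy):
    """Average held-out accuracy over k folds; swap `fit` per classifier."""
    scores = []
    for i in range(k):
        test = data[i::k]                                   # every k-th item held out
        train = [d for j, d in enumerate(data) if j % k != i]
        scores.append(accuracy(fit(train), test))
    return sum(scores) / k

# A dummy "classifier" (predicts the majority training label) just to run the loop:
def fit(train):
    labels = [y for _, y in train]
    return max(set(labels), key=labels.count)

def accuracy(model, test):
    return sum(model == y for _, y in test) / len(test)

data = [(i, 'a' if i < 7 else 'b') for i in range(10)]
cv_score = k_fold_cv(data, 5, fit, accuracy)
```

The same loop run once with a kNN `fit` and once with a Naive Bayes `fit` gives comparable out-of-sample scores for model selection.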
38,139
A naive question about the Kolmogorov Smirnov test
You can standardize the exponential distribution easily enough multiplying variates by the rate parameter (it's a reciprocal scale parameter). But if you're estimating the rate parameter from the data, the Kolmogorov–Smirnov statistic doesn't have the same distribution as when the exponential distribution is completely specified. See Lilliefors (1969), "On the Kolmogorov–Smirnov test for the exponential distribution with mean unknown", JASA, 64, 325. And https://stats.stackexchange.com/a/392686/17230 for an intuitive explanation of the phenomenon in general. You can compare the observed value of the KS test statistic calculated from the data to the tabulated critical values given in the reference. Or simulate the distribution of the statistic yourself as @Glen_b & @soakley have suggested. Note that Lilliefors points out its distribution doesn't depend on the true values of the parameters—generally true for scale & location parameters—so for a given sample size you can do this once, simulating from the standard exponential distribution, & keep the results for future reference; you don't need to repeat the simulation for each new data-set of the same sample size. And there's therefore no approximation involved (except that coming from simulation error). The difference made to the distribution of the KS statistic $D$ by estimating rather than pre-specifying the parameters is not trivial: Lilliefors does give some asymptotic results (worked out rather crudely, but good enough for government work). Stephens has tabulated quantiles for the modified statistic $$T(n) = \left(D - \frac{0.2}{n}\right)\left(\sqrt{n} + 0.26 + \frac{0.5}{\sqrt{n}}\right)$$ where $D$ is the KS test statistic & $n$ the sample size. According to Durbin (1975), "Kolmogorov–Smirnov tests when parameters are estimated with applications to tests of exponentiality and tests on spacings", Biometrika, 62, 1, these are very close to the exact values for larger sample sizes. 
They can be found in Pearson & Hartley (1972), Biometrika Tables for Statisticians, CUP, or in Stephens (1974), "EDF Statistics for goodness of fit and some comparisons", JASA, 69, 347. I'm not aware of any published correction to the p-value of the ordinary KS test to approximate that of the Lilliefors test; a power-law relationship seems like it might be useful:
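Returning to Stephens' modified statistic $T(n)$ quoted above: once $D$ is in hand it is trivial to compute. A direct, illustrative Python transcription (the example value of $D$ is made up):

```python
from math import sqrt

def stephens_T(D, n):
    """Stephens' modified statistic: T(n) = (D - 0.2/n)(sqrt(n) + 0.26 + 0.5/sqrt(n))."""
    return (D - 0.2 / n) * (sqrt(n) + 0.26 + 0.5 / sqrt(n))

# e.g. an observed D of 0.30 from a sample of size 25:
t = stephens_T(0.30, 25)   # (0.30 - 0.008) * (5 + 0.26 + 0.1)
```

The value of `t` is then compared against the tabulated quantiles for the modified statistic rather than against the ordinary KS critical values.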
38,140
A naive question about the Kolmogorov Smirnov test
You don't need to normalize, but can get the p-value for a goodness-of-fit test by simulation. Here is some sample R code, taken from Greg Snow's answer to a similar question (KS test - R, Minitab (and Systat)): data <- c(7.2,10.5,10.67,0.15,3.92,3.28,0.89,2.29,13.82,0.43) simp <- replicate(100000, {x <- rexp(length(data),rate=1/mean(data)); ks.test(x,"pexp",rate=1/mean(x))$p.value} ) mean(simp <= ks.test(data,"pexp",1/mean(data))$p.value) The method is described by Clauset et al. in a SIAM paper, "Power-Law Distributions in Empirical Data."
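For readers working in Python rather than R, the same parametric-bootstrap idea can be sketched without scipy by computing the KS statistic against the fitted exponential by hand. This is a rough analogue, not a line-for-line port: here the simulated $D$ statistics are compared directly rather than via p-values.

```python
import random
from math import exp

def ks_stat_exp(x, rate):
    """KS distance between the empirical CDF of x and the Exp(rate) CDF."""
    xs, n = sorted(x), len(x)
    d = 0.0
    for i, v in enumerate(xs):
        F = 1.0 - exp(-rate * v)                      # fitted exponential CDF
        d = max(d, abs(F - i / n), abs(F - (i + 1) / n))
    return d

def lilliefors_p(data, nsim=2000, seed=1):
    """Monte-Carlo p-value, re-estimating the rate on every simulated sample."""
    rng, n = random.Random(seed), len(data)
    d_obs = ks_stat_exp(data, n / sum(data))          # rate = 1/mean(data)
    exceed = 0
    for _ in range(nsim):
        sim = [rng.expovariate(1.0) for _ in range(n)]
        if ks_stat_exp(sim, n / sum(sim)) >= d_obs:   # refit the rate each time
            exceed += 1
    return exceed / nsim

data = [7.2, 10.5, 10.67, 0.15, 3.92, 3.28, 0.89, 2.29, 13.82, 0.43]
p = lilliefors_p(data)
```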
38,141
A naive question about the Kolmogorov Smirnov test
No, you don't need to normalise your data, since the KS statistic is defined in terms of the raw data (actually in terms of the empirical distribution of these data): http://en.wikipedia.org/wiki/Kolmogorov%E2%80%93Smirnov_test#Kolmogorov.E2.80.93Smirnov_statistic I don't know Python, but in R you can conduct this test as follows: x = rexp(100,1) ks.test(x,"pexp",1) For this purpose, and by construction, you need to know the parameters of the distribution. You should not plug estimators into it; this breaks the convergence of the statistic and you have to use a different test (see the Wikipedia article). If you want to estimate the parameters and check whether the fitted model is good, then what you actually need is a goodness-of-fit test, for which you have a variety of options: http://en.wikipedia.org/wiki/Goodness_of_fit
38,142
If two time series $X$ and $Z$ follow $0 \leq Z \leq X$, can we say that $\text{var}(Z) \leq \text{var}(X)$?
Clearly not. An easy counterexample (here done in R), that I think satisfies all your constraints: set.seed(239843) x=rnorm(100,100,1) y=rep(c(0.01,0.99),times=50) z=x*y var(x) [1] 0.8413043 var(y) [1] 0.2425253 var(z) [1] 2425.296 What's going on: x is a series with mean 100 and sd 1. y alternates between 0.01 and 0.99. z=xy therefore alternates between (about) 1 and 99, but is always $<x$. Alternative [more general] question: assuming finite variances, is it true that for any random variables $a$ and $b$ such that $0 \leq a \leq b$, we have $\text{var}(a) \leq \text{var}(b)$? Even more clearly not; without the need for a "y"-like variable, it's pretty obvious: consider one set of values that alternates between 1 and 99, and a second one that alternates between 100 and 101. Adding in the new condition that X and Y have positive covariance: set.seed(239843) oldx=rnorm(100,100,1) y=rep(c(0.01,0.99),times=50) x = oldx + y # oldx and y are independent, so x and y now have +ve covariance z=x*y cov(x,y) [1] 0.2739745 # sample covariance happens to be positive in this case also var(x);var(y);var(z) [1] 1.065326 [1] 0.2425253 [1] 2481.243 If you work out the answers for this case algebraically (compute the population variances and relevant population covariance), you'll see this isn't just a numerical accident from a fortunate choice of seed.
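The same phenomenon can be shown with no randomness at all (a Python sketch of the counterexample): x hugs 100 while z = x*y swings between roughly 1 and 100, yet 0 <= z <= x holds pointwise.

```python
from statistics import pvariance

x = [99.0, 101.0] * 50                    # mean 100, population variance 1
y = [0.01, 0.99] * 50
z = [xi * yi for xi, yi in zip(x, y)]     # alternates ~0.99 and ~99.99

var_x = pvariance(x)                      # 1.0
var_z = pvariance(z)                      # ~2450, despite 0 <= z <= x everywhere
```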
38,143
If two time series $X$ and $Z$ follow $0 \leq Z \leq X$, can we say that $\text{var}(Z) \leq \text{var}(X)$?
I do not think Var$(Z)\le $Var$(X)$. Imagine that $X$ is a time series that meanders about values near 100, almost always between 98 and 102. Now imagine that $Z$ meanders between 0 and 100, but is always less than $X$. The variance of $Z$ is clearly going to be larger in such a case than the variance of $X$. This is an example where $X$ and $Z$ are stationary around some constants, but it could easily be extended to a trend stationary example... I am not sure if it would extend to integrated time series... need to think on that.
38,144
If two time series $X$ and $Z$ follow $0 \leq Z \leq X$, can we say that $\text{var}(Z) \leq \text{var}(X)$?
For the general case, the answer is no. For the specific cases, it is also no. A simple counterexample: take $y \sim U(0,1)$ and $x \sim \text{Gamma}(a, a)$ so that $E(x) = 1$ and $\operatorname{var}(x) = a^{-1}$. Take $x$ and $y$ as independent, and we have: $$\operatorname{var}(z) = E[\operatorname{var}(z|y)] + \operatorname{var}[E(z|y)] = E[y^2 a^{-1}] + \operatorname{var}[y] = \operatorname{var}(y) + E(y^2)a^{-1} = \frac{1}{12} + \frac{1}{3}\operatorname{var}(x) = \operatorname{var}(x)\,\frac{a+4}{12}$$ Now we just choose any value of $a$ such that $a > 8$ and we will have $\operatorname{var}(z) > \operatorname{var}(x)$.
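A quick numerical sanity check of the formula (Python, Monte Carlo, so only approximate): with a = 12 the ratio var(z)/var(x) should be near (a+4)/12 = 4/3, and in particular var(z) > var(x).

```python
import random

rng = random.Random(7)
a, n = 12.0, 100_000
xs = [rng.gammavariate(a, 1 / a) for _ in range(n)]   # Gamma(a, a): E(x)=1, var(x)=1/a
ys = [rng.random() for _ in range(n)]                 # U(0,1), independent of x
zs = [xi * yi for xi, yi in zip(xs, ys)]

def var(v):
    m = sum(v) / len(v)
    return sum((t - m) ** 2 for t in v) / len(v)

ratio = var(zs) / var(xs)   # should land close to (a + 4) / 12 = 1.333...
```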
38,145
If two time series $X$ and $Z$ follow $0 \leq Z \leq X$, can we say that $\text{var}(Z) \leq \text{var}(X)$?
Let us be clear that the "variance" under discussion appears to be a random variable derived from a finite portion of a time series. Specifically, the raw $k^\text{th}$ moment of $\mathrm{X} =(X_1, X_2, \ldots, X_N)$ is $$\mu_k(\mathrm{X}) = (X_1^k+X_2^k+\cdots+X_N^k)/N,$$ which is a random variable, and the variance is $$\text{var}(\mathrm{X}) = \mu_2(\mathrm{X}) - \mu_1^2(\mathrm{X}),$$ which also is a random variable. Similarly we may define moments $\mu_{jk}$ of the bivariate series $(X_i,Y_i)$ and from those compute a covariance. All these definitions make sense even when either series is constant (although then the moments and variance may reduce to numbers rather than random variables). To show that counterexamples exist even when $X$ and $Y$ have positive covariance, let the $Y_i$ be bounded by $0$ and $1$, let $\mathrm{Y}$ have nonzero variance, pick $0 \lt \varepsilon \lt 1$, and define $$X_i = 1 + \varepsilon Y_i \ge 0.$$ By construction there is perfect (unit) correlation between each $X_i$ and $Y_i$ as well as between $\mu_k(\mathrm{X})$ and $\mu_k(\mathrm{Y})$ for any $k\gt 0$; certainly the covariances are positive. Yet, since $Z_i=X_iY_i = Y_i + \varepsilon Y_i^2$, $$\text{Var}(\mathrm{Z}) = \text{Var}(\mathrm{Y}) + 2\varepsilon\mu_1(\mathrm{Y}^3) + \varepsilon^2 \mu_1(\mathrm{Y}^4) \gt \text{Var}(\mathrm{Y}) \gt \varepsilon^2 \text{Var}(\mathrm{Y}) = \text{Var}(\mathrm{X}),$$ disproving the conjecture in the question. The same analysis (coupled with the fact that $\mu_1(\mathrm{Y}^4)\lt \mu_1(\mathrm{Y}^2)$) demonstrates that for sufficiently large $\varepsilon\gt 1$, the inequality must be reversed. Thus there is no necessary inequality relating $\text{Var}(\mathrm{X})$ and $\text{Var}(\mathrm{Z})$.
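The construction above is easy to verify numerically; here is an illustrative Python check with $Y$ alternating between 0 and 1 and $\varepsilon = 1/2$, chosen so every quantity is exact:

```python
from statistics import pvariance

eps = 0.5
Y = [0.0, 1.0] * 10
X = [1 + eps * y for y in Y]           # X_i = 1 + eps*Y_i: values 1.0 and 1.5
Z = [x * y for x, y in zip(X, Y)]      # Z_i = Y_i + eps*Y_i^2: values 0.0 and 1.5

# Var(Y) = 1/4, Var(X) = eps^2 * Var(Y) = 1/16, Var(Z) = 9/16, so
# Var(Z) > Var(Y) > Var(X), exactly as the derivation claims for eps < 1.
```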
38,146
If two time series $X$ and $Z$ follow $0 \leq Z \leq X$, can we say that $\text{var}(Z) \leq \text{var}(X)$?
Assume that the processes $\{X\}$ and $\{Y\}$ are ergodic/stationary with finite moments, and independent. Then $\{XY\}$ is also ergodic and $$\operatorname{Var(XY)} = E(X^2Y^2) - [E(XY)]^2 = E(X^2)E(Y^2) - [E(X)]^2[E(Y)]^2$$ the break-up of expected values due to independence. You are asking $$E(X^2)E(Y^2) - [E(X)]^2[E(Y)]^2 \leq E(X^2) - [E(X)]^2\;\;??$$ $$\Rightarrow [E(X)]^2\cdot [1-[E(Y)]^2] \leq E(X^2)\cdot [1-E(Y^2)]\;\; ?? \qquad[1]$$ Since $0\leq Y \leq 1$ we have $$0\leq E(Y) \leq 1 \Rightarrow 0\leq [E(Y)]^2 \leq1,\;\; 0\leq E(Y^2) \leq1$$ and also $$E(Y^2) > [E(Y)]^2 \Rightarrow [1-[E(Y)]^2] > [1-E(Y^2)] \qquad[2]$$ Examining the desired inequality $[1]$ and the true inequality $[2]$ one sees that $[1]$ may or may not hold, since $[E(X)]^2 < E(X^2)$. I would say this is an instructive example of how things change when we move from a deterministic to a stochastic assumption - because if the $y_i$'s are designated as a deterministic sequence, then of course the variance of $X_iy_i$ is no greater than the variance of $X_i$.
38,147
Quadratic models with R. The use of poly(..) and I(..) functions (R-language)
You have the first model all sorts of wrong; that model var1 ~ var2 * var3 says the variance in var1 is explained by the main effects of factor var2 and continuous covariate var3 and their interaction. In other words, the model is one where each level of var2 has a separate intercept and slope for the regression lines fitted. There are no polynomials here. The second model is wrong also, but what you actually want is unclear from the description given. That model var1 ~ poly(var2,1) * poly(var3,1) where you cast poly(var2,1) as a factor is effectively the same as the first, just with extra effort. poly() generates orthogonal (by default) polynomials of its first argument of degree specified by the second argument. Hence the first order polynomial of 1:10 is > poly(1:10, 1) 1 [1,] -0.49543369 [2,] -0.38533732 [3,] -0.27524094 [4,] -0.16514456 [5,] -0.05504819 [6,] 0.05504819 [7,] 0.16514456 [8,] 0.27524094 [9,] 0.38533732 [10,] 0.49543369 attr(,"degree") [1] 1 attr(,"coefs") attr(,"coefs")$alpha [1] 5.5 attr(,"coefs")$norm2 [1] 1.0 10.0 82.5 attr(,"class") [1] "poly" "matrix" The second orthogonal polynomial of the vector 1:10 is essentially 1:10 and (1:10) * (1:10) but done in a way as to make the two new vectors orthogonal (or uncorrelated) > poly(1:10, 2) 1 2 [1,] -0.49543369 0.52223297 [2,] -0.38533732 0.17407766 [3,] -0.27524094 -0.08703883 [4,] -0.16514456 -0.26111648 [5,] -0.05504819 -0.34815531 [6,] 0.05504819 -0.34815531 [7,] 0.16514456 -0.26111648 [8,] 0.27524094 -0.08703883 [9,] 0.38533732 0.17407766 [10,] 0.49543369 0.52223297 attr(,"degree") [1] 1 2 attr(,"coefs") attr(,"coefs")$alpha [1] 5.5 5.5 attr(,"coefs")$norm2 [1] 1.0 10.0 82.5 528.0 attr(,"class") [1] "poly" "matrix" I'm not clear what you want, but if you want to explore models for different polynomials of var2 and var3 then just use var1 ~ poly(var2, 2) + poly(var3, 2) for main effects of quadratic polynomials of var2 and var3. 
Or more complex var1 ~ poly(var2, 2) * poly(var3, 3) which is the main effects of a quadratic in var2 and a cubic in var3, plus their interaction. I( ) isolates or insulates the contents in the parentheses from R's formula parsing code. For example, you might commonly see var1 ~ var2 + var2^2 + var3 + var3^2 which is the same as var1 ~ poly(var2, 2) + poly(var3, 2) (except the polynomials are not orthogonal), or it should be. Unfortunately, ^ in a formula means ordered terms, i.e. itself plus its interaction, because ^ has special meaning. To stop R interpreting ^ incorrectly, you wrap those terms in I( ). i.e. var1 ~ var2 + I(var2^2) + var3 + I(var3^2) However, do note that var2 and I(var2^2) will be correlated (likewise for var3 and I(var3^2)) and correlated variables in a model can cause issues. Hence the use of poly() which produces orthogonal polynomials, as discussed above. Note also that poly() can give you the usual raw polynomials via use of raw = TRUE. Hence this might be more what you were expecting for the quadratic of the vector 1:10 > poly(1:10, 2, raw = TRUE) 1 2 [1,] 1 1 [2,] 2 4 [3,] 3 9 [4,] 4 16 [5,] 5 25 [6,] 6 36 [7,] 7 49 [8,] 8 64 [9,] 9 81 [10,] 10 100 attr(,"degree") [1] 1 2 attr(,"class") [1] "poly" "matrix" But poly(1:10, 2) would be better in a model.
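What poly() does under the hood can be sketched in Python (an illustrative translation, not R's actual implementation): orthogonal polynomial columns can be obtained from a QR decomposition of the centered raw-polynomial matrix, so the degree-1 and degree-2 columns are uncorrelated by construction.

```python
import numpy as np

x = np.arange(1, 11, dtype=float)

# Raw polynomial basis: columns x and x^2 (like poly(x, 2, raw = TRUE)).
raw = np.column_stack([x, x**2])

# Orthogonal version: QR-decompose the centered basis (roughly what
# R's poly(x, 2) does), so the columns have zero dot product.
centered = raw - raw.mean(axis=0)
q, _ = np.linalg.qr(centered)

print(np.corrcoef(raw[:, 0], raw[:, 1])[0, 1])  # raw columns are highly correlated
print(abs(q[:, 0] @ q[:, 1]))                   # orthogonal columns: ~0
```

Up to sign, the first column of q reproduces the poly(1:10, 1) values printed above (e.g. -0.49543369 for x = 1).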
38,148
Why does the bootstrapped correlation revolve around zero while the original correlation $\approx 0.52$?
There is an obvious reason for that: you are sampling from each series separately, thus destroying any correlation between them. You probably want to sample pairs, not observations within each series, e.g.

index <- sample(132, 132, replace = TRUE)
euro.nzd.corr[i] <- cor(euro[index], nzd[index])

Fixing your code should allow you to recover a distribution centred on .5, but you might want to look up some literature before relying on these inferences, as there are some niceties about bootstrapping correlations. As @NickCox pointed out, the fact that both sets of observations are time series also creates further difficulties. You should be able to find a lot of material on all that.
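The same pairwise-resampling idea can be sketched in Python. This is illustrative only: the variable names and the sample size 132 mirror the question, but the "series" here are simulated correlated data, not the actual exchange rates.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate two correlated series standing in for euro and nzd.
n = 132
common = rng.normal(size=n)
euro = common + rng.normal(size=n)
nzd = common + rng.normal(size=n)
r_obs = np.corrcoef(euro, nzd)[0, 1]

# Resample PAIRS: one index vector applied to both series,
# preserving the within-pair association.
boot = np.empty(2000)
for i in range(2000):
    idx = rng.integers(0, n, size=n)
    boot[i] = np.corrcoef(euro[idx], nzd[idx])[0, 1]

print(r_obs, boot.mean())  # the bootstrap distribution centres near r_obs, not 0
```

Resampling each series with its own index vector (the bug in the question) would instead produce a distribution centred at zero.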
38,149
Standard errors for covariance estimate in R
In response to whuber's follow-up, I would advocate an all-purpose black-box approach: the non-parametric bootstrap. The basic pseudocode is:

1. Jointly resample from the observed rows of data, allowing replications and holding the sample size fixed.
2. Re-estimate the covariance in the resampled data.
3. Repeat 1-2 for a sufficient number of iterations.
4. Use the simulated values to compute variance estimates, or the empirical 0.025 and 0.975 quantiles to form confidence intervals.

An example:

set.seed(1)
x <- seq(-3, 3, length.out = 100)
do.one <- function(x) {
  y <- rnorm(100, x)
  d <- data.frame(x, y)
  ## bootstrap
  bs.out <- replicate(1000, {
    dd <- d[sample(1:100, replace = TRUE), ]
    cov(dd)[1, 2]
  })
  bs.lower <- quantile(bs.out, 0.025)
  bs.upper <- quantile(bs.out, 0.975)
  ## in the absence of random error, y = x, so cov(x, y) = var(x)
  (bs.lower < var(x)) & (bs.upper > var(x))
}
o <- replicate(1000, do.one(x))
mean(o)  ## should be 95% if the bootstrap gives correct CIs

Feel free to try this simulation with any random or non-random distribution of $X$ and functional form of the mean model. I am unsure (though cautiously optimistic) that CIs based on bootstrapped covariance estimates give correct 95% coverage.
38,150
Standard errors for covariance estimate in R
This is not an answer to the original question, but to your request to AdamO (as far as I'm concerned, he's covered the original question). I'd make it a comment, but I think it's too long.

Would you be able to derive a closed form solution assuming the variables are normal for example?

See http://en.wikipedia.org/wiki/Estimation_of_covariance_matrices#Concluding_steps and http://en.wikipedia.org/wiki/Wishart_distribution

The second link gives the variance of the $(i,j)$ element of the distribution of the scatter matrix for multivariate normal random variables. From there you can get the variance of the sample covariance and hence the standard error. Specifically,

$$\sum_{i=1}^{n}(X_i-\overline{X})(X_i-\overline{X})^{\mathrm{T}} \sim W_p(\Sigma, n-1)$$

implies

$$\text{Var}\left(\sum_{i=1}^{n}(X_i-\overline{X})(Y_i-\overline{Y})\right) = (n-1)(\Sigma_{XY}^2+\Sigma_{XX}\Sigma_{YY}),$$

or

$$\text{Var}\left(\frac{1}{n-1}\sum_{i=1}^{n}(X_i-\overline{X})(Y_i-\overline{Y})\right) = (n-1)^{-1}(\Sigma_{XY}^2+\Sigma_{XX}\Sigma_{YY}).$$

Or, for a more general result: if $S_{XY}=\frac{1}{n}\sum_{i=1}^{n}(X_i-\overline{X})(Y_i-\overline{Y})$, then these notes by Thomas S. Richardson give

$$\text{Var}(S_{XY})=\frac{(n-1)^2}{n^3}(\mu_{22}-\mu_{11}^2)+ \frac{n-1}{n^3}(\mu_{11}^2 + \mu_{20}\mu_{02})$$

(where $\mu_{rs}=E[(X-\mu_{_X})^r\,(Y-\mu_{_Y})^s]$); however, wolfies notes in his answer here that this is incorrect. If I haven't made an error, his result corresponds to a flip of sign on the second $\mu_{11}$ term:

$$\text{Var}(S_{XY})=\frac{(n-1)^2}{n^3}(\mu_{22}-\mu_{11}^2)+ \frac{n-1}{n^3}(\mu_{20}\mu_{02}-\mu_{11}^2)$$

Note that converting this to the $\frac{1}{n-1}$ version is a simple matter of multiplying the above result by $\left(\frac{n}{n-1}\right)^2$. IIRC, there are more details in vol. I of Kendall and Stuart.
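The Wishart-based result above is easy to check by Monte Carlo; a sketch in Python (illustrative: the bivariate-normal parameters and sample size are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Population covariance matrix for (X, Y).
sigma = np.array([[1.0, 0.5],
                  [0.5, 2.0]])
n, reps = 10, 50_000

# Wishart-based variance of the 1/(n-1) sample covariance under
# normality: (Sigma_XY^2 + Sigma_XX * Sigma_YY) / (n - 1)
theory = (sigma[0, 1] ** 2 + sigma[0, 0] * sigma[1, 1]) / (n - 1)

draws = rng.multivariate_normal([0.0, 0.0], sigma, size=(reps, n))
centered = draws - draws.mean(axis=1, keepdims=True)
s_xy = (centered[:, :, 0] * centered[:, :, 1]).sum(axis=1) / (n - 1)

print(theory, s_xy.var())  # the two should agree closely
```

With these parameters the theoretical value is (0.25 + 2)/9, and the empirical variance of the 50,000 simulated sample covariances lands very close to it.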
38,151
Standard errors for covariance estimate in R
The OP's question does not define what formula R uses as the sample covariance estimator. However, following the link in Glen's answer, I assume R is using

$$m_{11} = \frac{1}{n} \sum_{i=1}^n \left(X_i-\bar{X}\right)\left(Y_i-\bar{Y}\right),$$

also known as the $m_{11}$ sample central moment, which can be expressed in power-sum notation $s_{r,t}=\sum_{i=1}^n X_i^r Y_i^t$ (using mathStatica here) as:

[mathStatica output shown as an image in the original]

which is a familiar alternative notation. We seek the variance of the estimator, i.e. $\text{Var}(m_{11})$. Since the variance operator denotes the $2^\text{nd}$ central moment of $m_{11}$, we can find the exact symbolic solution (for any distribution whose moments exist) with the mathStatica function:

[mathStatica output shown as an image in the original]

I would note that the solution so obtained is different to that referenced in the link given by Glen above to a paper. Perhaps they are computing something else?! There is now a long list of published 'moment of moments' papers that have been shown to contain incorrect results by mathStatica, including some results by Fisher himself, and some of the results in Stuart and Ord; see for instance Spot The Error.

There is an alternative definition of sample covariance using $\frac{1}{n-1}$, but that is not the one used in the paper referenced by Glen either.
38,152
Standard errors for covariance estimate in R
There is some confusion in the discussion above. Simple algebra shows that the expression ascribed to Richardson,

$$\text{Var}(S_{XY})=\frac{(n-1)^2}{n^3}(\mu_{22}-\mu^2_{11})+\frac{n-1}{n^3}(\mu^2_{11}+\mu_{20}\mu_{02}),$$

is identical to that obtained by wolfies using mathStatica. Both expressions clearly agree on the coefficients of $\mu_{22}$ and $\mu_{20}\mu_{02}$. For $\mu_{11}^2$, collecting terms in Richardson's expression gives:

$[-(n-1)^2 + (n-1)]/n^3$
$= [(1-n)(n-1) + (n-1)]/n^3$
$= [(1-n) + 1](n-1)/n^3$
$= (2-n)(n-1)/n^3$
$= -(n-2)(n-1)/n^3$
$= -(-2+n)(-1+n)/n^3,$

which is the coefficient for $\mu_{11}^2$ obtained by mathStatica. [The expression provided by Glen as a "correction" to mathStatica's is not equivalent, as can be seen by substituting $n=2$ and comparing coefficients for $\mu_{11}^2$.]

The correct expression may also be derived in a few steps from results in Kendall's Advanced Theory of Statistics, Kendall & Stuart (1987), Fifth Edition, p. 441, Example 13.3, where it is stated that:

$$\text{Var}(k_{11}) = \frac{1}{n}\kappa_{22} + \frac{1}{n-1}\kappa_{20}\kappa_{02} +\frac{1}{n-1}\kappa^2_{11}.$$

Simple algebra shows this is equivalent to the above expressions, noting that $k_{11}$ in K&S is the $k$-statistic, which is the unbiased estimator of the population covariance (a.k.a. the $(1,1)$ product cumulant $\kappa_{11}$), so $k_{11} = \frac{n}{n-1} S_{XY}$. It is also necessary to note the following relations between cumulants and moments: $\mu_{11}=\kappa_{11}$, $\mu_{20}=\kappa_{20}$, $\mu_{02}=\kappa_{02}$ and $\mu_{22} = \kappa_{22} + \kappa_{20}\kappa_{02} + 2\kappa_{11}^2$. See K&S p. 105, p. 102 following (3.69), and p. 87.

[Lastly, Goldberger (1991), A Course in Econometrics, p. 108 gives an expression for $V(S_{XY})$ that is incorrect. Specifically, it contains a term $2(n-1)(\mu_{20}\mu_{02})/n^3$ that should instead be $(n-1)(\mu_{11}^2+ \mu_{20}\mu_{02})/n^3$.]
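The collecting-terms step can be verified mechanically; a trivial numeric check in Python over a range of $n$:

```python
# Coefficient of mu_11^2 in Richardson's expression,
# -(n-1)^2/n^3 + (n-1)/n^3, versus the factored form
# -(n-2)(n-1)/n^3 derived above.
for n in range(2, 50):
    richardson = (-(n - 1) ** 2 + (n - 1)) / n ** 3
    factored = -(n - 2) * (n - 1) / n ** 3
    assert abs(richardson - factored) < 1e-12

print("coefficients agree for n = 2..49")
```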
38,153
Which performance measure to use when using SVM: MSE or MAE?
Actually, looking at both MAE and RMSE gives you additional information about the distribution of the errors:

$\mathrm{MAE} \leq \mathrm{RMSE} \leq \sqrt{n}\cdot\mathrm{MAE}$ (for regression on $n$ observations)

If $\mathrm{RMSE}$ is close to $\mathrm{MAE}$, the model makes many errors of similar, relatively small magnitude; if $\mathrm{RMSE}$ is close to $\sqrt{n}\cdot\mathrm{MAE}$, the total error is concentrated in a few large errors.
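These bounds can be checked numerically; a sketch in Python (the heavy-tailed error sample is purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
err = rng.standard_t(df=3, size=500)  # heavy-tailed errors

mae = np.mean(np.abs(err))
rmse = np.sqrt(np.mean(err ** 2))
upper = np.sqrt(len(err)) * mae

# MAE <= RMSE always (Jensen's inequality); RMSE <= sqrt(n) * MAE,
# with equality approached when all the error mass sits on a
# single observation.
print(mae, rmse, upper)
```

For a heavy-tailed sample like this one, RMSE sits noticeably above MAE, reflecting the occasional large errors.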
38,154
Which performance measure to use when using SVM: MSE or MAE?
The choice of performance metric depends on what is important for the application you are interested in. The MSE is a good performance metric for many applications, as there is often good reason to suppose the noise process is Gaussian. Sometimes it is better to use the MAE if you don't want your performance metric to be overly sensitive to outliers. Essentially there is no single correct performance metric without knowing more about the nature of the application.

On a different note, I am not overly keen on support vector regression: quite often there is knowledge about the distribution of the noise in the response variable, and we are likely to get a better model of the data if we build that expert knowledge into our model. That is why we have GLMs rather than just using least-squares regression for everything. The loss function used in SVM regression does not have a very clear statistical interpretation of this nature. It is also based on a sort of worst-case bound on the error, so if you use a performance metric that is essentially an average-case statistic, that suggests you should instead use a model based on average-case performance rather than worst case (e.g. a GLM).
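The contrast between the squared-error loss behind MSE and the $\varepsilon$-insensitive loss used by SVM regression can be sketched in Python (the value $\varepsilon = 0.1$ is an arbitrary illustration):

```python
import numpy as np

def squared_loss(r):
    return r ** 2

def epsilon_insensitive(r, eps=0.1):
    # Zero inside the epsilon tube, linear outside: small residuals
    # are ignored entirely, and large ones are penalised less
    # harshly than under squared loss.
    return np.maximum(np.abs(r) - eps, 0.0)

residuals = np.array([-2.0, -0.05, 0.0, 0.05, 2.0])
print(squared_loss(residuals))
print(epsilon_insensitive(residuals))
```

Note how the two losses rank the same residuals differently, which is why the "right" evaluation metric should match the loss the application actually cares about.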
38,155
Which performance measure to use when using SVM: MSE or MAE?
MAE is more intuitive than MSE to simply evaluate the overall error. MSE is easier to handle mathematically for variance analysis. For example, MSE is used to calculate the error variance $s_e^2$, which is a recurring value in regression statistics.
38,156
Whether to use EFA or PCA to assess dimensionality of a set of Likert items
EFA versus PCA

In a previous question on the differences between EFA and PCA, I state:

Principal components analysis involves extracting linear composites of observed variables. Factor analysis is based on a formal model predicting observed variables from theoretical latent factors.

I find that typically, within the context of developing psychological scales, factor analysis is more theoretically appropriate: latent factors are often assumed to cause the observed variables.

Assessing Scale Dimensionality

Determining the dimensionality underlying a set of Likert items is not just a question of EFA versus PCA. There are multiple techniques. William Revelle has some software in R for implementing several of them (see this discussion).

In general there is rarely a definitive answer as to how many factors are required to model a set of items. If you extract more factors, you can explain more variance in the items. Of course, just by chance you might explain some variance, so some approaches try to rule out chance (e.g., the parallel test). However, even with very large samples, where chance becomes less of an explanation, I'd expect to see systematic but small increases in variance explained by extracting more factors. Thus, you are left with the issue of how much variance must be explained by the first factor relative to the others in order to conclude that the scale is sufficiently unidimensional for your purpose. Such issues are closely tied to the application and to broader issues of validity.

You might find the following article useful for a broader discussion of definitions of, and approaches to quantifying, unidimensionality:

Hattie, J. (1985). Methodology review: Assessing unidimensionality of tests and items. Applied Psychological Measurement, 9(2), 139.

Here's a web presentation examining a few different decision rules for defining unidimensionality.
38,157
Whether to use EFA or PCA to assess dimensionality of a set of Likert items
Firstly, neither PCA nor EFA will give you an estimate of the dimension of the scale; they are both essentially data-reduction techniques. That being said, EFA is probably better for this purpose, as it tells you how much of the variance in each question is accounted for in the model (the communality).

To estimate dimension, you need to use some other technique. The best ones tend to be parallel analysis, the minimum average partial criterion, and examination of the scree plot. The eigenvalues-greater-than-one rule does not tend to perform well in this situation.

If you have a large amount of data, I would suggest that you take two-thirds of it and build your models, then fit the models you have developed to the last third of your data. This will reduce the chance of over-fitting your data (i.e., modeling noise). This is a form of cross-validation, and it is extremely important when using techniques such as factor analysis and principal components analysis, because there are many subjective decisions (number of factors, rotations, etc.) to be made as part of the process.
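The two-thirds/one-third split described above can be sketched in a few lines. This is a minimal illustration in Python, assuming the data are simply rows of item responses; the function and variable names are hypothetical, not from any particular package:

```python
import random

def holdout_split(rows, seed=42):
    """Randomly partition rows: two-thirds for building the factor
    models, one-third held out to check the chosen solution."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)   # reproducible shuffle
    cut = len(rows) * 2 // 3           # integer two-thirds point
    train = [rows[i] for i in idx[:cut]]
    held_out = [rows[i] for i in idx[cut:]]
    return train, held_out

# toy data: 9 respondents x 6 Likert items (values 1..5)
data = [[random.randint(1, 5) for _ in range(6)] for _ in range(9)]
train, held_out = holdout_split(data)
print(len(train), len(held_out))  # 6 3
```

The factor-analytic decisions (how many factors, which rotation) would then be made on `train` only, and the resulting solution fitted to `held_out` to see whether it still holds up.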
Whether to use EFA or PCA to assess dimensionality of a set of Likert items
Two things not mentioned so far:

One: With only 6 items, you are going to have a hard time finding a lot of dimensions.

Two: If you do EFA, rather than look at scree plots or eigenvalues or some other numeric test, examine several solutions and see which makes sense. Ideally, you'll be able to follow @richiemorrisroe and have a training and a test sample, especially with so few items.
Interpreting two-sided, two-sample, Welch T-Test
(1a) You don't need the Welch test to cope with different sample sizes. That's automatically handled by the Student t-test.

(1b) If you think there's a real chance the variances in the two populations are strongly different, then you are assuming a priori that the two populations differ. It might not be a difference of location--that's what a t-test evaluates--but it's still an important difference nonetheless. Don't paper it over by adopting a test that ignores this difference! (Differences in variance often arise where one sample is "contaminated" with a few extreme results, simultaneously shifting the location and increasing the variance. Because of the large variance it can be difficult to detect the shift in location (no matter how great it is) in a small to medium size sample, because the increase in variance is roughly proportional to the squared change in location. This form of "contamination" occurs, for instance, when only a fraction of an experimental group responds to the treatment.) Therefore you should consider a more appropriate test, such as a slippage test. Even better would be a less automated graphical approach using exploratory data analysis techniques.

(2) Use a two-sided test when a change of average in either direction (greater or lesser) is possible. Otherwise, when you are testing only for an increase or decrease in average, use a one-sided test.

(3) Rounding would be incorrect and you shouldn't have to do it: most algorithms for computing t distributions don't care whether the DoF is an integer. Rounding is not a big deal, but if you're using a t-test in the first place, you're concerned about small sample sizes (for otherwise the simpler z-test will work fine), and even small changes in DoF can matter a little.
Interpreting two-sided, two-sample, Welch T-Test
Dividing by 2 is for p-values; if you compare critical values, the division by 2 is not necessary. The function getCriticalValue should be the quantile function of Student's t distribution, so it should take two arguments: the probability and the degrees of freedom. If you want a two-sided test, as your code indicates, then you need the 0.975 quantile. As for the rounding: since the degrees of freedom are positive, Math.round looks fine.
Interpreting two-sided, two-sample, Welch T-Test
It's not absolutely necessary to round the degrees of freedom to an integer. Student's t-distribution can be defined for all positive real values of this parameter. Restricting it to a positive integer may make the critical value easier to calculate though, depending on how you're doing that. And it will make very little difference in practice with any reasonable sample sizes.
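To make the fractional degrees of freedom concrete, here is a minimal Python sketch of the Welch statistic together with the Welch–Satterthwaite degrees of freedom (the function name is my own, not from any library):

```python
from statistics import mean, variance
from math import sqrt

def welch(a, b):
    """Return (t statistic, degrees of freedom) for Welch's t-test.
    The df formula usually yields a non-integer value, which is fine:
    Student's t distribution is defined for any positive real df."""
    va, vb = variance(a) / len(a), variance(b) / len(b)  # the s^2/n terms
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (len(a) - 1) + vb ** 2 / (len(b) - 1))
    return t, df

t, df = welch([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])
print(round(t, 3), round(df, 3))  # -1.897 5.882
```

Note that df comes out as about 5.88 here, between the classical bounds of min(n1, n2) - 1 and n1 + n2 - 2, and is genuinely non-integer.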
Interpreting two-sided, two-sample, Welch T-Test
I'm working with the OP on the benchmarking project and wanted to thank you all for clearing some things up. I also wanted to provide a bit more information in case it affects the advice. The sample size ranges from 5 to 700+ (as many runs as can be completed in 8 seconds, or until the margin of error is at or below 1%). The critical values are pulled from an object for simplicity (because other calculations determine the degrees of freedom as sample size minus 1).

/**
 * T-Distribution two-tailed critical values for 95% confidence
 * http://www.itl.nist.gov/div898/handbook/eda/section3/eda3672.htm
 */
T_DISTRIBUTION = {
  '1': 12.706,
  '2': 4.303,
  '3': 3.182,
  '4': 2.776
  /* , ... */
};

Update

I checked, and the difference between variances seems rather high. Variances:

4,474,400,141.236059
3,032,977,106.8208385
226,854,226,665.14194
24,612,581.169126578

We are testing the operations per second of various code snippets (some are slower, with lower ops/sec; others faster, with higher ops/sec). We also used to simply compare the overlap between each mean ± its margin of error, but it was suggested that a t-test is better because it can find results with statistical significance.
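One practical wrinkle with such a lookup table is what to do when the degrees of freedom exceed the stored entries. A common fallback, sketched here in Python, is to return the large-sample normal value 1.96, since t converges to z as df grows; this fallback is my suggestion, not part of the project's code:

```python
# Two-tailed 95% critical values of Student's t, keyed by df
# (first few entries from the NIST table linked above).
T_DISTRIBUTION = {1: 12.706, 2: 4.303, 3: 3.182, 4: 2.776}

def critical_value(df, table=T_DISTRIBUTION):
    """Look up the tabulated critical value; beyond the table, fall
    back to the large-sample normal approximation (1.96)."""
    return table.get(df, 1.96)

print(critical_value(1))    # 12.706
print(critical_value(500))  # 1.96
```

With sample sizes up to 700+, most comparisons would hit the fallback, where the z approximation is essentially exact.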
SAS/IML compared to R
Full disclosure: I work at SAS. The IML blog is http://blogs.sas.com/iml.

Both languages are matrix-vector languages with a rich run-time library and the ability to write your own functions. For data analysis tasks and matrix computations, they both provide the necessary tools to help you analyze your data.

The SAS/IML syntax is very similar to the SAS DATA step, so it appeals to SAS programmers. You can also call all of the SAS DATA step functions, and you can call any SAS procedure from within SAS/IML by using the SUBMIT/ENDSUBMIT statements. The SAS/IML Studio application is very nice for developing programs and for creating graphics.

The R community creates and shares a large number of packages, including packages written by top academic researchers. New statistical methods appear in R very quickly, and the R community has many help and discussion lists.

The SAS/IML language does not contain every statistical analysis (as a built-in function) because the assumption is that you will call SAS/STAT or SAS/ETS procedures when you need a specialized analysis. For example, SAS/IML does not have functions for mixed modelling, but you can prepare the data in SAS/IML, call the MIXED or GLIMMIX procedure, and then use IML some more to manipulate or modify the output from the procedure. In chapters 11 and 16 of my book, I show how to call R from SAS/IML, transfer data back and forth, and generally how to get the best of both worlds.
SAS/IML compared to R
You might want to pick up (or look at) a copy of Rick Wicklin's book: Statistical Programming with SAS IML software https://support.sas.com/content/dam/SAS/support/en/books/statistical-programming-with-sas-iml-software/63119_excerpt.pdf He also has a blog about IML. And, on SAS' site, there is a section about IML: http://support.sas.com/forums/forum.jspa?forumID=47 And you will want IMLStudio, which offers a multiple window view that is much easier to integrate with Base SAS than the old IML was. I have used Base SAS and SAS Stat a lot. I've only barely looked at IML. But, from what I've seen, your knowledge of R should help.
SAS/IML compared to R
I have never used it, but I know for new versions of IML, you can call R routines. Maybe start by looking at http://support.sas.com/rnd/app/studio/statr.pdf.
Binomial test for a binary variable
You cannot determine this through a statistical test, for a trivial reason and a profound reason.

The trivial reason is that your data consist of $k$ ones and $n-k$ zeros with $n$ about 100k. These data conform extremely closely to a Bernoulli($k/n$) distribution. No testing is necessary.

The profound reason is that you are implicitly assuming the data are independently random--but they might not be. If, for instance, they are collected by sampling a process over time, then you might be seeing long strings of $0$ followed by long strings of $1$. Modeling these as draws from a Bernoulli distribution would likely be a poor choice. Another possibility is that the values are independent but the probability of a $1$ is varying over time. (This would be an "overdispersed" Binomial model.)

No transformation of $ 0, 1 $ will produce a normal distribution! Perhaps what you are hoping is that some statistic, such as the sample mean, is normally distributed. The Central Limit Theorem guarantees that, provided the values are independent and that the probabilities are not tending over time to either $0$ or $1$.
Binomial test for a binary variable
I completely agree with @whuber -- I just wanted to add: if you were to try to transform the data, how would you go about doing so? You would map 0 to some number, say -5, and 1 to some other number, say 5? So now instead of having:

0 0 0 1 0 1 1 0 1 0 1

you have:

-5 -5 -5 5 -5 5 5 -5 5 -5 5

This cannot possibly be normally distributed, because you still only have two values! Each of these entries could, however, be Binomial(1, p), just as @whuber described [the same as Bernoulli(p)], but not Binomial(N, p), because N is never greater than 1 if you only have binary data.
Binomial test for a binary variable
ALL binary variables have the binomial distribution, provided that the probability of success (the probability of observing 1) does not change and that all their instances are independent. A rule of thumb says that the binomial distribution can be fairly well approximated by the normal distribution when n*p > 30, with n = number of instances and p = probability of success.

So, I argue that your question is really about testing for independence and for a constant success rate. For the former, you can use the Bradley run test (http://www.itl.nist.gov/div898/handbook/eda/section3/eda35d.htm), also known as the runs test. For the latter, I have only a rough answer: you can split your sample into k subgroups and then build a test using the k proportions of success in the subgroups.
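The runs test mentioned above is easy to sketch directly. The following minimal Python implementation (not tied to any library) computes the normal-approximation z statistic for the number of runs in a 0/1 sequence:

```python
from math import sqrt
from itertools import groupby

def runs_test_z(bits):
    """Wald-Wolfowitz runs test: compare the observed number of runs
    (maximal blocks of equal values) with its expectation under
    independence. Large |z| suggests the values are not independent."""
    n1, n2 = bits.count(0), bits.count(1)
    n = n1 + n2
    runs = sum(1 for _ in groupby(bits))
    exp_runs = 2 * n1 * n2 / n + 1
    var_runs = 2 * n1 * n2 * (2 * n1 * n2 - n) / (n ** 2 * (n - 1))
    return (runs - exp_runs) / sqrt(var_runs)

alternating = [0, 1] * 10          # too many runs -> large positive z
clumped = [0] * 10 + [1] * 10      # too few runs -> large negative z
print(round(runs_test_z(alternating), 2), round(runs_test_z(clumped), 2))
# 4.14 -4.14
```

Both extreme patterns are flagged: strict alternation and complete clumping each give |z| far beyond 1.96, while a genuinely independent sequence would typically land near 0.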
Conditional Expectation as a function of X
Without getting very much into measure theory, consider the random vector $(X,Y)$ with density $f_{X,Y}(\cdot,\cdot)$ (wrt a dominating measure $\text d\mu(x,y)$) decomposed as
$$f_{X,Y}(x,y)=f_Y(y)\times f_{X|Y}(x|y)$$
where

$f_Y(\cdot)$ is a probability density (wrt the appropriate dominating measure $\text d\mu_2(y)$) attached with the random variable $Y$;

$f_{X|Y}(\cdot|y)$ is a probability density (wrt the appropriate dominating measure $\text d\mu_1(x)$) for (almost) every $y\in\mathcal Y$, attached with a random variable $Z_y$. In this notation, $y$ is a parameter of the density.

For a fixed value of $y\in\mathcal Y$, $f_{X|Y}(\cdot|y)$ can thus be understood as a regular density over the set $\mathcal X$, and
$$f_{X|Y}(\cdot|y):\ x\longmapsto f_{X|Y}(x|y)$$
is a non-negative integrable (measurable) function on $\mathcal X$ such that
$$\int_\mathcal Xf_{X|Y}(x|y)\text d\mu_1(x)=1$$

This means that, for a fixed value of $y\in\mathcal Y$, the expectation of $Z_y\sim f_{X|Y}(\cdot|y)$ can be considered and, provided it exists for this specific value of $y\in\mathcal Y$, be defined as
$$\mathbb E_y[X] = \int_\mathcal X xf_{X|Y}(x|y)\text d\mu_1(x)\tag{1}$$
It is usually written as $\mathbb E[X|Y=y]$.

In the event (1) exists for all values of $y\in\mathcal Y$, the function
$$\varphi:\ y \longmapsto \mathbb E[X|Y=y]$$
is rigorously defined. It can therefore be used to transform the random variable $Y$ into the new random variable $\varphi(Y)$, usually written $\mathbb E[X|Y]$, which is equal to (1) when the realisation of $Y$ is equal to $y$. As seen above, this random variable $\varphi(Y)=\mathbb E[X|Y]$ is not a function of the random variable $X$, even though they may be correlated with one another.

Hence, the citation

First, the expectation of $\mathbb E[X∣Y=y]$ is taken with respect to $f_{X∣Y}(x∣y)$. We assume that the random variable $Y$ is already fixed at the state $Y=y$. Thus, the only source of randomness is $X$.

should be restated as [with highlighted changes]:

First, the expectation $\mathbb E[X∣Y=y]$ is taken with respect to the distribution with density $f_{X∣Y}(x∣y)$. We assume that the random variable $Y$ is already observed at the realisation $Y=y$. Thus, the only remaining source of randomness in the expectation is $X$ with distribution $f_{X∣Y}(\cdot∣y)$, that is, the conditional distribution of $X$ given $Y=y$.

Similarly,

Secondly, since the expectation $\mathbb E[X∣Y=y]$ has eliminated the randomness of $X$, the resulting function is in $y$.

should state

Secondly, since the expectation $\mathbb E[X∣Y=y]$ has eliminated the (remaining) conditional randomness of $X$ given $Y=y$, the resulting function is a function of $y$.
Conditional Expectation as a function of X
Without getting very much into measure theory,consider the random vector $(X,Y)$ with density $f_{X,Y}(\cdot,\cdot)$ (wrt a dominating measure $\text d\mu(x,y)$) decomposed as $$f_{X,Y}(x,y)=f_Y(y)\ti
Conditional Expectation as a function of X Without getting very much into measure theory,consider the random vector $(X,Y)$ with density $f_{X,Y}(\cdot,\cdot)$ (wrt a dominating measure $\text d\mu(x,y)$) decomposed as $$f_{X,Y}(x,y)=f_Y(y)\times f_{X|Y}(x|y)$$ where $f_Y(\cdot)$ is a probability density (wrt the appropriate dominating measure $\text d\mu_2(y)$) attached with the random variable $Y$ $f_{X|Y}(\cdot|y)$ is a probability density (wrt the appropriate dominating measure $\text d\mu_1(x)$) for (almost) every $y\in\mathcal Y$, attached with a random variable $Z_y$. In this notation, $y$ is a parameter of the density. For a fixed value of $y\in\mathcal Y$, $f_{X|Y}(\cdot|y)$ can thus be understood as a regular density over the set $\mathcal X$ and $$f_{X|Y}(\cdot|y):\ x\longmapsto f_{X|Y}(x|y)$$ is a non-negative integrable (measurable) function on $\mathcal X$ such that $$\int_\mathcal Xf_{X|Y}(x|y)\text d\mu_1(x)=1$$ This means that, for a fixed value of $y\in\mathcal Y$, the expectation of $Z_y\sim f_{X|Y}(\cdot|y)$ can considered and, provided it exists for this specific value of $y\in\mathcal Y$, be defined as $$\mathbb E_y[X] = \int_\mathcal X xf_{X|Y}(x|y)\text d\mu_1(x)\tag{1}$$ It is usually written as $\mathbb E[X|Y=y]$. In the event (1) exists for all values of $y\in\mathcal Y$, the function $$\varphi:\ y \longmapsto \mathbb E[X|Y=y]$$ is rigorously defined. It can therefore be called to transform the random variable $Y$ into the new random variable $\varphi(Y)$. It is usually written as $\mathbb E[X|Y]$ and is equal to (1) when the realisation of $Y$ is equal to $y$. As seen above, this random variable $\varphi(Y)=\mathbb E[X|Y]$ is not a function of the random variable $X$, even though they may be correlated with one another. Hence, the citation First, the expectation of $\mathbb E[X∣Y=y]$ is taken with respect to $f_{X∣Y}(x∣y)$. We assume that the random variable $Y$ is already fixed at the state $Y=y$. 
Thus, the only source of randomness is $X$. should be restated as [with highlighted changes]: First, the expectation $\mathbb E[X∣Y=y]$ is taken with respect to the distribution with density $f_{X∣Y}(x∣y)$. We assume that the random variable $Y$ is already observed at the realisation $Y=y$. Thus, the only remaining source of randomness in the expectation is $X$ with distribution $f_{X∣Y}(\cdot∣y)$, that is, the conditional distribution of $X$ given $Y=y$. Similarly, Secondly, since the expectation $\mathbb E[X∣Y=y]$ has eliminated the randomness of $X$, the resulting function is in $y$. should state Secondly, since the expectation $\mathbb E[X∣Y=y]$ has eliminated the (remaining) conditional randomness of $X$ given $Y=y$, the resulting function is a function of $y$.
38,170
Conditional Expectation as a function of X
If you take the expectation of $X$, it's not a function of $X$. You integrate $xf(x)$ over $x$, and so $x$ is gone, producing only a number. If you take the expectation of $X|Y=y$, it's not a function of $X$. You integrate $xf(x|y)$ over $x$ (think of it as $xf(x)$ with a different $f(x)$ for different $y$), and so $x$ is gone, producing only a number, which may be different for different $y$s, hence a function of $y$.
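A quick sketch of this in Python (using a hypothetical discrete joint pmf, not from the answer) makes the point concrete: summing out $x$ leaves one number per $y$, i.e. a function of $y$:

```python
import numpy as np

# Hypothetical joint pmf p[i, j] = P(X = xs[i], Y = ys[j]); entries sum to 1.
xs = np.array([0.0, 1.0, 2.0])
ys = np.array([0.0, 1.0])
p = np.array([[0.10, 0.20],
              [0.25, 0.15],
              [0.05, 0.25]])

# E[X | Y = y] = sum_x x * p(x, y) / P(Y = y): x is summed out,
# so the result depends only on y.
p_y = p.sum(axis=0)
cond_exp = (xs[:, None] * p).sum(axis=0) / p_y

for yv, e in zip(ys, cond_exp):
    print(f"E[X | Y={yv}] = {e:.4f}")
```
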
38,171
Can you say that you reject the null at the 95% level?
Sure. This is just a case of sloppy language. You can either say "I reject the null hypothesis with 95% confidence" or "I reject the null at a significance level of .05." Both of these statements are shorthand for this much longer statement: "If the null hypothesis were true, and I repeated this experiment/survey/analysis a large number of times with a different random sample each time, then in less than 5% of those samples would I have found a deviation from the prediction of the null as large or larger than the one I found in the sample I actually have." When people are talking about stats we sometimes talk about the level of confidence (95%) and sometimes we talk about the error rate (5%), but they're both just different ways of talking about the same level of confidence/significance. The only reason this doesn't end up being that confusing in practice is that no one in their right mind would ever actually claim that they rejected the null at "5% confidence" or with a "95% error rate."
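The longer statement can be checked by simulation. The sketch below (my own illustration, assuming a simple two-sided z-test with known $\sigma=1$) repeats the experiment many times under a true null and counts how often it rejects at the .05 level:

```python
import math
import random

random.seed(1)

# Under a true null (the mean really is 0), repeat the experiment many
# times and count how often a two-sided z-test rejects at the .05 level.
def z_test_rejects(n=30, alpha=0.05):
    sample = [random.gauss(0.0, 1.0) for _ in range(n)]
    z = (sum(sample) / n) * math.sqrt(n)          # known sigma = 1
    # Two-sided p-value from the standard normal cdf via erf.
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return p < alpha

trials = 20_000
rejection_rate = sum(z_test_rejects() for _ in range(trials)) / trials
print(rejection_rate)  # hovers near 0.05, the significance level
```
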
38,172
Can you say that you reject the null at the 95% level?
You can* but should not. You're conflating language of confidence intervals with hypothesis tests. The level of a test is the significance level. "Confidence" levels (coverage) is a property of confidence intervals. There are connections between them but they are NOT the same thing and you should not mix terminology between them. If you're performing hypothesis tests, use the terminology of hypothesis tests. If you're calculating confidence intervals use the terminology of confidence intervals. What could be easier? * (I sure can't stop you, these space lasers are useless)
38,173
How to prove unbiasedness/consistency/normality of an estimator that doesn't have a closed form?
Your estimator is what's known as an M-estimator of $\rho$-type, where in this case $\rho = -f$. If your function $f$ is differentiable, then it is known that under some (fairly strong) conditions, the M-estimator is consistent for the true maximizer of $f$, and is in fact asymptotically normal. See Chapter 7 of Boos & Stefanski's Essential Statistical Inference (2013) for a detailed treatment of M-estimation.
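As a rough sketch (not taken from Boos & Stefanski; the objective $f(x;\theta)=-(x-\theta)^2$ and the grid search are illustrative assumptions), maximizing the empirical objective shows the consistency numerically:

```python
import numpy as np

rng = np.random.default_rng(42)

# M-estimator of rho-type, sketched: theta_hat = argmax_theta (1/n) sum f(X_i; theta).
# Here f(x; theta) = -(x - theta)**2, whose population maximizer is the mean.
def m_estimate(data, grid):
    objective = np.array([np.mean(-(data - t) ** 2) for t in grid])
    return grid[np.argmax(objective)]

grid = np.linspace(-1.0, 3.0, 2001)   # grid step 0.002
true_theta = 1.0

estimates = []
for n in (50, 500, 50_000):
    data = rng.normal(true_theta, 1.0, size=n)
    estimates.append(m_estimate(data, grid))
print(estimates)   # estimates concentrate around 1.0 as n grows (consistency)
```
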
38,174
How to prove unbiasedness/consistency/normality of an estimator that doesn't have a closed form?
On a general basis, $\hat\theta(X)$ is biased, due to the fact that it is equivariant under reparameterisation, i.e., if$$\eta=h(\theta)$$ is another parameterisation of the model, with $h$ a bijection, then $$\hat\eta(X)=h(\hat\theta(X))$$ while unbiasedness does not carry over under arbitrary transforms $h$. Note also that most parameters (or parameterisations) do not allow for the existence of an unbiased estimator, see e.g. the $\eta=1/p$ example for the Binomial model.
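A small simulation (my own illustration; the Binomial setting and the transform $h(p)=p^2$ are chosen as an example) shows how unbiasedness fails to carry over:

```python
import numpy as np

rng = np.random.default_rng(7)

# X ~ Binomial(n, p): p_hat = X/n is unbiased for p, but h(p_hat) = p_hat**2
# is biased for p**2, since E[p_hat**2] = p**2 + p*(1-p)/n.
n, p, reps = 20, 0.3, 1_000_000
p_hat = rng.binomial(n, p, size=reps) / n

print(p_hat.mean())          # approx 0.30   (unbiased for p)
print((p_hat ** 2).mean())   # approx 0.09 + 0.3*0.7/20 = 0.1005 (biased for p**2)
```
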
38,175
Why the second term is transposed, but not the first one?
If you use the convention that $(\boldsymbol{x} - \boldsymbol{\mu})$ is a column vector, i.e. $(\boldsymbol{x} - \boldsymbol{\mu}) = \begin{bmatrix} x_{1} - \mu_1\\ x_{2} - \mu_2\\ \vdots \\ x_{m}- \mu_m \end{bmatrix}$, then $(\boldsymbol{x} - \boldsymbol{\mu})^T$ is a row vector, i.e $(\boldsymbol{x} - \boldsymbol{\mu})^T= [x_{1} - \mu_1, x_{2} - \mu_2,\dots ,x_{m} - \mu_m]$. The product of a column vector and a row vector forms a matrix with the corresponding pairwise products as entries. The product of a row vector and a column vector (the dot product) results in the sum of the pairwise products. Since your $Var(X)$ is a variance-covariance matrix, you need to have the product of a column vector and a row vector, i.e. $(\boldsymbol{x}-\boldsymbol{\mu})(\boldsymbol{x}-\boldsymbol{\mu})^T$.
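In NumPy terms (a minimal sketch with made-up numbers), the two orders of multiplication give a matrix and a scalar respectively:

```python
import numpy as np

# Column vector convention: (x - mu) has shape (m, 1), here m = 3.
d = np.array([[1.0], [2.0], [3.0]])

outer = d @ d.T   # (3,1) @ (1,3) -> (3,3) matrix of pairwise products
inner = d.T @ d   # (1,3) @ (3,1) -> (1,1), the sum of pairwise products

print(outer.shape, inner.shape)  # (3, 3) (1, 1)
print(inner[0, 0])               # 1 + 4 + 9 = 14.0
```

The variance-covariance matrix needs the first (column-times-row) order, which is why the transpose sits on the second factor.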
38,176
Why the second term is transposed, but not the first one?
When you multiply matrices, the adjacent dimensions need to match, so you can multiply an (n, k) matrix by a (k, m) matrix, or (m, k) by (k, n), but not any other way around. Where you see the transpose symbol depends on whether the data is stored row-wise or column-wise. If you take something like a dot product of row vectors, you would transpose the second element so you multiply (1, n) by (n, 1), but if the data had the initial shape of (n, 1), you would do the opposite.
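A minimal NumPy sketch (made-up shapes) shows both the matching and the non-matching case:

```python
import numpy as np

a = np.ones((2, 3))   # (n, k): n = 2, k = 3
b = np.ones((3, 4))   # (k, m): k = 3, m = 4

print((a @ b).shape)  # adjacent dimensions match (3 == 3), result is (2, 4)

try:
    b @ a             # (3, 4) @ (2, 3): adjacent dimensions 4 and 2 do not match
except ValueError as err:
    print("shape mismatch:", err)
```
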
38,177
Why the second term is transposed, but not the first one?
For any column vector $x$ (e.g. $x \in \mathbb R^{n \times 1}$), $x^Tx$ is (a 1x1 matrix and thus 'is isomorphic to' (*)) a scalar. $xx^T$ is a matrix. (and if it's 1x1, then it could be treated as a scalar similarly.) (*) in your case 'is isomorphic to' just means 'can be treated as'
38,178
How to simulate standard deviation
Standard error decreases as the sample size increases. Standard deviation is a related concept but perhaps not related enough to warrant such similar terminology that confuses everyone who is starting to learn statistics. A sampling distribution is the distribution of values you would get if you repeatedly sampled from a population and calculated some statistic, say the mean, each time. The standard deviation of that sampling distribution is the standard error. The standard error of the mean decreases by a factor of $\sqrt{n}$, giving $s/\sqrt{n}$ as an estimate of the standard error (where $s$ is the sample standard deviation). The standard deviation of a distribution is whatever it is, and it doesn’t care how large a sample you draw or if you even sample at all. It sounds like you want to simulate data from a distribution with the mean and standard deviation you’ve calculated from the sample of $15$, so do that. If you’re willing to assume a normal distribution, the R command is rnorm and the Python command is numpy.random.normal.
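A minimal Python sketch of that simulation (the values mu=10 and sigma=4 are hypothetical stand-ins for whatever you computed from your sample of 15):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical mean and sd standing in for the estimates from the n = 15 sample.
mu, sigma = 10.0, 4.0
reps = 20_000

# The sd of the simulated sample means (the standard error) shrinks like
# sigma / sqrt(n); the sd within each simulated sample stays near sigma.
observed = {}
for n in (15, 100):
    means = rng.normal(mu, sigma, size=(reps, n)).mean(axis=1)
    observed[n] = means.std()
    print(n, observed[n], sigma / np.sqrt(n))
```
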
38,179
How to simulate standard deviation
Standard deviation does not decrease with sample size. The bigger your sample is, the closer the sample standard deviation should be to the standard deviation of the population. It follows that the spread of standard deviations estimated from larger samples is smaller than from smaller samples, because estimates based on larger samples are more precise. Below you can see a numerical example in R, where we simulate draws from the standard normal distribution (with sd=1) for samples of size 15 and 100, and then estimate standard deviations for them. > summary(replicate(100000, sd(rnorm(15)))) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.3039 0.8515 0.9762 0.9824 1.1061 1.8886 > summary(replicate(100000, sd(rnorm(100)))) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.6916 0.9498 0.9971 0.9980 1.0451 1.3089
38,180
How to simulate standard deviation
You specifically ask about simulation. Following @Dave's Answer (+1), here are a couple of simulations in R. Suppose I take a million samples of size $n = 16$ from a population distributed as $\mathsf{Gamma}(\mathrm{shape} = 4,\, \mathrm{rate}=.1),$ so that the population mean is $\mu = 40,$ the population variance is $\sigma^2 = 400,$ and $\sigma = 20.$ Then the sample means (averages) $A =\bar X_{16}$ have $E(A) = 40$ and standard errors $SD(A)= \sigma/\sqrt{n} = 20/\sqrt{16} = 5.$ With a million samples, the simulation results should be accurate to about three significant digits. set.seed(904) a = replicate(10^6, mean(rgamma(16, 4, .1))) mean(a); sd(a) [1] 40.00176 # aprx 40 [1] 4.996061 # aprx 5 By contrast, let's do a similar simulation of a million samples of size $n = 100$ from the same population. Now $E(\bar X_{100}) = 40$ and $SD(\bar X_{100}) = \sigma/\sqrt{n} = 20/\sqrt{100} = 2.$ set.seed(2020) a = replicate(10^6, mean(rgamma(100, 4, .1))) mean(a); sd(a) [1] 40.0014 # aprx 40 [1] 2.001084 # aprx 20/10 = 2
38,181
Why is power of a hypothesis test a concern when we can bootstrap any representative sample to make n approach infinity?
The amount of information relating to the hypotheses that you have is simply the information in the original data. Resampling that information, whether bootstrapping, permutation testing or any other resampling, cannot add information that wasn't already there. The point of bootstrapping is to estimate the sampling distribution of some quantity, in essence by using the sample cdf as an approximation of the population cdf from which it was drawn. As normally understood, each bootstrap sample is the same size as the original sample (since taking a larger sample wouldn't tell you about the sampling variability at the sample size you have). What varies is the number of such bootstrap resamples. Increasing the number of bootstrap samples gives a more "accurate" sense of that approximation, but it doesn't add any information that wasn't already there. With a bootstrap test you can reduce the simulation error in a p-value calculation, but you can't shift the underlying p-value that you're approximating (which is just a function of the sample); your estimate of it is just less noisy. For example, let's say I do a bootstrapped one-sample t-test (with a one-sided alternative) and look at what happens when we increase the number of bootstrap samples: The blue line very close to 2 shows the t-statistic for our sample, which we see is unusually high (the estimated p-value is similar in both cases, but the estimated standard error of that p-value is about 30% as large for the second one) A qualitatively similar picture - noisier vs less noisy versions of identical underlying distribution shapes - would result from sampling the permutation distribution of some statistic as well. We see that the information hasn't changed; the basic shape of the bootstrap distribution of the statistic is the same, it's just that we get a slightly less noisy idea of it (and hence a slightly less noisy estimate of the p-value). 
-- To do a power analysis with a bootstrap or permutation test is a little tricky since you have to specify things that you didn't need to assume in the test, such as the specific distribution shape of the population. You can evaluate power under some specific distributional assumption. Presumably you don't have a particularly good idea what distribution that is, or you'd have been able to use that information to help construct the test (e.g. by starting with something that would have good power for a distribution reflecting what you understand about it, then perhaps robustifying it somewhat). You can of course investigate a variety of possible candidate distributions and a variety of sequences of alternatives, depending on the circumstances.
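A minimal bootstrap sketch (my own illustration; the exponential sample and the resample counts are arbitrary choices) shows the point numerically: increasing the number of resamples only stabilizes the estimate around a target that is fixed by the sample, it adds no information:

```python
import numpy as np

rng = np.random.default_rng(3)
sample = rng.exponential(scale=2.0, size=50)   # the one sample we actually have

# Bootstrap standard error of the mean: each resample has the SAME size
# as the original sample; only the number of resamples varies.
def boot_se(data, n_boot, rng):
    idx = rng.integers(0, len(data), size=(n_boot, len(data)))
    return data[idx].mean(axis=1).std()

# Target fixed by the sample: roughly s / sqrt(n).
plug_in = sample.std(ddof=1) / np.sqrt(len(sample))
for n_boot in (200, 2_000, 20_000):
    # More resamples -> less Monte Carlo noise around the same target.
    print(n_boot, boot_se(sample, n_boot, rng), plug_in)
```
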
38,182
Distribution of Y from distribution of X
For a general function $h$, there is no direct formula to get the pdf of the random variable $Y=h(X)$ knowing the pdf of $X$. There is a formula in case when $h$ is a differentiable one-to-one mapping from the range (the support, I should say) of $X$ to the range of $Y$. I guess from your question that you don't know this formula, so let me give you the picture and the process to derive it. Take for example a random variable $X \sim {\cal N}(\mu, \sigma^2)$ and set $Y=\exp(X)$. The animation below shows some simulations of $X$ and the corresponding values of $Y$. The density of $X$ is shown in blue and the one of $Y$ is shown in orange in the vertical direction. Now the question is: knowing the density $f_{\textrm{blue}}$ of $X$, what is the density $f_{\textrm{orange}}$ of $Y$ ? Taking a point $y$ in the range of $Y$, the density $f_{\textrm{orange}}$ provides the probability that $Y$ belongs to a small area $\mathrm{d}y$ around $y$ by the formula $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{orange}}(y)|\mathrm{d}y| $$ where $|\mathrm{d}y|$ denotes the length of the small interval $\mathrm{d}y$. This probability is the pink area on the figure below. The probability $\Pr(Y \in \mathrm{d}y)$ also equals the probability $\Pr(X \in \mathrm{d}x)$, shown by the grey area below the blue curve, where $x=\log(y)$ because of $y=\exp(x)$, and $\mathrm{d}x$ is the small interval around $x$. This probability is given by $$ \Pr(X \in \mathrm{d}x) \approx f_{\textrm{blue}}(x)|\mathrm{d}x|. $$ It is clear that $|\mathrm{d}x| \neq |\mathrm{d}y|$. Remember that these two lengths are very small, hence the green function - let's call it $h$ instead of $\exp$ - is like a segment on the interval $\mathrm{d}x$, and the slope of this segment is the value $h'(x)$ of the derivative of $h$ at $x$. Therefore $|\mathrm{d}y| \approx h'(x)|\mathrm{d}x|$, and we finally get $$ \Pr(Y \in \mathrm{d}y) = \Pr(X \in \mathrm{d}x) \approx f_{\textrm{blue}}(x)\frac{|\mathrm{d}y|}{h'(x)}. 
$$ Expressing the right-hand side in terms of $y=h(x)$ instead of $x=h^{-1}(y)$, this gives $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)\frac{|\mathrm{d}y|}{h'\bigl(h^{-1}(y)\bigr)}, $$ or, because of $\frac{1}{h'\bigl(h^{-1}(y)\bigr)}={(h^{-1})}'(y)$, this can be written $$ \Pr(Y \in \mathrm{d}y) \approx {(h^{-1})}'(y)\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)|\mathrm{d}y|. $$ By identifying this formula with the one defining the density of $Y$: $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{orange}}(y)|\mathrm{d}y|, $$ we finally get $$ \boxed{f_{\textrm{orange}}(y) = {(h^{-1})}'(y)\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)}. $$ This is the so-called change of variables formula. Be careful about one point: this formula is not correct in general. In my example, the factor $k$ relating $|\mathrm{d}x|$ and $|\mathrm{d}y|$ by the approximate equality $|\mathrm{d}y| \approx k|\mathrm{d}x|$ is $k = h'(x)$ because $h'(x)>0$ in this example ($h$ is increasing), and one has to take $-h'(x)$ if $h'(x) <0$. The general formula includes the absolute value: $$ \boxed{f_{\textrm{orange}}(y) = \bigl|{(h^{-1})}'(y)\bigr|\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)}. $$
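As a numerical sanity check of the boxed formula (my own sketch, using the $Y=\exp(X)$ example with $X\sim\mathcal N(0,1)$, so that $h^{-1}(y)=\log y$ and ${(h^{-1})}'(y)=1/y$):

```python
import math
import numpy as np

rng = np.random.default_rng(5)

# X ~ N(0, 1), Y = exp(X). Change of variables gives
# f_Y(y) = f_X(log y) / y  (the lognormal density).
def f_X(x):
    return np.exp(-x**2 / 2) / math.sqrt(2 * math.pi)

def f_Y(y):
    return f_X(np.log(y)) / y

# Compare a Monte Carlo estimate of P(a < Y < b) with the integral of f_Y.
y_samples = np.exp(rng.standard_normal(1_000_000))
a, b = 0.5, 1.5
mc = np.mean((y_samples > a) & (y_samples < b))

# Midpoint-rule integral of f_Y over (a, b).
dx = (b - a) / 10_000
mid = np.linspace(a + dx / 2, b - dx / 2, 10_000)
quad = f_Y(mid).sum() * dx
print(mc, quad)   # both approx 0.413
```
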
Distribution of Y from distribution of X
For a general function $h$, there is no direct formula to get the pdf of the random variable $Y=h(X)$ knowing the pdf of $X$. There is a formula in case when $h$ is a differentiable one-to-one mapping
Distribution of Y from distribution of X

For a general function $h$, there is no direct formula to get the pdf of the random variable $Y=h(X)$ knowing the pdf of $X$. There is a formula when $h$ is a differentiable one-to-one mapping from the range (the support, I should say) of $X$ to the range of $Y$. I guess from your question that you don't know this formula, so let me give you the picture and the process to derive it.

Take for example a random variable $X \sim {\cal N}(\mu, \sigma^2)$ and set $Y=\exp(X)$. The animation below shows some simulations of $X$ and the corresponding values of $Y$. The density of $X$ is shown in blue and the one of $Y$ is shown in orange in the vertical direction. Now the question is: knowing the density $f_{\textrm{blue}}$ of $X$, what is the density $f_{\textrm{orange}}$ of $Y$?

Taking a point $y$ in the range of $Y$, the density $f_{\textrm{orange}}$ provides the probability that $Y$ belongs to a small area $\mathrm{d}y$ around $y$ by the formula $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{orange}}(y)|\mathrm{d}y| $$ where $|\mathrm{d}y|$ denotes the length of the small interval $\mathrm{d}y$. This probability is the pink area on the figure below.

The probability $\Pr(Y \in \mathrm{d}y)$ also equals the probability $\Pr(X \in \mathrm{d}x)$, shown by the grey area below the blue curve, where $x=\log(y)$ because of $y=\exp(x)$, and $\mathrm{d}x$ is the small interval around $x$. This probability is given by $$ \Pr(X \in \mathrm{d}x) \approx f_{\textrm{blue}}(x)|\mathrm{d}x|. $$

It is clear that $|\mathrm{d}x| \neq |\mathrm{d}y|$. Remember that these two lengths are very small, hence the green function - let's call it $h$ instead of $\exp$ - is like a segment on the interval $\mathrm{d}x$, and the slope of this segment is the value $h'(x)$ of the derivative of $h$ at $x$. Therefore $|\mathrm{d}y| \approx h'(x)|\mathrm{d}x|$, and we finally get $$ \Pr(Y \in \mathrm{d}y) = \Pr(X \in \mathrm{d}x) \approx f_{\textrm{blue}}(x)\frac{|\mathrm{d}y|}{h'(x)}. $$

Expressing the right-hand side in terms of $y=h(x)$ instead of $x=h^{-1}(y)$, this gives $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)\frac{|\mathrm{d}y|}{h'\bigl(h^{-1}(y)\bigr)}, $$ or, because of $\frac{1}{h'\bigl(h^{-1}(y)\bigr)}={(h^{-1})}'(y)$, this can be written $$ \Pr(Y \in \mathrm{d}y) \approx {(h^{-1})}'(y)\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)|\mathrm{d}y|. $$

Identifying this formula with the one defining the density of $Y$, $$ \Pr(Y \in \mathrm{d}y) \approx f_{\textrm{orange}}(y)|\mathrm{d}y|, $$ we finally get $$ \boxed{f_{\textrm{orange}}(y) = {(h^{-1})}'(y)\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)}. $$ This is the so-called change of variables formula.

Be careful about one point: this formula is not correct in general. In my example, the factor $k$ relating $|\mathrm{d}x|$ and $|\mathrm{d}y|$ by the approximate equality $|\mathrm{d}y| \approx k|\mathrm{d}x|$ is $k = h'(x)$ because $h'(x)>0$ in this example ($h$ is increasing), and one has to take $k = -h'(x)$ if $h'(x) < 0$. The general formula includes the absolute value: $$ \boxed{f_{\textrm{orange}}(y) = \bigl|{(h^{-1})}'(y)\bigr|\times f_{\textrm{blue}}\bigl(h^{-1}(y)\bigr)}. $$
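As a sanity check (my own sketch, not part of the original answer), the boxed formula can be verified numerically for $Y=\exp(X)$ with $X \sim \mathcal{N}(0,1)$: a Monte Carlo estimate of $\Pr(Y\in[1.0,1.1))$ should agree with a midpoint approximation built from the change-of-variables density. The function names below are my own.

```python
import math
import random

def normal_pdf(x, mu=0.0, sigma=1.0):
    # density of X ~ N(mu, sigma^2)
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def pdf_exp_normal(y):
    # change-of-variables formula with h(x) = exp(x):
    # h^{-1}(y) = log(y) and (h^{-1})'(y) = 1/y
    return (1.0 / y) * normal_pdf(math.log(y))

random.seed(0)
samples = [math.exp(random.gauss(0.0, 1.0)) for _ in range(200_000)]

# empirical probability that Y lands in the small interval [1.0, 1.1)
p_emp = sum(1 for y in samples if 1.0 <= y < 1.1) / len(samples)
# midpoint approximation of the same probability from the formula
p_formula = pdf_exp_normal(1.05) * 0.1
```

With 200,000 draws the two numbers typically agree to within a few thousandths, which is the Monte Carlo error of the empirical estimate.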
38,183
Distribution of Y from distribution of X
Suppose $X$ has a standard normal distribution; then the pdf of $X$ is $f(x)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}x^2}$. Suppose the mathematical relationship between the random variable $Y$ and $X$ is $Y=2X$. It is easy to find the pdf of $Y$. Basically there are two methods to find the pdf of $Y$:

1. Use the CDF: write the CDF of $Y$ in terms of the CDF of $X$, then take the derivative of the CDF.
2. Use the variable transformation directly (use the Jacobian).

I will show the second method here. $y=2x \Rightarrow x=\frac{1}{2}y$, so $J=\frac{dx}{dy}=\frac{1}{2}$ and $$f(y)=\frac{1}{\sqrt{2\pi}}e^{-\frac{1}{2}(\frac{y}{2})^2}|J|=\frac{1}{2\sqrt{2\pi}}e^{-\frac{1}{8}y^2}.$$ You can see the pdf of $Y$ is $\frac{1}{2\sqrt{2\pi}}e^{-\frac{1}{8}y^2}$.
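For readers who want to check this numerically, here is a small Python sketch (my own illustration, not from the original answer) comparing the Jacobian-transformed density against the closed-form expression derived above:

```python
import math

def std_normal_pdf(x):
    # pdf of the standard normal
    return math.exp(-0.5 * x * x) / math.sqrt(2 * math.pi)

def pdf_Y(y):
    # Y = 2X, so x = y/2 and |J| = |dx/dy| = 1/2
    return std_normal_pdf(y / 2.0) * 0.5

def pdf_Y_closed_form(y):
    # the expression derived above: (1/(2*sqrt(2*pi))) * exp(-y^2/8)
    return math.exp(-y * y / 8.0) / (2.0 * math.sqrt(2.0 * math.pi))
```

The two functions agree for every $y$, and pdf_Y is exactly the $\mathcal{N}(0, 2^2)$ density, as expected: scaling a normal variable by 2 doubles its standard deviation.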
38,184
Distribution of Y from distribution of X
Yes, it is possible. Let's see how.

When $Y = f(X)$, the conditional $P(Y|X)$ can be written as $$P(Y=y|x) = \delta(y - f(x))$$ where $\delta$ is the Dirac delta. The PDF of $Y$ can be obtained by marginalising over $X$: $$ P(Y=y) = \int P(y|x) P(x)dx = \int \delta(y - f(x))P(x)dx $$ To compute such an integral, we can use the fact that $$\int h(x)\delta(g(x)) dx = \sum_i \frac{h(x_i)}{|g'(x_i)|}$$ where $x_i$ are the roots of $g$. Substituting into our marginal with $g(x) = y - f(x)$ and $h(x)=P(x)$, we get $$ P(Y=y) = \sum_{i=1}^N \frac{P(x_i)}{|f'(x_i)|} $$ where $x_i$ are the solutions of $y = f(x)$.

This is as far as we can go. Nevertheless, this formula shows that:

- the problem can be treated as any other problem of probability: you define the random variables, the conditional probabilities, and compute the marginal;
- the solution does not necessarily have a closed formula, but it does not require $f$ to be bijective;
- when the function is bijective (single root), this solution reduces to the solution @Stéphane Laurent gave.

I find this solution nice because I don't have to remember it; it is a consequence of the definition of $Y$ and $P(Y|X)$.

Let's take an example: $X$ is uniform on $[-1/2, 1/2]$, and $Y = f(X) = X^2 \in [0, 1/4]$. Compute $P(Y=y)$. There are two solutions of $y=x^2$ in the interval $x \in [-1/2, 1/2]$: $\{-\sqrt{y}, \sqrt{y}\}$. Moreover, the derivative of $f$ is $f'(x) = 2x$. Using the equation above, we get $$P(y) = \sum_{i=1}^{2} \frac{P\left(x_{i}\right)}{|f'(x_{i})|} = \frac{1}{2\sqrt{y}} + \frac{1}{2\sqrt{y}} = \frac{1}{\sqrt{y}}$$ which can be confirmed e.g. in Mathematica:

Show[{Histogram[RandomVariate[UniformDistribution[{-1/2, 1/2}], 100000]^2, Automatic, "PDF"], Plot[1/Sqrt[y], {y, 0.001, 1/4}, PlotStyle -> {Red, Thick}, PlotRange -> All]}]

If $X$ is uniform on $[0, 1]$ instead, there is only one solution and we get $$P(y) = \frac{1}{2\sqrt{y}}$$
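The non-bijective case can also be checked without Mathematica. The Python sketch below (names, seed, and tolerances are my own) compares a Monte Carlo estimate of an interval probability with the exact integral of the $1/\sqrt{y}$ density obtained above:

```python
import math
import random

def density_Y(y):
    # two roots x = +/- sqrt(y); f_X(x) = 1 on [-1/2, 1/2] and |f'(x)| = 2*sqrt(y)
    return 2.0 * (1.0 / (2.0 * math.sqrt(y)))  # = 1/sqrt(y)

random.seed(1)
samples = [random.uniform(-0.5, 0.5) ** 2 for _ in range(200_000)]

a, b = 0.04, 0.09
# empirical probability that Y lies in [a, b)
p_emp = sum(1 for y in samples if a <= y < b) / len(samples)
# exact integral of 1/sqrt(y) over [a, b]
p_formula = 2.0 * (math.sqrt(b) - math.sqrt(a))
```

Here p_formula is exactly 0.2, and the empirical estimate lands within Monte Carlo error of it.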
38,185
Distribution of Y from distribution of X
As I understand your question, you ask about a situation where we have a random variable $X$ and its density function $f(x)$, and you are interested in finding the probability density function of the random variable $Y$ defined as $Y = f(X)$. Such a relationship is pretty straightforward for the cumulative distribution function, but does not have to exist for the probability density function. Recall that a function

is a relation between a set of inputs and a set of permissible outputs with the property that each input is related to exactly one output. (Wikipedia, italics added)

and look at the two figures below showing the PDF of the standard normal $f(x)$. Every $x$-axis value is related to exactly one $y$-axis value (left figure), but some $y$-axis values are related to more than one $x$-axis value (right figure), so the inverse relationship is not a function. This does not have to be true for all functions, but only for the ones that are not one-to-one mappings.

If you are interested in the probability of observing $f(x)$ values, it can be obtained by simulation. To compute probabilities, sample values from the random variable $X$ and pass them through the density function $f(\cdot)$, the same way as you could make any other transformation of $X$. This enables you to obtain $y=f(x)$ and then compute empirical probabilities for the $y$ values. An example of R code for the normal distribution is provided below.

hist(dnorm(rnorm(1e5)))
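The same simulation idea can be written in Python using only the standard library (a sketch of my own, mirroring the R one-liner): pass normal draws through their own density and look at the resulting values, which must all fall in $(0, 1/\sqrt{2\pi}]$ since that is the peak of the standard normal density.

```python
import math
import random

def dnorm(x):
    # standard normal density, as in R's dnorm
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

random.seed(2)
# analogue of dnorm(rnorm(1e5)): evaluate the density at its own random draws
y = [dnorm(random.gauss(0.0, 1.0)) for _ in range(100_000)]

peak = 1.0 / math.sqrt(2.0 * math.pi)  # maximum attainable density value
```

From here one can bin the y values to build the empirical histogram, exactly as hist() does in the R example.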
38,186
Nonlinear effect in an interaction term
This is exactly why I switched from Stata to R and Frank's rms package (called Design back then) a few years ago. Anyway, this somewhat hack-ish code will at least get you started. The syntax is a little outdated and there may be better ways to code this (haven't used Stata in a while), but here it goes. EDIT: Re-written after my morning coffee...

*** use automobile data
sysuse auto
*** create restricted cubic spline basis functions for mpg, with four knots
mkspline mpgsp = mpg, cubic nknots(4)
*** create the interactions
gen formpg1=foreign*mpgsp1
gen formpg2=foreign*mpgsp2
gen formpg3=foreign*mpgsp3
*** regress price on foreign and mpg, allowing for non-linear interactions
xi: reg price i.foreign mpgsp* formpg*

To test the total interaction:

test formpg1 formpg2 formpg3

Omit the first term for the test of any non-linear interaction terms, e.g.

test formpg2 formpg3

To get the global 4 d.f. test for T, which in this example is foreign, that Frank mentioned in his example above:

test _Iforeign_1 formpg1 formpg2 formpg3

Just change reg to logit for logistic regression. To graph the result, you need to form the linear predictor, e.g. using predictnl, which I never managed to get right. See a recent presentation by Patrick Royston at http://www.stata.com/meeting/germany12/abstracts/desug12_royston.pdf for some ideas. Hope this helps.
38,187
Nonlinear effect in an interaction term
The following uses the R rms package with ordinary least squares modeling, and models the nonlinear effect smoothly using a restricted cubic spline with 4 knots at default knot locations. This generates one linear component and 2 nonlinear components, for a total of 3 parameters per treatment group.

require(rms)
dd <- datadist(mydata); options(datadist='dd')  # facilitates plotting
f <- ols(B ~ rcs(S, 4) * T, data=mydata)
anova(f)  # tests for interaction (shape differences across T, 3 d.f.);
          # anova includes a test for nonlinear interaction
          # and also provides a global test for T, 4 d.f.
plot(Predict(f, S, T))    # shows 2 estimated curves for 2 values of T
ggplot(Predict(f, S, T))  # will be in next release; uses ggplot2

The plots include 0.95 pointwise confidence bands. There is an option to use simultaneous confidence bands instead.

Because I saw "ols" mentioned elsewhere I neglected to notice that the response variable is categorical. To fit the logistic regression model instead of an ols model, substitute lrm( ) for ols( ). No other code changes are needed. You can use summary(f, ...) to get odds ratios for T or S. By default the odds ratio for S will be the inter-quartile-range effect of S at the reference (most frequent) level of T.
38,188
Nonlinear effect in an interaction term
Have you considered using a generalized additive model? (See the Wikipedia article on generalized additive models.) Basically the model would be $$ g(y) = X'\beta+\displaystyle\sum_j f_j(Z_j)+\epsilon $$ or in your specific case $$ \mathrm{logit}\left(P(B=1)\right) = f(S,T) $$

In R, you could use the mgcv package, and run something like

library(mgcv)
m = gam(B~te(S,T),family=binomial)

which would give you a nonparametric interaction of the two variables. If you wanted to separate out main effects from the interaction effect, you could equivalently fit

m = gam(B~ti(S)+ti(T)+ti(S,T),family=binomial)

You can then look at contour plots of your estimated interaction via

plot(m,pages=1, scheme=2)

(I prefer the contour plots, myself), or you could use the vis.gam function to look at predicted values. Or, if your treatment T is binary, you might fit

m = gam(B~s(S,by=as.factor(T)),family=binomial)

The textbook on all of this is made to go with the R package and is by Simon Wood. Also you'll want to check ?te, ?ti, etc.
38,189
Nonlinear effect in an interaction term
You may argue for a non-linear effect by showing that a non-linear model fits better. For example, you could implement a piecewise linear model to take into account changes in the influence of S. Depending on your hypothesis, you could also linearise your factors. For example, a log transform of the factors may reduce your residuals. This could be used to argue that the relationship between the factors and the response is not linear, since the transformed variables fit better. I hope that helps.
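As a toy illustration of "a non-linear model fits better" (this sketch is mine, not the answerer's, and uses a noise-free kinked relationship for clarity): fit one global line versus two separate lines on either side of a candidate breakpoint, then compare residual sums of squares.

```python
def ols_fit(xs, ys):
    # simple least-squares line y = a + b*x
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def sse(xs, ys):
    # residual sum of squares of the best-fitting line
    a, b = ols_fit(xs, ys)
    return sum((y - (a + b * x)) ** 2 for x, y in zip(xs, ys))

xs = [i / 10.0 for i in range(-50, 51)]
ys = [abs(x) for x in xs]  # kinked relationship: the influence of x changes at 0

sse_linear = sse(xs, ys)  # one straight line for all the data

# piecewise: separate lines left and right of the suspected breakpoint
left  = [(x, y) for x, y in zip(xs, ys) if x < 0]
right = [(x, y) for x, y in zip(xs, ys) if x >= 0]
sse_piecewise = sum(sse([p[0] for p in seg], [p[1] for p in seg])
                    for seg in (left, right))
```

Here sse_piecewise is essentially zero while sse_linear is large: exactly the kind of fit comparison that argues for a non-linear effect.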
38,190
How to interpret coefficients of $x$ and $x^2$ in same regression
Such an equation describes a curved relationship between $y$ and $x$ - a parabola. (This particular set of parameters corresponds to a minimum at $x= -\frac{_5}{^6}$, just off the left margin of this plot.) Consequently, you should keep all terms in the same x-variable together, since they describe the way $y$ is related to $x$.

"Do I interpret them as a summation of the two coefficients, so the effect of a one unit change of x on y is 0.5 + 0.3 = 0.8?"

No. The effect on $y$ of a one-unit change in $x$ is not constant. Consider increasing $x$ from 0 to 1 and then from 10 to 11:

- At $0$, the expected value of $y$ is $a$ ($a=7$ in my plot).
- At $1$, the expected value of $y$ is $a+0.5\times 1+0.3\times 1^2$.
- The average increase in $y$ when $x$ increases from 0 to 1 is $0.5\times 1+0.3\times 1^2 = 0.8$.
- At $10$, the expected value of $y$ is $a+0.5\times 10+0.3\times 10^2$.
- At $11$, the expected value of $y$ is $a+0.5\times 11+0.3\times 11^2$.
- The average increase in $y$ when $x$ increases from 10 to 11 is $0.5\times (11-10)+0.3\times (11^2-10^2) = 6.8$.

So there's not one single number -- it depends on which $x$ you look at. It may be useful to describe the effect of a unit change at some low value, some high value and somewhere in between.
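The arithmetic above is easy to reproduce (the coefficient values are the ones from the question; the function name and the intercept value are from the plot described above):

```python
a, b1, b2 = 7.0, 0.5, 0.3  # intercept as in the plot, coefficients from the question

def expected_y(x):
    # fitted mean: a + 0.5*x + 0.3*x^2
    return a + b1 * x + b2 * x ** 2

# the change in E[y] for a one-unit increase depends on the starting point
effect_at_0  = expected_y(1) - expected_y(0)    # 0.5*1 + 0.3*1         = 0.8
effect_at_10 = expected_y(11) - expected_y(10)  # 0.5*1 + 0.3*(121-100) = 6.8
```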
38,191
How to interpret coefficients of $x$ and $x^2$ in same regression
It doesn't pay to interpret them separately; they are connected. The formula for the vertex of the quadratic $y = a x^{2} + b x + c$ is $x = -\frac{b}{2a}$. The effect of changing $x$ from $s$ to $t$ is $a (t^{2} - s^{2}) + b (t - s)$. In a regression setting I often set $s$ to the first quartile of $x$ and $t$ to the third quartile, so as to estimate the inter-quartile-range $x$ effect.
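With illustrative coefficient values (the $a=0.3$, $b=0.5$ pair is borrowed from the question's example; the quartiles are hypothetical), these two formulas are one-liners:

```python
a, b = 0.3, 0.5  # quadratic and linear coefficients (illustrative values)

vertex = -b / (2 * a)  # x-location of the parabola's minimum (here -5/6)

def effect(s, t):
    # change in fitted y when x moves from s to t; the intercept cancels
    return a * (t ** 2 - s ** 2) + b * (t - s)

# e.g. an inter-quartile-range effect with hypothetical quartiles q1, q3
q1, q3 = 2.0, 8.0
iqr_effect = effect(q1, q3)
```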
38,192
How to interpret coefficients of $x$ and $x^2$ in same regression
The most straightforward way to interpret them is through a multivariate Taylor expansion. If you don't know what that is, then forget what I just wrote. If you take the derivatives of the model specification, you'll see that your coefficients are the Taylor series coefficients.
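Concretely (a restatement of the point above, not from the original answer): for the quadratic specification $y = a + bx + cx^2$, differentiation gives $$ \frac{dy}{dx} = b + 2cx, \qquad \frac{d^2y}{dx^2} = 2c, $$ so $b = y'(0)$ and $c = \tfrac{1}{2}y''(0)$ are exactly the Taylor coefficients of the fitted mean around $x=0$: $b$ is the slope at the origin, and $2c$ is the constant curvature.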
38,193
Sign of coefficients in linear regression vs. the sign of correlation
Expanding on @Maarten's answer: Suppose you are predicting the damage done by a fire. First, you look at one IV: the number of firefighters called to the scene. To your surprise, you find a strong positive relationship: more firefighters, more damage. Then you think of adding "size of fire" to the equation; the relationship between firefighters and damage will now be negative. That's mediation. Another way this can happen is moderation: where the two IVs interact.
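This sign flip is easy to reproduce in a simulation (entirely my own construction; the variable names and coefficients are invented for illustration): damage is driven up by fire size and down by firefighters, while firefighters are called in proportion to size.

```python
import random

random.seed(3)
n = 5000
size = [random.gauss(5.0, 1.0) for _ in range(n)]
firefighters = [2.0 * s + random.gauss(0.0, 0.5) for s in size]
damage = [3.0 * s - 1.0 * f + random.gauss(0.0, 0.5)
          for s, f in zip(size, firefighters)]

def cov(u, v):
    mu, mv = sum(u) / len(u), sum(v) / len(v)
    return sum((a - mu) * (b - mv) for a, b in zip(u, v)) / len(u)

# marginal relationship: firefighters and damage move together (positive)
marginal = cov(firefighters, damage)

# partial coefficient of firefighters, controlling for size (two-predictor OLS)
v1, v2, c12 = cov(firefighters, firefighters), cov(size, size), cov(firefighters, size)
b_ff = (v2 * cov(damage, firefighters) - c12 * cov(damage, size)) / (v1 * v2 - c12 ** 2)
```

Here marginal is positive while b_ff recovers the true value of about -1: more firefighters predict more damage only until fire size enters the model.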
38,194
Sign of coefficients in linear regression vs. the sign of correlation
The statement is true iff you included only one explanatory variable. I suppose you know that the regression coefficient in such a regression is just $r_{x,y}\frac{s_y}{s_x}$, where $r_{x,y}$ is the correlation coefficient, and $s_y$ and $s_x$ are the standard deviations of $y$ and $x$ respectively. Since standard deviations cannot be negative, the ratio $\frac{s_y}{s_x}$ will always be positive, so the sign of the regression coefficient is determined solely by the sign of the correlation. However, things are very different when you add more than one explanatory variable.
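A quick numerical confirmation of the identity $\hat b = r_{x,y}\, s_y / s_x$ (my own sketch; any simulated data would do, since the identity is algebraic):

```python
import math
import random

random.seed(4)
x = [random.gauss(0.0, 2.0) for _ in range(1000)]
y = [1.5 * xi + random.gauss(0.0, 1.0) for xi in x]

mx, my = sum(x) / len(x), sum(y) / len(y)
sxx = sum((xi - mx) ** 2 for xi in x)
syy = sum((yi - my) ** 2 for yi in y)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))

slope = sxy / sxx                  # OLS slope of y on x
r = sxy / math.sqrt(sxx * syy)     # sample correlation
sy_over_sx = math.sqrt(syy / sxx)  # ratio of standard deviations
```

slope equals r * sy_over_sx exactly (up to floating point), so with a single regressor the slope's sign is the correlation's sign.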
38,195
Sign of coefficients in linear regression vs. the sign of correlation
Assume you have a regression with two regressors (plus a constant), $$y_i = a + b_1x_{1i}+b_2x_{2i} + u_i$$ Then if you work the normal equations (a bit tedious) you will find that $$\hat b_1 = \frac {\operatorname {\hat Var}(X_2)\cdot \operatorname {\hat Cov}(Y,X_1) - \operatorname {\hat Cov}(X_1,X_2)\cdot \operatorname {\hat Cov}(Y,X_2)}{\operatorname {\hat Var}(X_1)\cdot\operatorname {\hat Var}(X_2)\cdot [1-\hat \rho_{1,2}^2]} $$ where the hat indicates sample variances (without the bias correction term), and covariances, and $\hat \rho_{1,2}$ is the sample correlation coefficient between the two regressors. The denominator is always positive, so the sign of the estimated coefficient depends on the numerator. Then if you have (for example) $$0 < \operatorname {\hat Cov}(Y,X_1) < \frac {\operatorname {\hat Cov}(X_1,X_2)\cdot \operatorname {\hat Cov}(Y,X_2)}{\operatorname {\hat Var}(X_2)}$$ which is perfectly possible, then you will have positive pair-wise correlation between the dependent variable and regressor $X_1$, but negative coefficient of this regressor in the context of multiple regression. In other words, if one examines the dependent variable and regressor $X_1$ alone, they tend to move together (i.e. one will obtain a positive coefficient in the context of simple regression), but if regressor $X_2$ is present the marginal effect of $X_1$ on the dependent variable emerges as negative. This is an instance of the famous "sign reversal paradox", which is not really a paradox. Intuition (for this case)? If $X_2$ strongly correlates positively with the dependent variable and $X_1$, then in the simple regression the apparent positive relation between $Y$ and $X_1$ is due to the underlying effect of $X_2$ which is absent. When $X_2$ enters the specification, it takes on this positive effect, and "reveals" that the "pure" effect of $X_1$ on the dependent variable is, after all, negative. 
Note that the simple regression here would constitute a case of "omitted variables bias" in the estimation, since $X_2$ does belong to the specification. In some fields, $X_2$ is called a "confounder". Analogous results hold of course for the other coefficient, or for more than two regressors. See also Positive correlation and negative regressor coefficient sign
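This sign reversal is easy to reproduce numerically. The sketch below is my own illustration (not part of the original answer): the variable names and coefficient values are invented for the demonstration. It simulates data in which $X_1$ correlates positively with $Y$ on its own, yet its multiple-regression coefficient is negative:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# Confounder X2 drives both X1 and Y upwards.
x2 = rng.normal(size=n)
x1 = 0.7 * x2 + 0.7 * rng.normal(size=n)
# True model: the "pure" effect of X1 on Y is -1.
y = 2.0 * x2 - 1.0 * x1 + 0.1 * rng.normal(size=n)

# Pairwise, Y and X1 move together ...
corr_y_x1 = float(np.corrcoef(y, x1)[0, 1])

# ... but once X2 enters the regression, X1's coefficient is negative.
X = np.column_stack([np.ones(n), x1, x2])
b = np.linalg.lstsq(X, y, rcond=None)[0]  # b = [intercept, b1, b2]
```

With these (made-up) coefficients, `corr_y_x1` comes out positive while the estimated `b[1]` recovers the negative value $-1$: the omitted-variable pattern described above.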
38,196
Is "independent and identically distributed" an assumption or a fact?
In practice, being independent and identically distributed is an assumption; it may sometimes be a good approximation, but it's next to impossible to demonstrate that it actually holds. Generally, the best you can do is show that it doesn't fail too badly. This is what diagnostics, and to some extent hypothesis tests, attempt to do. For example, one might look at an ACF of residuals (for data observed in sequence) to see if there's any obvious serial correlation, which would mean that independence didn't hold; but having small sample correlations doesn't imply independence. [If you're trying to assess assumptions for some statistical procedure -- or especially if you're trying to choose between possible procedures -- it's generally best to avoid hypothesis tests for that purpose. Hypothesis tests don't answer the question you really need answered for such a purpose, and using the data to choose in that manner will affect the properties of whichever procedure you subsequently choose. If you must test something like that, avoid testing on the same data you're running the main analysis on.]
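As a small illustration of the ACF diagnostic mentioned above (this code is my addition; the sample size and the AR coefficient are arbitrary choices), one can compare the lag-1 sample autocorrelation of genuinely independent residuals with that of serially correlated ones:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5_000

def lag1_acf(x):
    """Lag-1 sample autocorrelation."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

# Independent residuals: lag-1 autocorrelation near zero.
e_iid = rng.normal(size=n)

# AR(1) residuals with rho = 0.6: clearly visible serial correlation.
e_ar = np.empty(n)
e_ar[0] = rng.normal()
for t in range(1, n):
    e_ar[t] = 0.6 * e_ar[t - 1] + rng.normal()

r_iid = lag1_acf(e_iid)   # close to 0
r_ar = lag1_acf(e_ar)     # close to 0.6
```

A near-zero ACF is consistent with independence but, as the answer says, does not prove it; dependence can hide in forms a lag-1 autocorrelation does not detect.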
38,197
Is "independent and identically distributed" an assumption or a fact?
Just to add to the discussion: this is mostly an assumption that simplifies the mathematics of inference. To take a concrete example, I work in image processing, and most algorithms there will assume that the noise in the image is IID. This is hardly ever the case, because most of the time we do some pre-processing on the image (e.g. smoothing or averaging), and this will introduce correlation among neighbouring pixels. Also, pixels belonging to similar structures will have similar properties, and the point spread function of the measurement device, among other things, will make the IID assumption strictly untrue. In any real-world case it usually turns out to be an assumption, and whether that assumption is acceptable depends on what you are trying to achieve.
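The point that smoothing induces correlation can be demonstrated with a tiny sketch (mine, not the answer author's; the window length and sample size are arbitrary): take IID noise, apply a 3-tap moving average, and compare neighbouring-sample correlations before and after.

```python
import numpy as np

rng = np.random.default_rng(2)

def lag1_corr(x):
    """Correlation between neighbouring samples."""
    x = x - x.mean()
    return float(x[:-1] @ x[1:] / (x @ x))

raw = rng.normal(size=100_000)                           # IID noise
smooth = np.convolve(raw, np.ones(3) / 3, mode="valid")  # 3-tap moving average

r_raw = lag1_corr(raw)        # near 0: neighbours are independent
r_smooth = lag1_corr(smooth)  # near 2/3: smoothing has correlated them
```

For a length-3 moving average over IID samples the theoretical lag-1 correlation is $2/3$, so even a mild pre-processing step destroys independence between neighbours.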
38,198
Is "independent and identically distributed" an assumption or a fact?
De Finetti would say that conditional independence is a logical consequence of your assumption that the sequence of random variables whose values you can observe in your experiment is exchangeable.
38,199
Is "independent and identically distributed" an assumption or a fact?
It depends on the problem but iid is usually an assumption based on two random variables being approximately independent and identically distributed (or at least we have good reason to believe they are). In most cases where we assume iid, we can't make the claim of perfect independence or that the distributions of the two random variables are perfectly identical, but we make the assumption anyway and then check the assumption based on the data. However, there are some cases when iid could be considered a "fact." For example, consider an experiment where you put a single die in a cup, shake the cup, and roll the die. If you do this twice, I do not think anyone would have trouble accepting as fact that the two rolls of the die are iid.
38,200
Why does statistical significance increase with data, BUT the effects may not be meaningful?
I think this comes from the fact that in the real world you don't really expect the standard null hypothesis to be true. If you're comparing the means of two populations, the null hypothesis says that $\mu_1 = \mu_2$, that is, the two means are exactly equal. In many situations, however, a more accurate null hypothesis would say that $\mu_1$ and $\mu_2$ are almost equal (whatever that means). For small sample sizes, the difference between means will only give a low p-value if the measured difference is relatively large. However, for sufficiently large sample sizes even a tiny difference in means can become statistically significant, even though for practical purposes the numbers are the same. There is some good information on this question here as well: Why is "statistically significant" not enough?
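A quick simulation (my illustration; the effect size of 0.02 standard deviations and the sample sizes are invented for the demonstration) shows how the same tiny mean difference moves from unremarkable to overwhelmingly significant as $n$ grows, while remaining practically negligible:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
delta = 0.02  # a practically negligible mean difference

def two_sided_p(a, b):
    """Two-sided p-value from a large-sample z-test for a difference in means."""
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (b.mean() - a.mean()) / se
    return math.erfc(abs(z) / math.sqrt(2.0))  # P(|Z| > |z|)

# Small samples: the tiny difference is typically far from significant.
p_small = two_sided_p(rng.normal(0.0, 1.0, 200), rng.normal(delta, 1.0, 200))

# Huge samples: the same tiny difference becomes "highly significant".
p_big = two_sided_p(rng.normal(0.0, 1.0, 1_000_000),
                    rng.normal(delta, 1.0, 1_000_000))
```

The effect size (here Cohen's $d \approx 0.02$) never changes; only the standard error shrinks with $n$, which is why statistical significance alone says nothing about practical importance.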