Dataset columns: idx (int64, values 1 to 56k); question (string, lengths 15 to 155); answer (string, lengths 2 to 29.2k); question_cut (string, lengths 15 to 100); answer_cut (string, lengths 2 to 200); conversation (string, lengths 47 to 29.3k); conversation_cut (string, lengths 47 to 301).
33,401
Should the word "component" be singular or plural in the name for PCA?
When Hotelling first used the words 'analysis', 'component(s)' and 'principal' together, it was in the plural form. But it wasn't exactly PCA yet: he wrote of the 'method of principal components', or, in the title of his paper, 'Analysis of a complex of statistical variables into principal components.' In most other languages the plural form is used. For instance, in French it is 'Analyse en composantes principales' and in German it is 'Hauptkomponentenanalyse'. In English, however, the singular form is more common, simply because that is the usual way attributive nouns work in English. See the canonical question on the English Language forum, 'When are attributive nouns plural?', and there are many other related questions asking, for instance, about 'number(s) analysis'.
33,402
Should the word "component" be singular or plural in the name for PCA?
I learned it as "principal components analysis" and I find some others insisting that the singular is better. You buy books at a "book store", not at a "books store", and you won't touch something with a ten-foot pole, not a ten-feet pole, etc. That is a trait of the English language. In German it's exactly the other way around: the plural is used for things like this. So "principal component analysis" follows the pattern usually followed in English.
33,403
Should the word "component" be singular or plural in the name for PCA?
If you want to rely on a Google search as a "majority vote" criterion, I think you should put quotes around the three words. This is what I get: "Principal component analysis": 12,400,000 results; "Principal componentS analysis": 2,880,000 results. Wikipedia's article is also without the S.
33,404
Counter intuitive Bayesian theorem
On the face of it your assumptions are inconsistent, in that you think more people will default than have smartphones, yet you also think all defaulters have smartphones. Part of the problem is that some of your assumptions are for users of your app and some are for the whole population, and you treat these as being for the same group. If instead you just consider users of your app, you might have $P(B)=0.8$, $P(A)=1$ and $P(A \mid B)=1$. This now gives $P(B \mid A)= P(A\mid B) \, P(B) \, / \, P(A) = 1 \times 0.8 / 1 = 0.8$, and there are no problems there, apart from the lack of value in considering $A$, since all users of your app have smartphones.
33,405
Counter intuitive Bayesian theorem
You have an inconsistent set of assumptions. It is like saying that 80% of the world's population like soccer, and that 100% of the people who like soccer also like tennis, which implies that at least 80% of the population like tennis. But then you say that only 50% of the population like tennis...! In order to derive $P(A)$, you could first specify $P(A\vert B^c)$ and then calculate $P(A) = P(A\vert B)P(B) + P(A\vert B^c)(1-P(B))$. Or you could derive $P(A)$ directly from other reasoning, but in a way that is consistent with your previous assumptions.
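As a minimal numerical sketch of that last point in R (the value 0.3 for $P(A\vert B^c)$ below is made up purely for illustration and is not part of the original question):

p_B <- 0.8             # P(B): probability of default
p_A_given_B <- 1       # P(A|B): every defaulter has a smartphone
p_A_given_notB <- 0.3  # P(A|B^c): hypothetical smartphone rate among non-defaulters
p_A <- p_A_given_B * p_B + p_A_given_notB * (1 - p_B)   # law of total probability: P(A) = 0.86
p_B_given_A <- p_A_given_B * p_B / p_A                  # Bayes' theorem: about 0.93, a valid probability

With $P(A)$ derived consistently, Bayes' theorem can no longer return a value above 1.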
33,406
Counter intuitive Bayesian theorem
You've written: P(B|A) = P(A|B) * P(B) / P(A) = 1 * 0.8 / 0.5 = 1.6, by which you mean: P(default|smartphone) = P(smartphone|default) * P(default) / P(smartphone) = 1.6, which seems wrong, and indeed is. The problem here is that you have forgotten there is an implicit condition in some of these probabilities, namely that the person actually has a loan (or, equivalently, that some of the probabilities refer to different populations). So in fact the numbers you have used are: P(smartphone|default,loan) * P(default|loan) / P(smartphone), which leads to a nonsense answer because P(smartphone) isn't matched on the "has a loan" condition (it's the probability that any random person has a phone).

For Bayes' rule to work here you would need to use the probability that someone has a smartphone given they have a loan, which, since you note that "users need a smartphone to install my app", will of course be 1, leading to the (correct, but) not very useful result that: P(default|loan,smartphone) = P(phone|default) * P(default|loan) / P(phone|loan) = 1 * 0.8 / 1 = 0.8, i.e. you learn nothing to refine your prior from the fact that the user has a smartphone. This is intuitively obvious, since they need a smartphone to even have the chance to default, so all defaulting borrowers AND non-defaulting borrowers have smartphones (the only people who might not have smartphones are those who don't have loans).

As an aside, we could do some rough analysis like: P(default|smartphone) = P(smartphone|default) * P(default) / P(smartphone) = P(loan) * P(default|loan) / P(smartphone) = P(default) / P(smartphone), which suggests that the probability that a random person who has a smartphone defaults on a loan is higher than the probability that a random person in general does. This makes sense, since the fraction of the general population who don't have a smartphone can't get a loan, so can't possibly default. But again this is hardly informative.
33,407
Plotting confidence interval bars from summary statistics
In MATLAB, you might want to try the errorbar function: http://www.mathworks.de/de/help/matlab/ref/errorbar.html

Alternatively, you can do it the dumb and manual way. For example, given a matrix of data points "a", you can calculate your means using m = mean(a), calculate your CIs (depending on what CI you need), and plot the results by hand.

Demonstration if you already know the mean and CI, assuming the CIs are in a matrix CI (lower bounds in the first row, upper bounds in the second row) and the means are in a vector a:

plot(1:length(CI), a, 'o', 'markersize', 10)        % plot the means
hold on;
plot(1:length(CI), CI(1,:), 'v', 'markersize', 6)   % plot lower CI boundaries
plot(1:length(CI), CI(2,:), '^', 'markersize', 6)   % plot upper CI boundaries
for I = 1:length(CI)
    line([I I], [CI(1,I) CI(2,I)])                  % connect upper and lower bound with a line
end
axis([0 length(CI)+1 min(CI(1,:))*0.75 max(CI(2,:))*1.25])   % scale the axes

Demonstration in the case where you know the individual measurements, for a repeated-measures experiment, 3+ conditions, one condition per column, one subject per row in matrix a, no missing samples, 95% CI as given by MATLAB's ttest():

[H,P,CI] = ttest(a);                                % calculate 95% CIs for every column in matrix a; CIs are now in the matrix CI!
plot(1:length(CI), mean(a), 'o', 'markersize', 10)  % plot the means
hold on;
plot(1:length(CI), CI(1,:), 'v', 'markersize', 6)   % plot lower CI boundaries
plot(1:length(CI), CI(2,:), '^', 'markersize', 6)   % plot upper CI boundaries
for I = 1:length(CI)
    line([I I], [CI(1,I) CI(2,I)])                  % connect upper and lower bound with a line
end
axis([0 length(CI)+1 min(CI(1,:))*0.75 max(CI(2,:))*1.25])   % scale the axes
33,408
Plotting confidence interval bars from summary statistics
See if this helps you. An R solution:

# install.packages("plotrix")   # if plotrix is not yet installed
require(plotrix)

par(mfrow=c(2,1))   # stack the two charts in one column

# Dataset 1
upperlimit = c(10,12,8,14)
lowerlimit = c(5,9,4,7)
mean = c(8,10,6,12)
df = data.frame(cbind(upperlimit, lowerlimit, mean))
plot(df$mean, ylim = c(0,30), xlim = c(1,4))
plotCI(df$mean, y=NULL, uiw=df$upperlimit-df$mean, liw=df$mean-df$lowerlimit,
       err="y", pch=20, slty=3, scol="black", add=TRUE)

# Dataset 2
upperlimit_2 = upperlimit*1.5
lowerlimit_2 = lowerlimit*0.8
mean_2 = upperlimit_2 - lowerlimit_2
df_2 = data.frame(cbind(upperlimit_2, lowerlimit_2, mean_2))
plot(df_2$mean_2, ylim = c(0,30), xlim = c(1,4))
plotCI(df_2$mean_2, y=NULL, uiw=df_2$upperlimit_2-df_2$mean_2, liw=df_2$mean_2-df_2$lowerlimit_2,
       err="y", pch=20, slty=3, scol="black", add=TRUE)

rm(upperlimit, lowerlimit, mean, df, upperlimit_2, lowerlimit_2, mean_2, df_2)   # remove the objects from the workspace
par(mfrow=c(1,1))   # back to the default (one graph at a time)
33,409
Plotting confidence interval bars from summary statistics
This type of plot can be made in R using ggplot2, though you might have to do some fiddling with the axis font size:

library(ggplot2)

data.estimates = data.frame(
  var = c('1', '2', '3', '4', '5', '6', '7', '8', '9'),
  par = c(1.12210, 0.18489, 1.22011, 1.027446235, 0.43521, 0.53464, 1.93316, -0.43806, -0.12029),
  se  = c(0.42569, 0.32162, 0.58351, 0.771608551, 0.24803, 0.65372, 0.92717, 0.45939, 0.51558))

data.estimates$idr   <- exp(data.estimates$par)
data.estimates$upper <- exp(data.estimates$par + (1.96*data.estimates$se))
data.estimates$lower <- exp(data.estimates$par - (1.96*data.estimates$se))

p2 <- ggplot(data.estimates, aes(var, idr, size=10)) + theme_bw(base_size=10)
p2 + geom_point() +
  geom_errorbar(aes(x = var, ymin = lower, ymax = upper, size=2), width = 0.2) +
  scale_y_log10(limits=c(0.1, 50), breaks=c(0.1, 0.5, 1, 5, 10, 25, 50)) +
  xlab("Site") + ylab("RR")
33,410
Plotting confidence interval bars from summary statistics
This could be done in R with points() (or plot(..., type="p")) and segments(). There might also be R functions designed to create the CIs for you, but those might require the original data. The multiple panels in the same figure can be created with par(mfrow=c(4,1)). If you don't know any R, this would be hard to do easily (as in, you would have to learn a bit more R or get someone to help with your specific data set); a minimal sketch is given below.
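A minimal base-R sketch of that approach, assuming the summary statistics are already available as vectors (the numbers here are made up for illustration):

means <- c(8, 10, 6, 12)
lower <- c(5, 9, 4, 7)
upper <- c(10, 12, 8, 14)
x <- seq_along(means)

plot(x, means, pch=16, ylim=range(lower, upper), xlab="Group", ylab="Estimate")   # means as points
segments(x0=x, y0=lower, x1=x, y1=upper)                                          # vertical bar from lower to upper CI limit
# wrapping this in par(mfrow=c(4,1)) and repeating it would stack four such panels in one figure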
33,411
Plotting confidence interval bars from summary statistics
In Stata use serrbar or ciplot (SSC) or eclplot (Stata Journal, SSC).
33,412
Plotting confidence interval bars from summary statistics
Assuming you have access to the original data, you can do this in R with the lineplot.CI function in the sciplot library. Example with the mtcars dataset:

lineplot.CI(x.factor=gear, response=mpg, group=vs, data=mtcars)

Note that lineplot.CI plots SE bars by default; this can be changed by passing a new function to the ci.fun argument in order to plot 95% CIs:

lineplot.CI(x.factor=gear, response=mpg, group=vs, data=mtcars,
            ci.fun=function(x) c(mean(x)-1.96*se(x), mean(x)+1.96*se(x)))
33,413
Plotting confidence interval bars from summary statistics
GraphPad Prism can easily make this kind of graph, plotting error bars from error values you enter. Create a grouped table formatted for entry of the mean, -error and +error.
33,414
When fitting a linear regression model, is it always recommended to plot the residuals?
Another good reason to plot residuals is to check the linearity assumption. If the residuals look similar whatever the predicted value, then your model seems fine. If the residuals are small for small predicted values and large for large predicted values, then the assumption of linearity does not seem good; in that case, I would try predicting the log of the Y value instead. Another thing to look at is whether the residuals are normally distributed - if not, then again there might be ways to make a better model.
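A small simulated example in R of the kind of check described above (the data are made up purely for illustration): the residuals fan out against the fitted values for the raw response, but look much more even after taking the log of Y.

set.seed(1)
x <- runif(200, 1, 10)
y <- exp(0.5 * x + rnorm(200, sd = 0.3))   # multiplicative noise, so residuals grow with the prediction

fit.raw <- lm(y ~ x)
fit.log <- lm(log(y) ~ x)

par(mfrow = c(1, 2))
plot(fitted(fit.raw), resid(fit.raw), main = "Raw Y")    # residuals grow with the fitted value
abline(h = 0, lty = 2)
plot(fitted(fit.log), resid(fit.log), main = "log(Y)")   # residuals look roughly even
abline(h = 0, lty = 2)
par(mfrow = c(1, 1))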
33,415
When fitting a linear regression model, is it always recommended to plot the residuals?
Recommendations are free, so people are always happy to recommend doing more work when they are not the ones who have to do it. That being said, plotting your data, including your predictions, is a good practice that will help you notice problems with the data that you have no way to spot from the numbers alone (see the famous Anscombe's quartet, for example). Typing plot(resid(model)), plot(predicted, observed) or plot(observed-predicted, observed) is really not that much work. Of course, if you have 100,000 models, one for each gene or each voxel in a brain, then nobody is checking residuals (but maybe they should).

Edit, in answer to the comment: many of the assumptions are important for inference but not for prediction, so if the assumptions are violated but the predictions are good, it's not really a big deal, in the sense that it won't make the predictions invalid. However, if the residuals are not independent, that means there is an effect in the data that you could model better, hence improving your predictions. If they are not normal, then you might consider using a more robust model, which would again improve your predictions (according to some metrics). Finally, if they are not homoskedastic, then you can sometimes improve the model fit by modeling the variance, or you can realize that you are using the wrong model, e.g. Gaussian instead of Poisson/logistic, and you can fix it and again improve your predictions. I am not saying it will always make a large difference or that you must satisfy your assumptions, but sometimes it helps.
33,416
When fitting a linear regression model, is it always recommended to plot the residuals?
"The model's predictions are accurate" - If by "predictions", you mean "predictions from real-world data", then this is something that you almost certainly do not know for sure. Why? Almost always by "test set", we mean a set of data that have been bootstrapped out of our training data. So, if our training data aren't perfectly representative of data that your model will encounter in the wild (which is what you're going to be applying your model to, if prediction is your goal), then neither is your test data. Almost surely your training data are not perfectly representative of data you will see in the wild. Is it good enough? You have to check for these sorts of things. Your overall perspective on the model evaluation process is backward. We need to be scientists about the matter. Being a scientist about the matter means that you're not looking for evidence that your model is right... it means meticulously looking for evidence that your model could be wrong. The depth that you have delved for ways that you could be wrong is the true test of any hypothesis. Your hypothesis is "this linear regression model will do a good job predicting future Y from future X." You want to look in every corner for things that might suggest that this hypothesis is wrong. Obviously, you cannot check every corner, because there are infinitely many corners, many of which are so dark that you cannot see into them. Still, you should check the corners that you can check. One of those corners you check by holding out a test set. This is usually to check the "overfitting" corner - Is my model just regurgitating the training data, or is it actually doing some kind of generalization? However, there are many other corners. One of these that is extremely common to crop up in real life is the "data isn't representative" corner. This means that you trained your model on the data that you had, but the data that you had isn't a very good picture of all of the data that the model sees in real life. Checking out the residuals is one way to convince yourself that this might be a way for your model to fail (and remember, you are meticulously looking for ways that your model can fail). So, when you see this sort of thing: You should be thinking to yourself "okay, well what happens when my model is confronted with $x=2.5$?" Why? Because the data "curves", and the model doesn't (i.e., my residuals aren't independent of X). If your model might well be confronted with $x=2.5$ in the real world, then your training data isn't very representative of the real world, is it? You can see this because we don't have anything like $x=2.5$ in our training data. If you don't have anything like $x=2.5$ in your training data, then you don't have anything like it in your test data either. Bootstrapping test data out of training data doesn't fix problems with your training data! Anyway, the point here isn't that you should or shouldn't look at the residuals. The point here is that you are taking entirely the wrong perspective on how to evaluate your model. You shouldn't be focused on finding evidence that your model is good. You should be focused on finding evidence that your model is BAD! If you look really really hard for evidence that your model is bad, and don't find any, then and only then should you begin having confidence that your model is "good". Checking out the regression coefficient is one way to do this. Checking the residuals is another one of many ways to check for "common" pitfalls where models turn out to be "bad".
33,417
When fitting a linear regression model, is it always recommended to plot the residuals?
Accuracy is Everything

As others have noted, you want your model to be accurate before you start screaming from the rooftops that it is useful. If your regression says that candy consumption predicts weight loss, you had better hope that is actually true by checking whether or not it is valid. To give a visual example, we can fit a typical regression in R with the iris dataset, predicting petal dimensions for each flower. Afterwards we can plot the residuals:

#### Fit Model ####
fit <- lm(Petal.Length ~ Petal.Width, iris)

#### Plot Residuals ####
plot(density(resid(fit)))

which look fairly normal.

Bad Fit and Consequences

If we fit a really bad model, it can have grave consequences even if the predictors are significant. Consider if we decided to just add zero to the model this time and plot the residuals:

#### Misspecify Model ####
fit.bad <- lm(Petal.Length ~ 0, iris)

#### Plot Again ####
plot(density(resid(fit.bad)))

The residuals are now strangely bimodal. There is a clear reason why: since the model isn't actually modeling anything (the formula ~ 0 fits no intercept and no predictors), we are just getting the density of the raw values of petal length. As proof, we can run the following code:

#### Plot Density of Variable ####
plot(density(iris$Petal.Length))

and sure enough, the plot is exactly the same.

Takeaway Message

To ensure your model is behaving, always check performance. The results may come off as exciting until you find out your model isn't a model at all (or at least a very poor one).
33,418
Can a variable be normally distributed on finite interval?
No, it cannot, at least not if by "distributed as" you mean exactly. The range of the normal distribution extends from minus infinity to plus infinity. As a practical matter, if the variance is sufficiently small, say on the order of $(0.1)^2$, then a variable constrained to $(0,1)$ can be approximately normally distributed.
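A quick check in R of how little probability mass such a distribution puts outside $(0,1)$ (taking a mean of 0.5 and a standard deviation of 0.1 as an illustrative example):

# probability outside (0,1) for a Normal(0.5, 0.1^2) variable
pnorm(0, mean = 0.5, sd = 0.1) + pnorm(1, mean = 0.5, sd = 0.1, lower.tail = FALSE)
# about 5.7e-07, i.e. negligible, so the normal approximation is harmless here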
33,419
Can a variable be normally distributed on finite interval?
The answer to your literal question is "no", but the larger implicit question of how you should model your data is more complicated. As Jim says, a truncated normal model is one option. You can also look into converting your probabilities to log odds, which range from $-\infty$ to $\infty$, or into the Beta distribution, as Nick Cox mentions. The Central Limit Theorem does in some sense apply to your data, but the CLT only says that the data approach the normal distribution in the limit; it doesn't say that any particular distribution for a finite sample size is normal. That is, for any level of precision there is some sample size for which the distribution is normal to within that level of precision, but that doesn't mean you have a large enough sample for it to be normal to the level of precision you need. You mention in the comments that the probabilities are small, which likely means the data are skewed. The more skewed the data are, the larger the sample size needed to reach a particular level of precision using the CLT. So you might want to look into approximating with a skewed distribution, such as the Poisson. Depending on the data, you could converge to such a distribution faster than to the normal. In the worst-case scenario, you can probably use Chebyshev bounds.
33,420
Can a variable be normally distributed on finite interval?
By definition the normal distribution has support $(-\infty, \infty)$. You may want to look into the truncated normal distribution. It can have bounded support $[a,b]$. Quoting from its wiki: [...] the truncated normal distribution is the probability distribution derived from that of a normally distributed random variable by bounding the random variable from either below or above (or both).
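For reference, the truncated normal density on $[a,b]$ is just the normal density renormalised over the truncation interval:

$$f(x;\mu,\sigma,a,b) = \frac{\tfrac{1}{\sigma}\,\phi\!\left(\tfrac{x-\mu}{\sigma}\right)}{\Phi\!\left(\tfrac{b-\mu}{\sigma}\right)-\Phi\!\left(\tfrac{a-\mu}{\sigma}\right)}, \qquad a \le x \le b,$$

where $\phi$ and $\Phi$ are the standard normal pdf and cdf.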
33,421
Can a variable be normally distributed on finite interval?
Many situations are not exactly normally distributed. Possibly most practical situations are not truly normally distributed (when we model human height or weight by a normal distribution, does that mean we consider negative values?). The normal distribution is a distribution of many numbers: when you have a sum of many effects/variables, the distribution will approximately follow the normal distribution. The first application of the normal distribution (or something that looks like it) dates back to de Moivre, who used it as a model to approximate a binomial distribution (which does not have infinite support), which can be considered as a sum of many Bernoulli-distributed variables. The question for you is whether your particular situation allows the use of a normal approximation. You have mentioned in the comments a mean/sum of 400k samples; that sounds very much like an (approximately) normally distributed variable (although, depending on your goals, you might still wish to investigate more than just the mean of your sample, and gather more information from the distribution of the individual samples, which is likely not normal, since we are speaking of a few individual numbers). Below is an image of a histogram (and normal approximation) of $X/400000$ with $X \sim \text{Binom}(n=400000, p=0.04)$. This variable ranges from 0 to 1.
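A small simulation along the lines of that figure (the 10,000 replications below are an arbitrary choice, just to make the histogram smooth):

n <- 400000
p <- 0.04
props <- rbinom(10000, size = n, prob = p) / n           # simulated proportions, which live in [0,1]
hist(props, breaks = 50, freq = FALSE,
     main = "X/400000 with X ~ Binom(400000, 0.04)")
curve(dnorm(x, mean = p, sd = sqrt(p * (1 - p) / n)),
      add = TRUE, lwd = 2)                               # normal approximation laid on top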
33,422
Can a variable be normally distributed on finite interval?
Strictly speaking, a variable defined on a finite interval cannot be normally distributed. However, as mentioned previously it can be approximately so. In addition, in some cases it can be transformed to a normally distributed variable. For example, the Pearson correlation coefficient between two independent variables, which is restricted to a finite interval ($-1\le r\le1$), can be transformed to an approximately normally distributed variable $z$ using the Fisher transformation: $$z = {1\over2}\ln{1+r\over1-r}$$
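A small simulation sketch of that transformation (the sample size, number of replications and use of independent normal samples are arbitrary choices for illustration): the sampling distribution of $z$ is approximately normal, with standard deviation close to $1/\sqrt{n-3}$.

set.seed(42)
n <- 30
z <- replicate(5000, {
  x <- rnorm(n)
  y <- rnorm(n)                      # two independent variables, as in the answer
  r <- cor(x, y)
  0.5 * log((1 + r) / (1 - r))       # Fisher's z, identical to atanh(r)
})
hist(z, breaks = 40, freq = FALSE)   # looks approximately normal, centred at 0
c(sd(z), 1 / sqrt(n - 3))            # empirical sd is close to the theoretical value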
33,423
Success Stories of "Statistics"? [closed]
The whole history of statistics is full of them. For example, $t$-tests were born in the Guinness brewery as a means of optimizing its processes: the t-distribution, also known as Student's t-distribution, gets its name from William Sealy Gosset, who first published it in English in 1908 in the scientific journal Biometrika under his pseudonym "Student", because his employer preferred staff to use pen names rather than their real names when publishing scientific papers, so he used the name "Student" to hide his identity. Gosset worked at the Guinness Brewery in Dublin, Ireland, and was interested in the problems of small samples - for example, the chemical properties of barley with small sample sizes. Linear regression was discovered by Carl Friedrich Gauss to predict planetary motion in astronomy. As for the Poisson distribution: a further practical application was made by Ladislaus Bortkiewicz in 1898, when he was given the task of investigating the number of soldiers in the Prussian army killed accidentally by horse kicks; this experiment introduced the Poisson distribution to the field of reliability engineering. Pierre-Simon Laplace applied Bayes' theorem to estimate the mass of Saturn in the 1800s, and his result was off by just 0.05%. Everything in statistics was discovered for solving practical problems, and these methods gained popularity because they proved useful. Yes, it's not self-driving cars, but I doubt we would have self-driving cars today if Gauss hadn't done his research on least squares in the 1800s. Every machine learning textbook mentions Bayes' theorem, which was first studied by Thomas Bayes in the late 1700s. The examples are countless.
33,424
Success Stories of "Statistics"? [closed]
The German Tank Problem is a statistical approach to estimating a population size given a sample. The goal is to estimate the total number of items $N$, given a random sample of the population which has observable serial numbers from $1$ to $N$. The problem is so named for its real-life application, in which Allied intelligence agencies wanted to estimate the number of German tanks being produced during World War II. By observing the serial numbers on a limited number of destroyed tanks, statisticians were able to infer the total number of tanks in the population with remarkable accuracy. Post-war analysis revealed that the statistical estimates were often superior to the estimates generated from conventional intelligence methods. Other statistical tools developed during wartime include the Receiver Operating Characteristic (ROC) curve, which is a means of evaluating a classifier. WWII radar operators would have to classify radar blips as either enemy planes, or false alarms like birds or weather. The development of the ROC curve allowed a principled means of evaluating the performance of an individual radar operator, indicating whether they could correctly and reliably identify enemy aircraft, or if they would require further training. The ROC curve is used in many fields from medicine to meteorology to evaluate the performance of a classification method.
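The answer above does not spell out the estimator itself; a commonly cited choice is the minimum-variance unbiased estimator N-hat = m(1 + 1/k) - 1, where m is the largest observed serial number and k the number of observed tanks. A minimal R simulation with purely illustrative numbers:

set.seed(42)
N <- 276      # true production count (unknown to the analyst)
k <- 10       # number of captured tanks with readable serial numbers
est <- replicate(10000, {
  m <- max(sample(1:N, k))    # largest serial number seen
  m * (1 + 1 / k) - 1         # minimum-variance unbiased estimator
})
mean(est)                     # averages out very close to the true N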
Success Stories of "Statistics"? [closed]
The German Tank Problem is a statistical approach to estimating a population size given a sample. The goal is to estimate the total number of items $N$, given a random sample of the population which h
Success Stories of "Statistics"? [closed] The German Tank Problem is a statistical approach to estimating a population size given a sample. The goal is to estimate the total number of items $N$, given a random sample of the population which has observable serial numbers from $1$ to $N$. The problem is so named for its real-life application, in which Allied intelligence agencies wanted to estimate the number of German tanks being produced during World War II. By observing the serial numbers on a limited number of destroyed tanks, statisticians were able to infer the total number of tanks in the population with remarkable accuracy. Post-war analysis revealed that the statistical estimates were often superior to the estimates generated from conventional intelligence methods. Other statistics developed during wartime include the Receiver Operator Characteristic (ROC) curve, which is a means of evaluating a classifier. WWII radar operators would have to classify radar blips as either enemy planes, or false alarms like birds or weather. The development of the ROC curve allowed a principled means of evaluating the performance of an individual radar operator, indicating whether they could correctly and reliably identify enemy aircraft, or if they would require further training. The ROC curve is used in many fields from medicine to meteorology to evaluate the performance of a classification method.
Success Stories of "Statistics"? [closed] The German Tank Problem is a statistical approach to estimating a population size given a sample. The goal is to estimate the total number of items $N$, given a random sample of the population which h
33,425
Success Stories of "Statistics"? [closed]
To add some medical examples to the excellent cases already cited by others: Richard Doll established the link between smoking and lung cancer. Although he was a medical doctor, he established the link using epidemiological techniques. Florence Nightingale, the Lady of the Lamp. The common perception is that she spent nights in hospitals during the Crimean War tending to wounded soldiers. In fact, she spent her time compiling statistics that demonstrated that vastly more deaths were caused by injury and disease than by direct enemy action on the battlefield. One of the drawings she used to illustrate her point still bears her name: the Nightingale plot. She was the first female member of the Royal Statistical Society. John Snow identified the source of the Broad Street cholera outbreak in 1854. The first generally recognised modern clinical trial was conducted by James Lind, a Royal Navy surgeon, who in 1747 demonstrated that citrus fruit cured scurvy (a disease we now know is caused by a lack of vitamin C). As a result of his work, the Royal Navy began to issue a daily ration of lemons to its sailors, thereby giving rise to the use of the (not very flattering in some quarters) soubriquet of "Limey" for Britons.
Success Stories of "Statistics"? [closed]
To add some medical examples to the excellent cases already cited by others: Richard Doll established the link between smoking and lung cancer. Although a medical doctor, the link was established usi
Success Stories of "Statistics"? [closed] To add some medical examples to the excellent cases already cited by others: Richard Doll established the link between smoking and lung cancer. Although a medical doctor, the link was established using epidemiological techniques. Florence Nightingale, the Lady of the Lamp. The common perception is that she spent nights in hospitals during the Crimean War tending to wounded soldiers. In fact, she spent her time compiling statistics that demonstrated that vastly more of deaths were caused by injury and disease rather than direct enemy action on the battle field. One of the drawings she used to illustrate her point still bears her name: the Nightingale plot. She was the first female member of the Royal Statistical Society. John Snow identified the source of the Broad Street cholera outbreak in 1854. The first generally recognised modern clincal trial was conducted by James Lind, a Royal Navy surgeon, who in 1747 established that lack of vitamin C was the cause of scurvy. As a result of his work, the Royal Navy began to issue a daily ration of lemons to its sailors, thereby giving rise to the use of the (not very flattering in some quarters) soubriquet of "Limey" for Britons.
Success Stories of "Statistics"? [closed] To add some medical examples to the excellent cases already cited by others: Richard Doll established the link between smoking and lung cancer. Although a medical doctor, the link was established usi
33,426
Success Stories of "Statistics"? [closed]
Success stories of statistics are everywhere. The reason that you don't see newspaper articles about statistical success stories is not because they rarely happen; it's because they happen so often that it's not considered news. Some examples of successful applications of statistics are: every successful scientific study involving a large number of individuals or measurements; every time an organization makes a good decision based on a large amount of data; every time an instrument is correctly calibrated or tested by taking a large number of measurements; and every time someone makes a successful prediction about the future based on a large number of things that happened in the past. And there are millions of examples of each of these. If you want to find a statistical success story in the news, look at any news article about any great achievement. Statistics plays a role in everything.
Success Stories of "Statistics"? [closed]
Success stories of statistics are everywhere. The reason that you don't see newspaper articles about statistical success stories is not because they rarely happen; it's because they happen so often th
Success Stories of "Statistics"? [closed] Success stories of statistics are everywhere. The reason that you don't see newspaper articles about statistical success stories is not because they rarely happen; it's because they happen so often that it's not considered news. Some examples of successful applications of statistics are: Every successful scientific study involving a large number of individuals or measurements Every time an organization makes a good decision based on a large amount of data Every time an instrument is correctly calibrated or tested by taking a large number of measurements Every time someone makes a successful prediction about the future based on a large number of things that happened in the past And there are millions of examples of each of these. If you want to find a statistical success story in the news, look at any news article about any great achievement. Statistics plays a role in everything.
Success Stories of "Statistics"? [closed] Success stories of statistics are everywhere. The reason that you don't see newspaper articles about statistical success stories is not because they rarely happen; it's because they happen so often th
33,427
Success Stories of "Statistics"? [closed]
When we think of classical statistical models such as regression models, it seems more difficult to think of equally well known and successful applications of such models. But in terms of the classical statistical models such as regression, have there been any successful applications of these models on a similar scale to the successful applications of machine learning? Although there are many unsuccessful or wrong applications in science, I would say that science is for a large part a demonstration of successful applications. To answer the question why classical models aren't often in the news, let's divide statistics into the inference and algorithmic viewpoints (Efron & Hastie, 2021) or, similarly, the prediction and explanation viewpoints (Yarkoni & Westfall, 2017). Then, your machine learning examples all belong in the prediction category. I think these attract so much attention because they offer the basis for automated systems. Even more so, they offer automated models with which companies can earn money, so they are incentivized to spend money on selling the models. Conversely, successful applications of inference often do not mention the model; only the outcome.
Success Stories of "Statistics"? [closed]
When we think of classical statistical models such as regression model, it seems more difficult to think of equally well known and successful applications of such models. But in terms of the classic
Success Stories of "Statistics"? [closed] When we think of classical statistical models such as regression model, it seems more difficult to think of equally well known and successful applications of such models. But in terms of the classical statistical models such as regression, have their been any successful applications of these models on a similar scale to the successful applications of machine learning? Although there are many unsuccessful or wrong applications in science, I would say that science is for a large part a demonstration of successful applications. To answer the question why classical models aren't often in the news, let's divide statistics in the inference and algorithmic viewpoints (Efron & Hastie, 2021) or, similarly, the prediction and explanation viewpoints (Yarkoni & Westfall, 2017). Then, your machine learning examples all belong in the prediction category. I think these obtain so much attention because they offer the basis for automated systems. Even more so, they offer automated models which companies can use to earn money with, so they are incentivized to spend money on selling the models. Conversely, successful applications of inference often do not mention the model; only the outcome.
Success Stories of "Statistics"? [closed] When we think of classical statistical models such as regression model, it seems more difficult to think of equally well known and successful applications of such models. But in terms of the classic
33,428
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic?
The assumption that the relationships are the same at a finer level of aggregation is exactly the ecological fallacy. The more general problem of the relationship depending on how you aggregate is the Modifiable Areal Unit Problem (MAUP).
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl
The assumption that the relationships are the same at a finer level of aggregation is exactly the ecological fallacy. The problem, more generally, of the relationship depending on how you aggregate is
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic? The assumption that the relationships are the same at a finer level of aggregation is exactly the ecological fallacy. The problem, more generally, of the relationship depending on how you aggregate is the Modifiable Areal Unit Problem
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl The assumption that the relationships are the same at a finer level of aggregation is exactly the ecological fallacy. The problem, more generally, of the relationship depending on how you aggregate is
33,429
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic?
+1 to Thomas' answer. That said, this is not always a bad idea. For instance, in forecasting, we frequently have a large number of noisy time series that we can reasonably expect to share some common dynamics. In such cases, it's common practice to estimate these common dynamics on an aggregate level and then impose them on the separate series we are interested in. A common example is the impact of yearly seasonality on retail sales: you see the seasonality on, say, ice cream well enough if you aggregate over multiple stock keeping units (SKUs) and stores, but often not on the disaggregate SKU $\times$ store level. So people will aggregate total sales, estimate seasonality and push this down to the disaggregate series. This approach typically helps forecasting accuracy. In the end, this is again a case of the bias-variance tradeoff: this idea will inject some bias into the lower level models, but reduce variance, compared to estimating (say) seasonality on the lower levels. But then again, not including seasonality on the lower levels will do exactly the same. Either approach may be better than modeling seasonality on the disaggregate level - or they may both be worse, depending on the situation.
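As a rough sketch of the idea in R (simulated data and illustrative object names only, not a recipe for any particular forecasting package): estimate multiplicative seasonal indices on the aggregate series and push them down to a single SKU.

set.seed(1)
months <- rep(1:12, times = 3)                                         # three years of monthly data
seas   <- rep(20 * sin(2 * pi * (1:12) / 12), times = 3)               # shared seasonal pattern
sales  <- sapply(1:50, function(sku) 100 + seas + rnorm(36, sd = 30))  # 50 noisy SKU-level series
total  <- rowSums(sales)                                               # aggregate across SKUs
idx    <- tapply(total, months, mean) / mean(total)                    # seasonal indices from the aggregate
fc_sku1 <- mean(sales[, 1]) * idx                                      # push the aggregate seasonality down to SKU 1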
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl
+1 to Thomas' answer. That said, this is not always a bad idea. For instance, in forecasting, we frequently have a large number of noisy time series that we can reasonably expect to share some common
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic? +1 to Thomas' answer. That said, this is not always a bad idea. For instance, in forecasting, we frequently have a large number of noisy time series that we can reasonably expect to share some common dynamics. In such cases, it's common practice to estimate these common dynamics on an aggregate level and then impose them on the separate series we are interested in. A common example is the impact of yearly seasonality on retail sales: you see the seasonality on, say, ice cream well enough if you aggregate over multiple stock keeping units (SKUs) and stores, but often not on the disaggregate SKU $\times$ store level. So people will aggregate total sales, estimate seasonality and push this down to the disaggregate series. This approach typically helps forecasting accuracy. In the end, this is again a case of the bias-variance tradeoff: this idea will inject some bias into the lower level models, but reduce variance, compared to estimating (say) seasonality on the lower levels. But then again, not including seasonality on the lower levels will do exactly the same. Either approach may be better than modeling seasonality on the disaggregate level - or they may both be worse, depending on the situation.
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl +1 to Thomas' answer. That said, this is not always a bad idea. For instance, in forecasting, we frequently have a large number of noisy time series that we can reasonably expect to share some common
33,430
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic?
The question is addressed generally in the field referred to as Aggregate Analysis. Here, for example, is an extract from a paper in this area: Aggregate analysis has been established as a standard method on the study of market response behavior for a long time. Aggregation has advanced our understanding of the linkages among social characteristics and aggregate response behavior. However, aggregate analysis has been hindered by fragmentary and unsystematic procedures to determine the most appropriate level of aggregation. The general objective of this paper is to provide a conceptual framework to determine the level of aggregation of variables in data analysis. In addition, statistical procedures are suggested in this framework to verify and to determine the level of aggregation represented by a variable. The conceptual framework is useful for deciding if the variables are to be analyzed from micro-analysis focus or macro-analysis focus. The statistical procedures enable the researcher to systematically identify and verify the level(s) of aggregation of variables in an existing data set. A key take-away is the question of whether "variables are to be analyzed from micro-analysis focus or macro-analysis focus". My personal experience of applying a macro-level, company-developed prediction model at the field-office level, and then aggregating in the hope of a better company-wide forecast, was that it proved somewhat unsuccessful. There can apparently be different cross-currents occurring locally (perhaps requiring an expanded model). In the literature, there is also a reference to micro-level heterogeneity, which upon aggregation may (or may not) largely cancel. With luck, one can achieve a parsimonious model that actually forecasts more accurately with company-level data. It may also avoid producing conflicting results. Generally speaking, model misspecification occurring at the local level may result in bias, which, upon aggregating, could degrade forecast quality.
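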
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl
The question is addressed generally in the field referred to Aggregate Analysis. Here, for example, is an extract from a paper in this area: Aggregate analysis has been established as a standard meth
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily problematic? The question is addressed generally in the field referred to Aggregate Analysis. Here, for example, is an extract from a paper in this area: Aggregate analysis has been established as a standard method on the study of market response behavior for a long time. Aggregation has advanced our understanding of the linkages among social characteristics and aggregate response behavior. However, aggregate analysis has been hindered by fragmentary and unsystematic procedures to determine the most appropriate level of aggregation. The general objective of this paper is to provide a conceptual framework to determine the level of aggregation of variables in data analysis. In addition, statistical procedures are suggested in this framework to verify and to determine the level of aggregation represented by a variable. The conceptual framework is useful for deciding if the variables are to be analyzed from micro-analysis focus or macro-analysis focus. The statistical procedures enable the researcher to systematically identify and verify the level(s) of aggregation of variables in an existing data set. A key take-away is whether "variables are to be analyzed from micro-analysis focus or macro-analysis focus". My personal experience upon applying a macro company developed prediction model at the field office level, and then aggregating for a hopefully a better company-wide forecast, proved to be somewhat unsuccessful. There can be apparently different cross-currents occurring locally (perhaps requiring an expanded model). In the literature, there is also a reference to micro-level heterogeneity, which upon aggregation may (or may not) largely cancel. With luck, one can achieve a parsimonious model that is actually more accurate forecasting with company-level data. It may also avoid producing conflicting results. Generally speaking, model misspecification occurring at the local level may result in bias, which upon aggregating, could degrade forecast quality.
Is there a name for applying estimation at a lower level of aggregation, and is it necessarily probl The question is addressed generally in the field referred to Aggregate Analysis. Here, for example, is an extract from a paper in this area: Aggregate analysis has been established as a standard meth
33,431
Why do I need statistical power for AB testing if my results are significant?
Power is generally something you calculate before you perform a study. For example, let's say you are trying to test whether medication A is more effective than medication B. Because of the costs involved, each new participant is expensive. So you calculate the minimum effect size you want to be able to detect (e.g. it lowers blood pressure by 10 points) and then determine from that information what sample size you would need to detect a 10-point difference in treatment. Let's say the power analysis says you need 40 participants. Now let's say that the actual difference between treatment A and B is much larger than your minimum, say 30 points. You would be able to detect this difference with a much smaller sample size. The point of your power analysis is to set a minimum effect size you qualitatively feel you need to detect. So, power analysis isn't something you really ever do after a study, especially if your results are significant. If your results are significant, they're significant. No strings attached (well, at least related to power).
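For illustration, the prospective calculation can be done in R with power.t.test; the 10-point and 30-point differences come from the example above, while the standard deviation of 15 is an assumed value.

power.t.test(delta = 10, sd = 15, sig.level = 0.05, power = 0.80)   # about 37 participants per group
power.t.test(delta = 30, sd = 15, sig.level = 0.05, power = 0.80)   # only a handful per group

The same design that needs dozens of participants for a 10-point difference detects a 30-point difference with far fewer, which is exactly the point made above.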
Why do I need statistical power for AB testing if my results are significant?
Power is generally something you calculate before you perform a study. For example, let's say you are trying to test whether medication A is more effective than medication B. Because of some cost, eac
Why do I need statistical power for AB testing if my results are significant? Power is generally something you calculate before you perform a study. For example, let's say you are trying to test whether medication A is more effective than medication B. Because of some cost, each new participant is really expensive. So you calculate the minimum effect size you want to be able to detect (e.g. it lowers blood pressure by 10 points) and then determine from that information what sample size you would need to detect a 10 point difference in treatment. Let's say the power analysis says you need 40 participants. Now let's say that the actual difference between treatment A and B is much larger than you minimum--- say 30 points. You would be able to detect this difference with a much smaller sample size. The point of your power analysis is to set a minimum effect size you qualitatively feel you need to detect. So, power analysis isn't something you really ever do after a study, especially if your results are significant. If your results are significant, they're significant. No strings attached (well, at least related to power).
Why do I need statistical power for AB testing if my results are significant? Power is generally something you calculate before you perform a study. For example, let's say you are trying to test whether medication A is more effective than medication B. Because of some cost, eac
33,432
Why do I need statistical power for AB testing if my results are significant?
You are absolutely, exactly, completely right. This precise argument has been published by Hoenig & Heisey, "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis" (2001, The American Statistician). Actually, they frame it the other way around: people often use "post hoc power" after finding no significant effect, and this "power calculation" "shows" that their study was underpowered to find the effect size they did find. But of course, in a precisely analogous way to yours, that is just a reformulation of the fact that a p value larger than 0.05 is logically equivalent to power that is too low to detect the observed effect at $\alpha=0.05$.
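The equivalence is easy to see numerically. A minimal R sketch for a two-sided z-test at alpha = 0.05, where "observed power" means plugging the observed effect back in as if it were the true one:

obs_power <- function(p) {
  z <- qnorm(1 - p / 2)                                 # |z| implied by the two-sided p-value
  pnorm(z - qnorm(0.975)) + pnorm(-z - qnorm(0.975))    # power if the true effect equalled the observed one
}
obs_power(0.05)   # exactly 0.50: p = 0.05 corresponds to 50% "post hoc power"
obs_power(0.20)   # about 0.25: a larger p always maps to lower observed power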
Why do I need statistical power for AB testing if my results are significant?
You are absolutely, exactly, completely right. This precise argument has been published by Hoenig & Heisey, "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis" (2001, T
Why do I need statistical power for AB testing if my results are significant? You are absolutely, exactly, completely right. This precise argument has been published by Hoenig & Heisey, "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis" (2001, The American Statistician). Actually, they frame it the other way around: people often use "post hoc power" after finding no significant effect, and this "power calculation" "shows" that their study was underpowered to find the effect size they did find. But of course, in a precisely analogous way to yours, that is just a reformulation of the fact that a p value larger than 0.05 is logically equivalent to power that is too low to detect the observed effect at $\alpha=0.05$.
Why do I need statistical power for AB testing if my results are significant? You are absolutely, exactly, completely right. This precise argument has been published by Hoenig & Heisey, "The Abuse of Power: The Pervasive Fallacy of Power Calculations for Data Analysis" (2001, T
33,433
Why do I need statistical power for AB testing if my results are significant?
In addition to its use in deciding on a required sample size before a study (described in Tanner Phillips' excellent answer), there's another reason to care about statistical power: low statistical power can be a sign of the file drawer problem. It is true that if you run a single study and get a significant result, then the statistical power of your design is at this point irrelevant. It's a calculation of how likely something that already happened was to happen, which isn't really useful information to you after your study is done. However, there's another way to end up with a significant result in a study despite low power: Run lots of trials (or use lots of different dependent variables, or analyze your data lots of different ways, use your imagination), each of which is poorly powered to detect an effect and probably won't work, and then publish whichever one turns out significant by chance. Thus, when a reader of a paper notices that the study design described therein is not sufficiently powered to reliably detect typical effect sizes for its domain, they have to decide which is more likely: The study authors had a theoretical reason to expect the effect size to be larger than is typical for their domain, and they turned out to be right. The study authors are engaging in some p-hacking. We would all like to live in a world where the former was more common, but many scientific fields that rely most heavily on inferential statistics are currently in the middle of reckoning with the frequency of the latter. This argument has been made most notably by John Ioannidis in his paper Why Most Published Research Findings are False.
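A small simulation in R makes the selection effect concrete (all numbers are illustrative): each individual test has only about 20% power, yet selectively reporting the best of 20 such tests yields a "significant" finding almost every time.

set.seed(7)
one_study <- function(n = 10, d = 0.5) t.test(rnorm(n, d), rnorm(n))$p.value  # underpowered two-sample test
best_p <- replicate(2000, min(replicate(20, one_study())))                    # keep only the best of 20 attempts
mean(best_p < 0.05)                                                           # nearly every "paper" reports a hit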
Why do I need statistical power for AB testing if my results are significant?
In addition to its use in deciding on a required sample size before a study (described in Tanner Phillips' excellent answer), there's another reason to care about statistical power: low statistical p
Why do I need statistical power for AB testing if my results are significant? In addition to its use in deciding on a required sample size before a study (described in Tanner Phillips' excellent answer), there's another reason to care about statistical power: low statistical power can be a sign of the file drawer problem. It is true that if you run a single study and get a significant result, then the statistical power of your design is at this point irrelevant. It's a calculation of how likely something that already happened was to happen, which isn't really useful information to you after your study is done. However, there's another way to end up with a significant result in a study despite low power: Run lots of trials (or use lots of different dependent variables, or analyze your data lots of different ways, use your imagination), each of which is poorly powered to detect an effect and probably won't work, and then publish whichever one turns out significant by chance. Thus, when a reader of a paper notices that the study design described therein is not sufficiently powered to reliably detect typical effect sizes for its domain, they have to decide which is more likely: The study authors had a theoretical reason to expect the effect size to be larger than is typical for their domain, and they turned out to be right. The study authors are engaging in some p-hacking. We would all like to live in a world where the former was more common, but many scientific fields that rely most heavily on inferential statistics are currently in the middle of reckoning with the frequency of the latter. This argument has been made most notably by John Ioannidis in his paper Why Most Published Research Findings are False.
Why do I need statistical power for AB testing if my results are significant? In addition to its use in deciding on a required sample size before a study (described in Tanner Phillips' excellent answer), there's another reason to care about statistical power: low statistical p
33,434
Why do I need statistical power for AB testing if my results are significant?
Let us consider a test of whether $\mu = 0$ or $\mu \neq 0$. Well, let's measure $\mu$! Alas, there is always statistical variation in the outcome of a measurement. Let's call the scale of the noise $\Delta\mu$. If your measurement was low-powered, it means that the anticipated effect size, $\mu^\star$, wasn't much bigger than the level of noise $\Delta\mu$. Thus, we should be worried if we appear to be able to significantly distinguish a new effect of size $\mu^\star$ from noise. Slightly more formally, if the study is low-powered, whilst a significant result is rare under $H_0$ (that rate is, by definition, $\alpha$), it is also rare under the anticipated effect size under $H_1$ (that rate is, by definition, the power)! So what can we really conclude? These kinds of considerations led Birnbaum to propose a measure of evidence against the null given by the ratio, $$ \frac{\text{power}}{\alpha} $$ such that low power implies weaker evidence against the null. More formally again, if you denote the prior odds that an effect is real by $R$, and consider simple hypotheses, the probability that an effect is real given a significant result is $$ P = \frac{\text{power} \cdot R}{\text{power} \cdot R + \alpha} $$ This follows simply from Bayes' theorem. So truly, low-powered studies result in weaker evidence. See e.g., this article for further discussion (I'm sure there are heaps more).
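Plugging illustrative numbers into the last formula in R (prior odds R = 0.25 and alpha = 0.05 are assumed values):

ppv <- function(power, R, alpha = 0.05) power * R / (power * R + alpha)
ppv(power = 0.80, R = 0.25)   # about 0.80: a well-powered significant result is probably real
ppv(power = 0.15, R = 0.25)   # about 0.43: the same "significant" finding from a low-powered study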
Why do I need statistical power for AB testing if my results are significant?
Let us consider a test of whether $\mu = 0$ or $\mu \neq 0$. Well, let's measure $\mu$! Alas, there is always statistical variation in the outcome of a measurement. Let's call the scale of the noise $
Why do I need statistical power for AB testing if my results are significant? Let us consider a test of whether $\mu = 0$ or $\mu \neq 0$. Well, let's measure $\mu$! Alas, there is always statistical variation in the outcome of a measurement. Let's call the scale of the noise $\Delta\mu$. If your measurement was low-powered, it means that the anticipated effect size, $\mu^\star$, wasn't much bigger than the level of noise $\Delta\mu$. Thus, we should be worried if we appeared to able to significantly distinguish a new effect of size $\mu^\star$ from noise. Slightly more formally, if the study is low-powered, whilst a significant result is rare under $H_0$ (the rate given by definition by $\alpha$), it is also rare under the anticipated effect size under $H_1$ (the rate given by definition by power)! So what can we really conclude? These kinds of considerations led Birnbaum to propose a measure of evidence against the null of the ratio, $$ \frac{\text{power}}{\alpha} $$ such that low-power implies weaker evidence against the null. More formally again, if you denote the odds that an effect is real by $R$, and consider simple hypotheses, the probability that an effect is real given a significant result is $$ P = \frac{\text{power} \cdot R}{\text{power} \cdot R + \alpha} $$ This follows simply by Bayes theorem. So truly, low-powered studies result in weaker evidence. See e.g., this article for further discussion (I'm sure there are heaps more).
Why do I need statistical power for AB testing if my results are significant? Let us consider a test of whether $\mu = 0$ or $\mu \neq 0$. Well, let's measure $\mu$! Alas, there is always statistical variation in the outcome of a measurement. Let's call the scale of the noise $
33,435
Why do I need statistical power for AB testing if my results are significant?
There ARE times where it is appropriate to determine statistical power after a result is generated. If you have a very large sample size, then even a small difference between group A and group B will be statistically significant because the power to detect this difference is very high. Power depends on three things: your alpha level, the difference between group A and group B ("effect size") you hope to be able to detect, and the sample size. Changing any one of them will change the power. It is also useful to compute the power of a study to determine why a result is not statistically significant. Many small studies, for example, are underpowered to detect a difference as significant only because the sample size is too small, or because the effect size is too small for the results to be significant with the sample size that was used. Many results have been dismissed based on a p-value alone, when in fact the study was underpowered to detect a difference, or the difference is in line with larger studies which were statistically significant because they had a larger sample size. In that case the problem is often a sample size issue, not an effect size issue. Sometimes just adding ONE more subject to the same experiment can push the results into statistical significance. A post-hoc power analysis can also determine the "achieved alpha" for a result, which is often much lower (or higher) than .05. I know that you should ideally state your hypothesis in advance, including alpha level and sample size and the effect size you desire to detect, but sometimes when you're exploring your data you stumble on a relationship that is significant and meaningful. It is not "data fishing" to report this. Indeed, the p-value has been over-relied upon in statistical results, when sample size, effect size, alpha, confidence intervals and statistical power are very important in explaining results, too. Journals are increasingly recognizing this. It's also important to do your homework and look at what other studies in your field have done in terms of sample sizes and effect sizes and alpha levels, and the resulting effect on power.
Why do I need statistical power for AB testing if my results are significant?
There ARE times where it is appropriate to determine statistical power after a result is generated. If you have a very large sample size, then even a small difference between group A and group B will
Why do I need statistical power for AB testing if my results are significant? There ARE times where it is appropriate to determine statistical power after a result is generated. If you have a very large sample size, then even a small difference between group A and group B will be statistically significant because the power to detect this difference is very high. Power depends on three things: your alpha level, the difference between group A and group B ("effect size") you hope to be able to detect, and the sample size. Changing any one of them will change the power. It is also useful to compute the power of a study to determine why a result is not statistically significant. Many small studies, for example, are underpowered to detect a difference as significant only because the sample size is too small, or because the effect size is too small for the results to be significant with the sample size that was used. Many results have been dismissed based on a p-value alone, when in fact the study was underpowered to detect a difference, or the difference is in line with larger studies which were statistically significant because they had a larger sample size. In that case the problem is often a sample size issue, not an effect size issue. Sometimes just adding ONE more subject to the same experiment can push the results into statistical significance. A post-hoc power analysis can also determine the "achieved alpha" for a result, which is often much lower (or higher) than .05. I know that you should ideally state your hypothesis in advance, including alpha level and sample size and the effect size you desire to detect, but sometimes when you're exploring your data you stumble on a relationship that is significant and meaningful. It is not "data fishing" to report this. Indeed, the p-value has been over-relied upon in statistical results, when sample size, effect size, alpha, confidence intervals and statistical power are very important in explaining results, too. Journals are increasingly recognizing this. It's also important to do your homework and look at what other studies in your field have done in terms of sample sizes and effect sizes and alpha levels, and the resulting effect on power.
Why do I need statistical power for AB testing if my results are significant? There ARE times where it is appropriate to determine statistical power after a result is generated. If you have a very large sample size, then even a small difference between group A and group B will
33,436
Test to know when to use GLM over Linear Regression?
As with many other cases in statistics, the goal of finding a single test to replace one's judgement is a bad one. There are several sources of information you can and should use while deciding: the theoretical expectation of the distribution, prior empirical work on the topic, the properties of the data (e.g. is it truncated or zero-inflated?), and the residual distributions and other diagnostics after fitting models. But there is no single, general test (or even a set of tests) that will tell you what to do. And there cannot be one. I recognise the intuitive appeal of having a decision tree to follow when making such a choice, especially in an area that is complex and new to you. But there are few hard boundaries in the areas you need to consider, and so this decision does not lend itself well to such a workflow. You need to use judgement, and developing that will take time and practice.
Test to know when to use GLM over Linear Regression?
As with many other cases in statistics, the goal of finding a single test to replace one's judgement is a bad one. There are several sources of information you can and should use while deciding: the
Test to know when to use GLM over Linear Regression? As with many other cases in statistics, the goal of finding a single test to replace one's judgement is a bad one. There are several sources of information you can and should use while deciding: the theoretical expectation of the distribution, prior empirical work on the topic, the properties of the data (e.g. is it truncated or zero-inflated?), and the residual distributions and other diagnostics after fitting models. But there is no single, general test (or even a set of tests) that will tell you what to do. And there cannot be one. I recognise the intuitive appeal of having a decision tree to follow when making such a choice, especially in an area that is complex and new to you. But there are few hard boundaries in the areas you need to consider, and so this decision does not lend itself well to such a workflow. You need to use judgement, and developing that will take time and practice.
Test to know when to use GLM over Linear Regression? As with many other cases in statistics, the goal of finding a single test to replace one's judgement is a bad one. There are several sources of information you can and should use while deciding: the
33,437
Test to know when to use GLM over Linear Regression?
Another great answer from @mkt on this forum. Here are a few more pointers you might find useful. GLMs include some widely used types of regression models: Binary Logistic Regression Models; Binomial Logistic Regression Models; Multinomial Logistic Regression Models; Ordinal Logistic Regression Models; Poisson Regression Models; Beta Regression Models; Gamma Regression Models. As pointed out by @COOLSerdash in his comment, beta regression models share some features - such as linear predictor, link function, dispersion parameter - with GLMs (GLMs; McCullagh and Nelder 1989), but are NOT special cases of the GLM framework. However, I included them in the above list because of their similarity with GLMs and their practical value. A good place to start would be to familiarize yourself with each of these types of models and when it might be used. Binary Logistic Regression Models These types of models are used to model the relationship between a binary dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the survival status of patients at a local hospital assessed 30 days following a surgical intervention for treating a particular disease such that Y = 1 for a patient who survived and Y = 0 for a patient who died. Furthermore, if p = 2, then X1 could represent Age (expressed in years) and X2 could represent gender. For all the subsequent examples below, it will be assumed that p = 2 and that X1 and X2 will have the same meaning as in the current example. Binomial Logistic Regression Models These types of models are used to model the relationship between a binomial dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the number of correct questions (out of 10) answered by patients on a questionnaire eliciting their knowledge of the symptoms associated with their disease. Multinomial Logistic Regression Models These types of models are used to model the relationship between a nominal dependent variable Y with more than 2 categories and a set of independent variables X1, ..., Xp. Ordinal Logistic Regression Models These types of models are used to model the relationship between an ordinal dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the degree of pain experienced by patients immediately after surgery, expressed on an ordinal scale from 1 to 5, where 1 stands for no pain and 5 stands for severe pain. Poisson Regression Models These types of models are used to model the relationship between a count dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the number of hospital days (out of 30) when patients had to use pain relieving medication following their surgery. Beta Regression Models These types of models are used to model the relationship between a dependent variable Y expressed as a continuous proportion taking values in the open interval (0,1) and a set of independent variables X1, ..., Xp. For example, if the disease in question is a brain disease, Y could represent the fraction of the brain area still affected by disease 30 days post-surgery relative to the total brain area for patients who survived the surgery. Gamma Regression Models These types of models are used to model the relationship between a positive-valued, continuous dependent variable Y and a set of independent variables X1, ..., Xp. 
For example, Y could represent the healthcare utilization costs of patients who survived up to the 30-day mark.
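In R, several of these model types map directly onto glm() family arguments; here is a minimal sketch with simulated data (all variable names are illustrative, and the remaining model types need the add-on packages noted in the comments):

set.seed(1)
n        <- 200
age      <- rnorm(n, 60, 10)
female   <- rbinom(n, 1, 0.5)
survived <- rbinom(n, 1, plogis(-2 + 0.03 * age))           # binary outcome
correct  <- rbinom(n, 10, 0.6)                              # number of correct answers out of 10
days_med <- rpois(n, exp(0.5 + 0.02 * age))                 # count outcome

glm(survived ~ age + female, family = binomial)                        # binary logistic regression
glm(cbind(correct, 10 - correct) ~ age + female, family = binomial)    # binomial logistic regression
glm(days_med ~ age + female, family = poisson)                         # Poisson regression
# Ordinal, multinomial, beta and gamma models are handled by MASS::polr, nnet::multinom,
# betareg::betareg and glm(..., family = Gamma(link = "log")), respectively.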
Test to know when to use GLM over Linear Regression?
Another great answer from @mkt on this forum. Here are a few more pointers you might find useful. GLMs include some widely used types of regression models: Binary Logistic Regression Models; Binomi
Test to know when to use GLM over Linear Regression? Another great answer from @mkt on this forum. Here are a few more pointers you might find useful. GLMs include some widely used types of regression models: Binary Logistic Regression Models; Binomial Logistic Regression Models; Multinomial Logistic Regression Models; Ordinal Logistic Regression Models; Poisson Regression Models; Beta Regression Models; Gamma Regression Models. As pointed out by @COOLSerdash in his comment, beta regression models share some features - such as linear predictor, link function, dispersion parameter - with GLMs (GLMs; McCullagh and Nelder 1989), but are NOT special cases of the GLM framework. However, I included them in the above list because of their similarity with GLMs and their practical value. A good place to start would be to familiarize yourself with each of these types of models and when it might be used. Binary Logistic Regression Models These types of models are used to model the relationship between a binary dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the survival status of patients at a local hospital assessed 30 days following a surgical intervention for treating a particular disease such that Y = 1 for a patient who survived and Y = 0 for a patient who died. Furthermore, if p = 2, then X1 could represent Age (expressed in years) and X2 could represent gender. For all the subsequent examples below, it will be assumed that p = 2 and that X1 and X2 will have the same meaning as in the current example. Binomial Logistic Regression Models These types of models are used to model the relationship between a binomial dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the number of correct questions (out of 10) answered by patients on a questionnaire eliciting their knowledge of the symptoms associated with their disease. Multinomial Logistic Regression Models These types of models are used to model the relationship between a nominal dependent variable Y with more than 2 categories and a set of independent variables X1, ..., Xp. Ordinal Logistic Regression Models These types of models are used to model the relationship between an ordinal dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the degree of pain experienced by patients immediately after surgery, expressed on an ordinal scale from 1 to 5, where 1 stands for no pain and 5 stands for severe pain. Poisson Regression Models These types of models are used to model the relationship between a count dependent variable Y and a set of independent variables X1, ..., Xp. For example, Y could represent the number of hospital days (out of 30) when patients had to use pain relieving medication following their surgery. Beta Regression Models These types of models are used to model the relationship between a dependent variable Y expressed as a continuous proportion taking values in the open interval (0,1) and a set of independent variables X1, ..., Xp. For example, if the disease in question is a brain disease, Y could represent the fraction of the brain area still affected by disease 30 days post-surgery relative to the total brain area for patients who survived the surgery. Gamma Regression Models These types of models are used to model the relationship between a positive-valued, continuous dependent variable Y and a set of independent variables X1, ..., Xp. 
For example, Y could represent the healthcare utilization costs of patients who survived up to the 30-day mark.
Test to know when to use GLM over Linear Regression? Another great answer from @mkt on this forum. Here are a few more pointers you might find useful. GLMs include some widely used types of regression models: Binary Logistic Regression Models; Binomi
33,438
Test to know when to use GLM over Linear Regression?
This was a reply to @Victor's comment on @mkt's answer, but it grew rather large, and I suppose it answers the question. The point of using a GLM is to allow different error distributions than Gaussian. Is the data generating process continuous, with a central tendency and can it take on both positive and negative values? Then a regular LM is a decent starting point. Is the answer to any of these questions no? Then determine which error distribution could be appropriate and start with a GLM or GAM using that error distribution. Isabella's answer provides some concrete examples of when to use which distribution. After this, you should always perform visual diagnostics. Your assumptions may be reasonable from a theoretical standpoint, but severely violated in practice. There is no singular method, or test for this process, because even in cases where the assumptions for a normal (or really any) distribution are violated, the model could still approximate the process well. Remember that all models are wrong. The point is to find one that is useful, and a good starting point for that is theoretical substantiation. Reserve tests only for comparison of a handful of candidate models. (Of course, this is assuming you are using your model for inference. For prediction problems, you shouldn't be looking at goodness-of-fit at all.)
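As a minimal illustration of the "pick a family from theory, then check visually" workflow in R (simulated count data; object names are illustrative):

set.seed(2)
x <- runif(200, 0, 3)
y <- rpois(200, lambda = exp(0.5 + 0.8 * x))        # a count-generating process
fit_lm  <- lm(y ~ x)                                # Gaussian starting point
fit_glm <- glm(y ~ x, family = poisson)             # family chosen from what we know about the process
par(mfrow = c(1, 2))
plot(fitted(fit_lm), resid(fit_lm), main = "LM residuals")                       # variance fans out
plot(fitted(fit_glm), resid(fit_glm, type = "pearson"), main = "GLM residuals")  # roughly constant spread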
Test to know when to use GLM over Linear Regression?
This was a reply to @Victor's comment on @mkt's answer, but it grew rather large, and I suppose it answers the question. The point of using a GLM is to allow different error distributions than Gaussia
Test to know when to use GLM over Linear Regression? This was a reply to @Victor's comment on @mkt's answer, but it grew rather large, and I suppose it answers the question. The point of using a GLM is to allow different error distributions than Gaussian. Is the data generating process continuous, with a central tendency and can it take on both positive and negative values? Then a regular LM is a decent starting point. Is the answer to any of these questions no? Then determine which error distribution could be appropriate and start with a GLM or GAM using that error distribution. Isabella's answer provides some concrete examples of when to use which distribution. After this, you should always perform visual diagnostics. Your assumptions may be reasonable from a theoretical standpoint, but severely violated in practice. There is no singular method, or test for this process, because even in cases where the assumptions for a normal (or really any) distribution are violated, the model could still approximate the process well. Remember that all models are wrong. The point is to find one that is useful, and a good starting point for that is theoretical substantiation. Reserve tests only for comparison of a handful of candidate models. (Of course, this is assuming you are using your model for inference. For prediction problems, you shouldn't be looking at goodness-of-fit at all.)
Test to know when to use GLM over Linear Regression? This was a reply to @Victor's comment on @mkt's answer, but it grew rather large, and I suppose it answers the question. The point of using a GLM is to allow different error distributions than Gaussia
33,439
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Nothing went wrong. The adjusted p-values are correct. Adjusted $p=1$ simply means no evidence at all for rejecting the null hypothesis. However p.adjust(data2$raw.p, method = "holm") is always better than the Bonferroni adjustment. Holm's method, which is a step down Bonferroni adjustment, gives the same error rate control as Bonferroni but is more powerful (smaller p-values). As the help page for ?p.adjust says: There seems no reason to use the unmodified Bonferroni correction because it is dominated by Holm's method, which is also valid under arbitrary assumptions. For your specific experiment, there is so little evidence of real effects that you won't get any significant results even with Holm's method.
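A quick illustration in R, with made-up p-values standing in for data2$raw.p:

p <- c(0.003, 0.02, 0.04, 0.10, 0.35, 0.62, 0.81, 0.90)   # hypothetical raw p-values
p.adjust(p, method = "bonferroni")   # several values hit the ceiling of 1
p.adjust(p, method = "holm")         # never larger than Bonferroni, with the same error-rate control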
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Nothing went wrong. The adjusted p-values are correct. Adjusted $p=1$ simply means no evidence at all for rejecting the null hypothesis. However p.adjust(data2$raw.p, method = "holm") is always bett
Many p-values are equal to 1 after Bonferroni correction; is it normal? Nothing went wrong. The adjusted p-values are correct. Adjusted $p=1$ simply means no evidence at all for rejecting the null hypothesis. However p.adjust(data2$raw.p, method = "holm") is always better than the Bonferroni adjustment. Holm's method, which is a step down Bonferroni adjustment, gives the same error rate control as Bonferroni but is more powerful (smaller p-values). As the help page for ?p.adjust says: There seems no reason to use the unmodified Bonferroni correction because it is dominated by Holm's method, which is also valid under arbitrary assumptions. For your specific experiment, there is so little evidence of real effects that you won't get any significant results even with Holm's method.
Many p-values are equal to 1 after Bonferroni correction; is it normal? Nothing went wrong. The adjusted p-values are correct. Adjusted $p=1$ simply means no evidence at all for rejecting the null hypothesis. However p.adjust(data2$raw.p, method = "holm") is always bett
33,440
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Thanks for reading my 1997 JASA paper! If I had a do-over, I would rephrase my comment that a (single-step) Bonferroni adjusted p-value is not a probability "per se." (And I would no longer use the dreaded "per se." Yecchhhh!) The Bonferroni adjusted p is in fact an upper bound on the probability that the smallest (random) p-value is smaller than (smaller than or equal to in the discrete case) the given (fixed) p-value, assuming the complete null model describes the randomness. And certainly, 1.0 is an upper bound on any probability. But the bigger and more important point of my paper is that you can find these adjusted p-values exactly in such a way that accounts for the correlations between the multiple test statistics, assuming the classical linear model. These exact adjusted p-values are in fact probabilities when calculated in single-step fashion; see p. 302 of my JASA paper for the math. (To get the single-step p-values, you need to modify the expression somewhat; see my 1993 Wiley-Interscience book and my SAS book). While I used an enhanced Monte Carlo method to approximate this exact probability, better methods have been developed since; please see Hothorn, T., Bretz, F., and Westfall, P. (2008). Simultaneous Inference in General Parametric Models, Biometrical Journal 50(3), 346–363. So, single-step adjusted p-values, when computed exactly, are bona fide probabilities. But, except for the smallest one, step-down adjusted p-values are not bona fide probabilities. They are constructed from bona fide probabilities, but they are not probabilities. Hope this helps!
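For readers who want the exact, correlation-aware adjustments described here, one route in R is the multcomp package associated with the cited Hothorn, Bretz and Westfall paper; a minimal sketch on a built-in data set (the model and contrasts are purely illustrative):

library(multcomp)
fit  <- aov(breaks ~ tension, data = warpbreaks)    # built-in example data
comp <- glht(fit, linfct = mcp(tension = "Tukey"))  # all pairwise comparisons
summary(comp, test = adjusted("single-step"))       # correlation-aware single-step adjusted p-values
summary(comp, test = adjusted("bonferroni"))        # for comparison: the cruder Bonferroni bound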
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Thanks for reading my 1997 JASA paper! If I had a do-over, I would rephrase my comment that a (single-step) Bonferroni adjusted p-value is not a probability "per se." (And I would no longer use the dr
Many p-values are equal to 1 after Bonferroni correction; is it normal? Thanks for reading my 1997 JASA paper! If I had a do-over, I would rephrase my comment that a (single-step) Bonferroni adjusted p-value is not a probability "per se." (And I would no longer use the dreaded "per se." Yecchhhh!) The Bonferroni adjusted p is in fact an upper bound on the probability that the smallest (random) p-value is smaller than (smaller than or equal to in the discrete case) the given (fixed) p-value, assuming the complete null model describes the randomness. And certainly, 1.0 is an upper bound on any probability. But the bigger and more important point of my paper is that you can find these adjusted p-values exactly in such a way that accounts for the correlations between the multiple test statistics, assuming the classical linear model. These exact adjusted p-values are in fact probabilities when calculated in single-step fashion; see p. 302 of my JASA paper for the math. (To get the single-step p-values, you need to modify the expression somewhat; see my 1993 Wiley-Interscience book and my SAS book). While I used an enhanced Monte Carlo method to approximate this exact probability, better methods have been developed since; please see Hothorn, T., Bretz, F., and Westfall, P. (2008). Simultaneous Inference in General Parametric Models, Biometrical Journal 50(3), 346–363. So, single-step adjusted p-values, when computed exactly, are bona fide probabilities. But, except for the smallest one, step-down adjusted p-values are not bona fide probabilities. They are constructed from bona fide probabilities, but they are not probabilities. Hope this helps!
Many p-values are equal to 1 after Bonferroni correction; is it normal? Thanks for reading my 1997 JASA paper! If I had a do-over, I would rephrase my comment that a (single-step) Bonferroni adjusted p-value is not a probability "per se." (And I would no longer use the dr
33,441
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Just to add to @gordon-smyth and @student-t 's answer. Another way to look at it is to adjust the $\alpha$-level yourself instead of adjusting the $p$-values via the p.adjust() function. For the Bonferroni correction this is easy enough. If your $\alpha$-level is $0.05$, then you divide it by the number of tests, which then becomes your new Bonferroni-adjusted $\alpha$-level. In your case $0.05/24 \approx 0.00208$. As you can see, none of your raw.p's make that cut-off, so your output makes sense. If you want to use the Holm-Bonferroni method, you can also do this quickly by hand (see here).
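A sketch of both hand calculations in R, using 24 hypothetical raw p-values in place of the question's actual values:

set.seed(1)
raw.p <- sort(runif(24, min = 0.01, max = 0.9))   # 24 hypothetical raw p-values
m     <- length(raw.p)
alpha <- 0.05
alpha / m                                          # Bonferroni cut-off, about 0.00208
any(raw.p < alpha / m)                             # FALSE: nothing significant
# Holm step-down by hand: compare the i-th smallest p-value to alpha / (m - i + 1)
holm.cut <- alpha / (m - seq_len(m) + 1)
k <- which(raw.p > holm.cut)[1]                    # first comparison that fails
if (is.na(k)) m else k - 1                         # number of rejections (0 here)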
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Just to add to @gordon-smyth and @student-t 's answer. Another way to look at it is to adjust the $\alpha$-level yourself instead of adjusting the $p$-values via the p.adjust() function. For the Bonfe
Many p-values are equal to 1 after Bonferroni correction; is it normal? Just to add to @gordon-smyth and @student-t 's answer. Another way to look at it is to adjust the $\alpha$-level yourself instead of adjusting the $p$-values via the p.adjust() function. For the Bonferroni correction this is easy enough. If your $\alpha$-level is $0.05$, then you divide it by the number of tests, which then becomes your new Bonferroni-adjusted $\alpha$-level. In your case $0.05/24 \approx 0.00208$. As you can see, none of your raw.p's make that cut-off, so your output makes sense. If you want to use the Holm-Bonferroni method, you can also do this quickly by hand (see here).
Many p-values are equal to 1 after Bonferroni correction; is it normal? Just to add to @gordon-smyth and @student-t 's answer. Another way to look at it is to adjust the $\alpha$-level yourself instead of adjusting the $p$-values via the p.adjust() function. For the Bonfe
33,442
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Everything works as expected because Bonferroni can give you an adjusted p-value greater than one. The R function caps it at one because a probability over one makes no sense. This is an example of the loss of statistical power that the Bonferroni correction can cause. You may want to try other multiple-comparison methods or adjust the significance level.
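To see that the capping at one is all that distinguishes p.adjust from naive multiplication (illustrated with made-up p-values):

p <- c(0.04, 0.2, 0.5, 0.9)               # hypothetical raw p-values
length(p) * p                              # naive Bonferroni: 0.16 0.80 2.00 3.60 -- some exceed 1
pmin(1, length(p) * p)                     # capped at 1
p.adjust(p, method = "bonferroni")         # same as the capped version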
Many p-values are equal to 1 after Bonferroni correction; is it normal?
Everything works as expected because Bonferroni could give you adjusted p value greater than one. The R function rounded it off to one because a probability over one makes no sense. This is an example
Many p-values are equal to 1 after Bonferroni correction; is it normal? Everything works as expected because Bonferroni can give you an adjusted p-value greater than one. The R function caps it at one because a probability over one makes no sense. This is an example of the loss of statistical power that the Bonferroni correction can cause. You may want to try other multiple-comparison methods or adjust the significance level.
Many p-values are equal to 1 after Bonferroni correction; is it normal? Everything works as expected because Bonferroni could give you adjusted p value greater than one. The R function rounded it off to one because a probability over one makes no sense. This is an example
33,443
What programming language for statistical inference?
I couldn't agree more with a vote for R. R is the "Lingua Franca" of the statistics world. It is the definition of cutting edge: new methods usually appear in R first, while equivalent packages for MATLAB and SAS often take several months to arrive. The language is very simple to understand compared with SAS. It also gives you the power to connect with C/C++/Python and databases. Consider Revolution Analytics' version of R for a bit more performance. http://www.revolutionanalytics.com/products/revolution-r.php
What programming language for statistical inference?
I couldnt agree more with a vote for R. R is the "Lingua Franca" of the statistics world. It is the definition of cutting edge, while most packages for MATLAB and SAS take several months. The language
What programming language for statistical inference? I couldn't agree more with a vote for R. R is the "Lingua Franca" of the statistics world. It is the definition of cutting edge: new methods usually appear in R first, while equivalent packages for MATLAB and SAS often take several months to arrive. The language is very simple to understand compared with SAS. It also gives you the power to connect with C/C++/Python and databases. Consider Revolution Analytics' version of R for a bit more performance. http://www.revolutionanalytics.com/products/revolution-r.php
What programming language for statistical inference? I couldnt agree more with a vote for R. R is the "Lingua Franca" of the statistics world. It is the definition of cutting edge, while most packages for MATLAB and SAS take several months. The language
33,444
What programming language for statistical inference?
Well, you can PAY for MATLAB, and then either (1) program the stuff you really need from the ground up or (2) PAY MORE for MATLAB toolboxes. And discover that doing useful statistics in MATLAB was an afterthought handled in the increasingly less useful Statistics Toolbox. Or...you can download R for FREE and search for (and find!) the packages you need, which you can also download for FREE. Lots of small scale production stuff can be done in R. If you're doing something really big (think US census), you probably need to go learn SAS--and get your employer to pay for it.
What programming language for statistical inference?
Well, you can PAY for MATLAB, and then either (1) program the stuff you really need from the ground up or (2) PAY MORE for MATLAB toolboxes. And discover that doing useful statistics in MATLAB was an
What programming language for statistical inference? Well, you can PAY for MATLAB, and then either (1) program the stuff you really need from the ground up or (2) PAY MORE for MATLAB toolboxes. And discover that doing useful statistics in MATLAB was an afterthought handled in the increasingly less useful Statistics Toolbox. Or...you can download R for FREE and search for (and find!) the packages you need, which you can also download for FREE. Lots of small scale production stuff can be done in R. If you're doing something really big (think US census), you probably need to go learn SAS--and get your employer to pay for it.
What programming language for statistical inference? Well, you can PAY for MATLAB, and then either (1) program the stuff you really need from the ground up or (2) PAY MORE for MATLAB toolboxes. And discover that doing useful statistics in MATLAB was an
33,445
What programming language for statistical inference?
"Popularity" depends on the community and the definition of "statistics". World-wide, taking a broad view of "statistical inference" as including any methods of drawing conclusions or taking actions based on quantitative data, there is little question that Excel beats all other applications, including R, SAS, Stata, SPSS, and S-Plus. (The links point to different kinds of statistics, but they are highly suggestive, to say the least.) Python and MATLAB aren't even blips in the statistics. I am not saying that this is a good thing or that we should like it: that's just how it is and that's how it's going to stay for a very long time. We shouldn't draw any inferences from what may appear to be popular "here" in this forum. Commercial software vendors support their own forums, so naturally a place like SE will favor people using less actively supported software, especially free, open-source, and academic solutions.
What programming language for statistical inference?
"Popularity" depends on the community and the definition of "statistics". World-wide, taking a broad view of "statistical inference" as including any methods of drawing conclusions or taking actions
What programming language for statistical inference? "Popularity" depends on the community and the definition of "statistics". World-wide, taking a broad view of "statistical inference" as including any methods of drawing conclusions or taking actions based on quantitative data, there is little question that Excel beats all other applications, including R, SAS, Stata, SPSS, and S-Plus. (The links point to different kinds of statistics, but they are highly suggestive, to say the least.) Python and MATLAB aren't even blips in the statistics. I am not saying that this is a good thing or that we should like it: that's just how it is and that's how it's going to stay for a very long time. We shouldn't draw any inferences from what may appear to be popular "here" in this forum. Commercial software vendors support their own forums, so naturally a place like SE will favor people using less actively supported software, especially free, open-source, and academic solutions.
What programming language for statistical inference? "Popularity" depends on the community and the definition of "statistics". World-wide, taking a broad view of "statistical inference" as including any methods of drawing conclusions or taking actions
33,446
What programming language for statistical inference?
It should be clear by looking at the most popular tags that R is the most popular language on this site. Whether that makes it the most popular language for statistical analysis can't be inferred directly, but one might suppose as much.
What programming language for statistical inference?
It should be clear by looking at the most popular tags that R is the most popular language on this site. Whether that makes it the most popular language for statistical analysis can't be inferred dir
What programming language for statistical inference? It should be clear by looking at the most popular tags that R is the most popular language on this site. Whether that makes it the most popular language for statistical analysis can't be inferred directly, but one might suppose as much.
What programming language for statistical inference? It should be clear by looking at the most popular tags that R is the most popular language on this site. Whether that makes it the most popular language for statistical analysis can't be inferred dir
33,447
What programming language for statistical inference?
R and SAS each have their pros and cons. I think more statisticians need to embrace the fact that lots of great statistical software is available, rather than endlessly bicker about which is superior. R is free. SAS is very expensive. R gives you the ability to do just about anything. SAS may or may not. R has amazing graphical abilities. Seeing SAS graphics makes it feel like 1985 all over again. SAS has great customer support. R support = hours of searching mailing list archives. Also, with a name like "R", search engine results are often poor. R is extremely slow and does not deal well with large data sets. SAS does fine with large data sets. SAS tends to be more robust. In my experience, when it comes to mixed effects modeling or anything involving design of experiments (such as analyzing crossover designs), SAS is superior. For large scale, brute force simulations, I use Fortran. I used to use C, but have found Fortran much easier to use. I've never used MATLAB. If I need the statistical power of R but the speed of Fortran, I will write the time-intensive operations (i.e. loops) in Fortran and call the subroutine from R.
What programming language for statistical inference?
R and SAS have each their pros and cons. I think more statisticians need to embrace the fact that lots of great statistical software is available, rather than endlessly bicker about which is superior.
What programming language for statistical inference? R and SAS each have their pros and cons. I think more statisticians need to embrace the fact that lots of great statistical software is available, rather than endlessly bicker about which is superior. R is free. SAS is very expensive. R gives you the ability to do just about anything. SAS may or may not. R has amazing graphical abilities. Seeing SAS graphics makes it feel like 1985 all over again. SAS has great customer support. R support = hours of searching mailing list archives. Also, with a name like "R", search engine results are often poor. R is extremely slow and does not deal well with large data sets. SAS does fine with large data sets. SAS tends to be more robust. In my experience, when it comes to mixed effects modeling or anything involving design of experiments (such as analyzing crossover designs), SAS is superior. For large scale, brute force simulations, I use Fortran. I used to use C, but have found Fortran much easier to use. I've never used MATLAB. If I need the statistical power of R but the speed of Fortran, I will write the time-intensive operations (i.e. loops) in Fortran and call the subroutine from R.
What programming language for statistical inference? R and SAS have each their pros and cons. I think more statisticians need to embrace the fact that lots of great statistical software is available, rather than endlessly bicker about which is superior.
33,448
What programming language for statistical inference?
My preference goes to Python and, perhaps, Java. First, they are real programming languages. Second, they are the most popular languages (TIOBE Index). You can also convert between these languages using several scripting languages. In the past I used the DMelt platform http://jwork.org/dmelt/ to perform statistical calculations, and I was very impressed by the 2D and 3D graphics, which can be easily produced for professional papers. The R package did not impress me with its graphics.
What programming language for statistical inference?
My preference goes to Python, and perhaps, Java. First, they are real programming languages. Second, they are the most popular languages (TIOBE Index). You can also convert between these languages usi
What programming language for statistical inference? My preference goes to Python and, perhaps, Java. First, they are real programming languages. Second, they are the most popular languages (TIOBE Index). You can also convert between these languages using several scripting languages. In the past I used the DMelt platform http://jwork.org/dmelt/ to perform statistical calculations, and I was very impressed by the 2D and 3D graphics, which can be easily produced for professional papers. The R package did not impress me with its graphics.
What programming language for statistical inference? My preference goes to Python, and perhaps, Java. First, they are real programming languages. Second, they are the most popular languages (TIOBE Index). You can also convert between these languages usi
33,449
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average of the variables do not?
Comment in answer format to show simulation: @periwinkle's Comment that the average takes non-integer values should be enough. However, the mean and variance of a Poisson random variable are numerically equal, and this is not true for the mean of independent Poisson random variables. Easy to verify by standard formulas for means and variances of linear combinations. Also illustrated by a simple simulation in R as below: set.seed(827) x1 = rpois(10^4, 5); x2 = rpois(10^4, 10); x3 = rpois(10^4, 20) t = x1+x2+x3; mean(t); var(t) [1] 35.0542 # mean & var both aprx 35 w/in margin of sim err [1] 35.14318 a = t/3; mean(a); var(a) [1] 11.68473 # obviously unequal for average of three [1] 3.904797 $E((X_1+X_2+X_3)/3) = 1/3(5 + 10 + 20) = 35/3,$ $Var((X_1+X_2+X_3)/3) = 1/9(5 + 10 + 20) = 35/9\ne 35/3.$
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average
Comment in answer format to show simulation: @periwinkle's Comment that the average takes non-interger values should be enough. However, the mean and variance of a Poisson random variable are numerica
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average of the variables do not? Comment in answer format to show simulation: @periwinkle's Comment that the average takes non-integer values should be enough. However, the mean and variance of a Poisson random variable are numerically equal, and this is not true for the mean of independent Poisson random variables. Easy to verify by standard formulas for means and variances of linear combinations. Also illustrated by a simple simulation in R as below: set.seed(827) x1 = rpois(10^4, 5); x2 = rpois(10^4, 10); x3 = rpois(10^4, 20) t = x1+x2+x3; mean(t); var(t) [1] 35.0542 # mean & var both aprx 35 w/in margin of sim err [1] 35.14318 a = t/3; mean(a); var(a) [1] 11.68473 # obviously unequal for average of three [1] 3.904797 $E((X_1+X_2+X_3)/3) = 1/3(5 + 10 + 20) = 35/3,$ $Var((X_1+X_2+X_3)/3) = 1/9(5 + 10 + 20) = 35/9\ne 35/3.$
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average Comment in answer format to show simulation: @periwinkle's Comment that the average takes non-interger values should be enough. However, the mean and variance of a Poisson random variable are numerica
33,450
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average of the variables do not?
The Poisson distribution is a probability distribution defined on the set $\mathbb N$ of natural numbers $0,1,2,\dots$. We also say that $\mathbb N$ is the support of the Poisson distribution. This distribution is often used to model experiments whose outcomes represent counts. If $X$ is a random variable following a Poisson distribution with parameter $\lambda$ then for a natural number $k \in \mathbb N$, $$ \mathbb P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}. $$ It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution. Now, assume that you have $N$ independent random variables $X_1, \dots, X_N$ each of them following a Poisson distribution. Their sum $X_1+ \dots + X_N$ will be a natural number and by an induction argument we can show that $X_1+ \dots + X_N$ also follows a Poisson distribution. However, their average, $\frac{X_1 + \dots + X_N}{N}$, does not need to be a natural number. For example, if $N=3$ and $X_1 = 1, X_2 = 0, X_3 = 7$ then $\frac{X_1 +X_2 + X_3}{3} = \frac{8}{3} \approx 2.67.$ Thus the average of Poisson random variables can take non-integer values (but it also can take integer values), which is against the definition of a Poisson distribution. More precisely, the support of the average is not $\mathbb N$ but rather a subset of $\mathbb Q$, the set of rational numbers (which contains $\mathbb N$). This means that the average can't (by definition) follow a Poisson distribution. In the same spirit, the statement above "It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution" is not true if $X$ and $Y$ are not independent anymore. Take for example $Y=X$ (so $X$ and $Y$ are not independent); then the sum $X+Y=2X$ only takes even values and thus $\mathbb P(2X=1) = \mathbb P(2X=3) = \dots = 0$ which is not in agreement with the definition of a Poisson distribution since the quantity $e^{-\lambda} \frac{\lambda^k}{k!}$ is strictly greater than $0$ for all natural numbers $k$. I hope this is clear enough to help.
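A short R check of both points (this simulation is my own illustration, not part of the original answer): the average of independent Poisson draws lands on non-integer values, and $2X$ skips every odd value:

set.seed(1)
x <- matrix(rpois(3 * 10^4, lambda = 4), ncol = 3)   # 10,000 triples of Poisson(4) draws
avg <- rowMeans(x)
mean(avg %% 1 == 0)            # only about a third of the averages are whole numbers
twice <- 2 * x[, 1]
table(twice %% 2)              # only even values occur, so 2X cannot be Poisson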
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average
The Poisson distribution is a probability distribution defined on the set $\mathbb N$ of natural numbers $0,1,2,\dots$. We also say that $\mathbb N$ is the support of the Poisson distribution. This di
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average of the variables do not? The Poisson distribution is a probability distribution defined on the set $\mathbb N$ of natural numbers $0,1,2,\dots$. We also say that $\mathbb N$ is the support of the Poisson distribution. This distribution is often used to model experiments whose outcomes represent counts. If $X$ is a random variable following a Poisson distribution with parameter $\lambda$ then for a natural number $k \in \mathbb N$, $$ \mathbb P(X=k) = e^{-\lambda} \frac{\lambda^k}{k!}. $$ It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution. Now, assume that you have $N$ independent random variables $X_1, \dots, X_N$ each of them following a Poisson distribution. Their sum $X_1+ \dots + X_N$ will be a natural number and by an induction argument we can show that $X_1+ \dots + X_N$ also follows a Poisson distribution. However, their average, $\frac{X_1 + \dots + X_N}{N}$, does not need to be a natural number. For example, if $N=3$ and $X_1 = 1, X_2 = 0, X_3 = 7$ then $\frac{X_1 +X_2 + X_3}{3} = \frac{8}{3} \approx 2.67.$ Thus the average of Poisson random variables can take non-integer values (but it also can take integer values), which is against the definition of a Poisson distribution. More precisely, the support of the average is not $\mathbb N$ but rather a subset of $\mathbb Q$, the set of rational numbers (which contains $\mathbb N$). This means that the average can't (by definition) follow a Poisson distribution. In the same spirit, the statement above "It can be shown that the sum $X+Y$ of two independent Poisson-distributed variables $X,Y$ still follows a Poisson distribution" is not true if $X$ and $Y$ are not independent anymore. Take for example $Y=X$ (so $X$ and $Y$ are not independent); then the sum $X+Y=2X$ only takes even values and thus $\mathbb P(2X=1) = \mathbb P(2X=3) = \dots = 0$ which is not in agreement with the definition of a Poisson distribution since the quantity $e^{-\lambda} \frac{\lambda^k}{k!}$ is strictly greater than $0$ for all natural numbers $k$. I hope this is clear enough to help.
Why does the sum of Poisson distributed random variables have a Poisson distribution but the average The Poisson distribution is a probability distribution defined on the set $\mathbb N$ of natural numbers $0,1,2,\dots$. We also say that $\mathbb N$ is the support of the Poisson distribution. This di
33,451
Why are neural networks smooth functions?
A smooth function has continuous derivatives, up to some specified order. At the very least, this implies that the function is continuously differentiable (i.e. the first derivative exists everywhere and is continuous). More specifically, a function is $C^k$ smooth if the 1st through $k$th order derivatives exist everywhere, and are continuous. Neural nets can be written as compositions of elementary functions (typically affine transformations and nonlinear activation functions, but there are other possibilities). For example, in feedforward networks, each layer implements a function whose output is passed as input to the next layer. Historically, neural nets have tended to be smooth, because the elementary functions used to construct them were themselves smooth. In particular, nonlinear activation functions were typically chosen to be smooth sigmoidal functions like $\tanh$ or the logistic sigmoid function. However, the quote is not generally true. Modern neural nets often use piecewise linear activation functions like the rectified linear (ReLU) activation function and its variants. Although this function is continuous, it's not smooth because the derivative doesn't exist at zero. Therefore, neural nets using these activation functions are not smooth either. In fact, the quote isn't generally true, even historically. The McCulloch-Pitts model was the first artificial neural net. It was composed of thresholded linear units, which output binary values. This is equivalent to using a step function as the activation function. This function isn't even continuous, let alone smooth.
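As a quick numerical illustration (my own, not from the quoted text), the one-sided difference quotients of the ReLU at zero disagree, so no derivative exists there:

relu <- function(x) pmax(0, x)
h <- 10^-(1:6)
(relu(0 + h) - relu(0)) / h       # right-hand quotients: all equal 1
(relu(0 - h) - relu(0)) / (-h)    # left-hand quotients: all equal 0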
Why are neural networks smooth functions?
A smooth function has continuous derivatives, up to some specified order. At the very least, this implies that the function is continuously differentiable (i.e. the first derivative exists everywhere
Why are neural networks smooth functions? A smooth function has continuous derivatives, up to some specified order. At the very least, this implies that the function is continuously differentiable (i.e. the first derivative exists everywhere and is continuous). More specifically, a function is $C^k$ smooth if the 1st through $k$th order derivatives exist everywhere, and are continuous. Neural nets can be written as compositions of elementary functions (typically affine transformations and nonlinear activation functions, but there are other possibilities). For example, in feedforward networks, each layer implements a function whose output is passed as input to the next layer. Historically, neural nets have tended to be smooth, because the elementary functions used to construct them were themselves smooth. In particular, nonlinear activation functions were typically chosen to be smooth sigmoidal functions like $\tanh$ or the logistic sigmoid function. However, the quote is not generally true. Modern neural nets often use piecewise linear activation functions like the rectified linear (ReLU) activation function and its variants. Although this function is continuous, it's not smooth because the derivative doesn't exist at zero. Therefore, neural nets using these activation functions are not smooth either. In fact, the quote isn't generally true, even historically. The McCulloch-Pitts model was the first artificial neural net. It was composed of thresholded linear units, which output binary values. This is equivalent to using a step function as the activation function. This function isn't even continuous, let alone smooth.
Why are neural networks smooth functions? A smooth function has continuous derivatives, up to some specified order. At the very least, this implies that the function is continuously differentiable (i.e. the first derivative exists everywhere
33,452
Why are neural networks smooth functions?
They refer to smoothness as understood in mathematics, i.e. a function that is continuous and differentiable. As explained by Nick S on math.stackexchange.com: A function being smooth is actually a stronger case than a function being continuous. For a function to be continuous, the epsilon delta definition of continuity simply needs to hold, so there are no breaks or holes in the function (in the 2-d case). For a function to be smooth, it has to have continuous derivatives up to a certain order, say k. Some of the answers at math.stackexchange.com mention infinite differentiability, but in machine learning the term is usually used in the looser sense of differentiability up to some finite order, since infinite differentiability is rarely needed for anything. This can be illustrated with the classifier-comparison figure from the scikit-learn site, which shows the decision boundaries of different classifiers. If you look at the decision tree, random forest, or AdaBoost, the decision boundaries are overlaid rectangles, with sharp, rapidly changing boundaries. For the neural network, the boundary is smooth both in the mathematical sense and in the common, everyday sense, where we say that something is smooth, i.e. rather roundish, without sharp edges. Those are decision boundaries of classifiers, but regression analogs of those algorithms work almost the same. A decision tree is an algorithm that outputs a number of automatically generated if ... else ... statements that lead to final nodes where it makes the final prediction, e.g. if age > 25 and gender = male and nationality = German then height = 172 cm. By design, this produces predictions characterized by "jumps", because one node would predict height = 172 cm while another predicts height = 167 cm and there might be nothing in-between. MARS regression is built from piecewise linear units with "breaks", so the regression equation when using a single feature $x$, and two breaks, could be something like below $$ y = b + w_1 \max(0, x - a_1) + w_2 \max(0, x - a_2) $$ notice that the $\max$ function is an element that is continuous, but not differentiable (it is even used as an example in Wikipedia), so the output would not be smooth. Neural networks are built from layers, where each layer is built from neurons like $$ h(x) = \sigma(wx + b) $$ so when the neurons are smooth, the output would be smooth as well. Notice however that if you used a neural network with one hidden layer using two neurons, $\operatorname{ReLU}(x) = \max(0, x)$ activation on the hidden layer, and linear activation on the output layer, then the network could be something like $$ \newcommand{\relu}{\operatorname{ReLU}} y = b + w^{(2)}_1 \relu(w^{(1)}_1 x + a_1) + w^{(2)}_2 \relu(w^{(1)}_2 x + a_2) $$ that is almost the same model as MARS, so it isn't smooth either. There are also other examples where modern neural network architectures do not need to lead to smooth solutions, so the statement is not generally true.
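To make the last point concrete, here is a small R sketch (my own, with arbitrary illustrative weights) of the one-hidden-layer ReLU network written above; its output is piecewise linear with kinks where the hidden units switch on, so it is continuous but not smooth:

relu <- function(x) pmax(0, x)
# arbitrary illustrative parameters, not fitted to any data
b <- 1; w2 <- c(0.5, -2); w1 <- c(1, 1); a <- c(-1, 2)
net <- function(x) b + w2[1] * relu(w1[1] * x + a[1]) + w2[2] * relu(w1[2] * x + a[2])
x <- seq(-4, 4, length.out = 401)
plot(x, net(x), type = "l",
     main = "One-hidden-layer ReLU net: piecewise linear, kinks at x = -2 and x = 1")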
Why are neural networks smooth functions?
They refer to the smoothness, as understood in mathematics, so a function that is continuous and differentiable. As explained by Nick S on math.stackexchange.com: A function being smooth is actually
Why are neural networks smooth functions? They refer to smoothness as understood in mathematics, i.e. a function that is continuous and differentiable. As explained by Nick S on math.stackexchange.com: A function being smooth is actually a stronger case than a function being continuous. For a function to be continuous, the epsilon delta definition of continuity simply needs to hold, so there are no breaks or holes in the function (in the 2-d case). For a function to be smooth, it has to have continuous derivatives up to a certain order, say k. Some of the answers at math.stackexchange.com mention infinite differentiability, but in machine learning the term is usually used in the looser sense of differentiability up to some finite order, since infinite differentiability is rarely needed for anything. This can be illustrated with the classifier-comparison figure from the scikit-learn site, which shows the decision boundaries of different classifiers. If you look at the decision tree, random forest, or AdaBoost, the decision boundaries are overlaid rectangles, with sharp, rapidly changing boundaries. For the neural network, the boundary is smooth both in the mathematical sense and in the common, everyday sense, where we say that something is smooth, i.e. rather roundish, without sharp edges. Those are decision boundaries of classifiers, but regression analogs of those algorithms work almost the same. A decision tree is an algorithm that outputs a number of automatically generated if ... else ... statements that lead to final nodes where it makes the final prediction, e.g. if age > 25 and gender = male and nationality = German then height = 172 cm. By design, this produces predictions characterized by "jumps", because one node would predict height = 172 cm while another predicts height = 167 cm and there might be nothing in-between. MARS regression is built from piecewise linear units with "breaks", so the regression equation when using a single feature $x$, and two breaks, could be something like below $$ y = b + w_1 \max(0, x - a_1) + w_2 \max(0, x - a_2) $$ notice that the $\max$ function is an element that is continuous, but not differentiable (it is even used as an example in Wikipedia), so the output would not be smooth. Neural networks are built from layers, where each layer is built from neurons like $$ h(x) = \sigma(wx + b) $$ so when the neurons are smooth, the output would be smooth as well. Notice however that if you used a neural network with one hidden layer using two neurons, $\operatorname{ReLU}(x) = \max(0, x)$ activation on the hidden layer, and linear activation on the output layer, then the network could be something like $$ \newcommand{\relu}{\operatorname{ReLU}} y = b + w^{(2)}_1 \relu(w^{(1)}_1 x + a_1) + w^{(2)}_2 \relu(w^{(1)}_2 x + a_2) $$ that is almost the same model as MARS, so it isn't smooth either. There are also other examples where modern neural network architectures do not need to lead to smooth solutions, so the statement is not generally true.
Why are neural networks smooth functions? They refer to the smoothness, as understood in mathematics, so a function that is continuous and differentiable. As explained by Nick S on math.stackexchange.com: A function being smooth is actually
33,453
Why are neural networks smooth functions?
When the book was written, nobody was using ReLU; it is not even mentioned in the book. All activations were smooth sigmoids. In that case the neural net output is indeed a smooth function of its parameters, such as weights and biases. That is how you make backpropagation work nicely, but slowly. Once ReLU came into the picture, derivative calculations became much faster, because the network became piecewise linear instead of smoothly nonlinear.
Why are neural networks smooth functions?
When the book was written nobody was using relu . It’s not even mentioned in the book. All activations were smooth sigmoids. In this case neural net output is indeed a smooth function of its parameter
Why are neural networks smooth functions? When the book was written, nobody was using ReLU; it is not even mentioned in the book. All activations were smooth sigmoids. In that case the neural net output is indeed a smooth function of its parameters, such as weights and biases. That is how you make backpropagation work nicely, but slowly. Once ReLU came into the picture, derivative calculations became much faster, because the network became piecewise linear instead of smoothly nonlinear.
Why are neural networks smooth functions? When the book was written nobody was using relu . It’s not even mentioned in the book. All activations were smooth sigmoids. In this case neural net output is indeed a smooth function of its parameter
33,454
Why GLMs predict the mean and not the mode?
The goal of maximum likelihood fitting is to determine the parameters of some distribution that best fit the data - and more generally, how said parameters may vary with covariates. In the case of GLMs, we want to determine the parameters $\theta$ of some exponential family distribution, and how they are a function of some covariates $X$. For any probability distribution in the overdispersed exponential family, the mean $\mu$ is guaranteed to be related to the canonical exponential family parameter $\mathbf{\theta}$ through the canonical link function, $\theta = g(\mu)$. We can even determine a general formula for $g$, and typically $g$ is invertible as well. If we simply set $\mu = g^{-1}(\theta)$ and $\theta = X\beta$, we automatically get a model for how $\mu$ and $\theta$ vary with $X$, no matter what distribution we are dealing with, and that model can be easily and reliably fit to data by convex optimization. Matt's answer shows how it works for the Bernoulli distribution, but the real magic is that it works for every distribution in the family. The mode does not enjoy these properties. In fact, as Cliff AB points out, the mode may not even have a bijective relationship with the distribution parameter, so inference from the mode is of very limited power. Take the Bernoulli distribution, for example. Its mode is either 0 or 1, and knowing the mode only tells you whether $p$, the probability of 1, is greater or less than 1/2. In contrast, the mean tells you exactly what $p$ is. Now, to clarify some confusion in the question: maximum likelihood is not about finding the mode of a distribution, because the likelihood is not the same function as the distribution. The likelihood involves your model distribution in its formula, but that's where the similarities end. The likelihood function $L(\theta)$ takes a parameter value $\theta$ as input, and tells you how "likely" your entire dataset is, given the model distribution has that $\theta$. The model distribution $f_\theta(y)$ depends on $\theta$, but as a function, it takes a value $y$ as input and tells you how often a random sample from that distribution will equal $y$. The maximum of $L(\theta)$ and the mode of $f_\theta(y)$ are not the same thing. Maybe it helps to see the likelihood's formula. In the case of IID data $y_1,y_2,\ldots,y_n$, we have $$L(\theta) = \prod_{i=1}^n f_\theta(y_i)$$ The values of $y_i$ are all fixed - they are the values from your data. Maximum likelihood is finding the $\theta$ that maximizes $L(\theta)$. Finding the mode of the distribution would be finding the $y$ that maximizes $f_\theta(y)$, which is not what we want: $y$ is fixed in the likelihood, not a variable. So finding the maximum of the likelihood function is not, in general, the same as finding the mode of the model distribution. (It is the mode of another distribution, if you ask an objective Bayesian, but that's a very different story!)
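A small R illustration (with a made-up 0/1 sample, not from any real dataset) of that last distinction: the likelihood $L(p)$ is maximised at the sample mean, while the mode of the fitted Bernoulli distribution is simply 0 or 1:

y <- c(1, 0, 1, 1, 0, 1, 1, 0, 1, 1)                     # hypothetical 0/1 data, 7 ones out of 10
loglik <- function(p) sum(dbinom(y, size = 1, prob = p, log = TRUE))
p.grid <- seq(0.01, 0.99, by = 0.01)
p.hat  <- p.grid[which.max(sapply(p.grid, loglik))]
p.hat                                                     # 0.7: the maximiser of L(p)
mean(y)                                                   # also 0.7: the MLE equals the sample mean
# The mode of Bernoulli(0.7), by contrast, is just the more probable value, 1.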
Why GLMs predict the mean and not the mode?
The goal of maximum likelihood fitting is to determine the parameters of some distribution that best fit the data - and more generally, how said parameters may vary with covariates. In the case of GLM
Why GLMs predict the mean and not the mode? The goal of maximum likelihood fitting is to determine the parameters of some distribution that best fit the data - and more generally, how said parameters may vary with covariates. In the case of GLMs, we want to determine the parameters $\theta$ of some exponential family distribution, and how they are a function of some covariates $X$. For any probability distribution in the overdispersed exponential family, the mean $\mu$ is guaranteed to be related to the canonical exponential family parameter $\mathbf{\theta}$ through the canonical link function, $\theta = g(\mu)$. We can even determine a general formula for $g$, and typically $g$ is invertible as well. If we simply set $\mu = g^{-1}(\theta)$ and $\theta = X\beta$, we automatically get a model for how $\mu$ and $\theta$ vary with $X$, no matter what distribution we are dealing with, and that model can be easily and reliably fit to data by convex optimization. Matt's answer shows how it works for the Bernoulli distribution, but the real magic is that it works for every distribution in the family. The mode does not enjoy these properties. In fact, as Cliff AB points out, the mode may not even have a bijective relationship with the distribution parameter, so inference from the mode is of very limited power. Take the Bernoulli distribution, for example. Its mode is either 0 or 1, and knowing the mode only tells you whether $p$, the probability of 1, is greater or less than 1/2. In contrast, the mean tells you exactly what $p$ is. Now, to clarify some confusion in the question: maximum likelihood is not about finding the mode of a distribution, because the likelihood is not the same function as the distribution. The likelihood involves your model distribution in its formula, but that's where the similarities end. The likelihood function $L(\theta)$ takes a parameter value $\theta$ as input, and tells you how "likely" your entire dataset is, given the model distribution has that $\theta$. The model distribution $f_\theta(y)$ depends on $\theta$, but as a function, it takes a value $y$ as input and tells you how often a random sample from that distribution will equal $y$. The maximum of $L(\theta)$ and the mode of $f_\theta(y)$ are not the same thing. Maybe it helps to see the likelihood's formula. In the case of IID data $y_1,y_2,\ldots,y_n$, we have $$L(\theta) = \prod_{i=1}^n f_\theta(y_i)$$ The values of $y_i$ are all fixed - they are the values from your data. Maximum likelihood is finding the $\theta$ that maximizes $L(\theta)$. Finding the mode of the distribution would be finding the $y$ that maximizes $f_\theta(y)$, which is not what we want: $y$ is fixed in the likelihood, not a variable. So finding the maximum of the likelihood function is not, in general, the same as finding the mode of the model distribution. (It is the mode of another distribution, if you ask an objective Bayesian, but that's a very different story!)
Why GLMs predict the mean and not the mode? The goal of maximum likelihood fitting is to determine the parameters of some distribution that best fit the data - and more generally, how said parameters may vary with covariates. In the case of GLM
33,455
Why GLMs predict the mean and not the mode?
There are two things to argue here: The fact that a glm attempts to predict $y$ as the mean of a conditional distribution, and the fact that it estimates its parameters $\beta$ by maximum likelihood, are consistent. Estimating the parameters by maximum likelihood is not determining the mode of any distribution. At least not in the classical formulation of a glm. Let's take the simplest non-trivial glm as a working example, the logistic model. In logistic regression we have a response $y$ which is 0-1 valued. We postulate that $y$ is Bernoulli distributed conditional on our data $$ y \mid X \sim Bernoulli(p(X)) $$ And we attempt to estimate the mean of this conditional distribution (which in this case is just $p$) by linking it to a linear function of $X$ $$ \log\left(\frac{p}{1-p}\right) = X \beta $$ Pausing and reflecting, we see in this case that it is natural to want to know $p$, which is the mean of a conditional distribution. In the glm setup, $p$ is not estimated directly; it is $\beta$ that the estimation procedure targets. To get at $\beta$ we use maximum likelihood. The probability of observing a datapoint $y$ from the conditional Bernoulli distribution, given the value of $X$ observed and a specific set of parameters $\beta$, is $$ P \left( y \mid X, \beta \right) = p^y (1-p)^{1-y} $$ where $p$ is a function of $\beta$ and $X$ through the linking relationship. Notice that it is $y$ that is sampled from a probability distribution here, not $\beta$. To apply maximum likelihood, we flip this around into a function of $\beta$, considering both $X$ and $y$ as fixed and observed: $$ L(\beta) = p^y (1-p)^{1-y} $$ But $L$ is not a density function, it is a likelihood. When you maximize the likelihood you are not estimating the mode of a distribution because there simply is no distribution to, well, mode-ize. You can produce a density from $L$ by providing a prior distribution on the parameters $\beta$ and using Bayes's rule, but in the classical glm formulation, this is not done.
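A compact R sketch (simulated data, my own illustration) showing that the fitting machinery targets $\beta$, from which the conditional mean $p(X)$ is then recovered through the inverse link:

set.seed(42)
n <- 500
x <- rnorm(n)
p.true <- plogis(-0.5 + 1.5 * x)            # true conditional mean
y <- rbinom(n, size = 1, prob = p.true)     # Bernoulli responses
fit <- glm(y ~ x, family = binomial())
coef(fit)                                    # estimates of beta, the actual optimisation target
p.hat <- predict(fit, type = "response")     # fitted conditional means p(X) = logit^{-1}(X beta)
head(cbind(p.true, p.hat))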
Why GLMs predict the mean and not the mode?
There are two things to argue here: The facts that a glm attempts the predict $y$ as the mean of a conditional distribution, and estimates its parameters $\beta$ by maximum likelihood are consistent.
Why GLMs predict the mean and not the mode? There are two things to argue here: The fact that a glm attempts to predict $y$ as the mean of a conditional distribution, and the fact that it estimates its parameters $\beta$ by maximum likelihood, are consistent. Estimating the parameters by maximum likelihood is not determining the mode of any distribution. At least not in the classical formulation of a glm. Let's take the simplest non-trivial glm as a working example, the logistic model. In logistic regression we have a response $y$ which is 0-1 valued. We postulate that $y$ is Bernoulli distributed conditional on our data $$ y \mid X \sim Bernoulli(p(X)) $$ And we attempt to estimate the mean of this conditional distribution (which in this case is just $p$) by linking it to a linear function of $X$ $$ \log\left(\frac{p}{1-p}\right) = X \beta $$ Pausing and reflecting, we see in this case that it is natural to want to know $p$, which is the mean of a conditional distribution. In the glm setup, $p$ is not estimated directly; it is $\beta$ that the estimation procedure targets. To get at $\beta$ we use maximum likelihood. The probability of observing a datapoint $y$ from the conditional Bernoulli distribution, given the value of $X$ observed and a specific set of parameters $\beta$, is $$ P \left( y \mid X, \beta \right) = p^y (1-p)^{1-y} $$ where $p$ is a function of $\beta$ and $X$ through the linking relationship. Notice that it is $y$ that is sampled from a probability distribution here, not $\beta$. To apply maximum likelihood, we flip this around into a function of $\beta$, considering both $X$ and $y$ as fixed and observed: $$ L(\beta) = p^y (1-p)^{1-y} $$ But $L$ is not a density function, it is a likelihood. When you maximize the likelihood you are not estimating the mode of a distribution because there simply is no distribution to, well, mode-ize. You can produce a density from $L$ by providing a prior distribution on the parameters $\beta$ and using Bayes's rule, but in the classical glm formulation, this is not done.
Why GLMs predict the mean and not the mode? There are two things to argue here: The facts that a glm attempts the predict $y$ as the mean of a conditional distribution, and estimates its parameters $\beta$ by maximum likelihood are consistent.
33,456
Why GLMs predict the mean and not the mode?
Thanks for all the comments and answers. Although none of them is 100% the answer to my question, all of them helped me to see through the apparent contradiction. Thus, I decided to formulate the answer myself; I think this is a summary of all the ideas involved in the comments and answers: Maximization of the likelihood through the data PDF $f(y; \theta, \phi)$ in GLMs is not related to the mode of $f$ (but to its mean) for two reasons: When you maximize $f(y; \theta, \phi)$ you do not consider $f$ as a function of $y$, but as a function of $\boldsymbol\beta$ (the parameters of the linear model). More specifically, when you differentiate $f$ to obtain a system of equations that determines $\boldsymbol\beta$, you do not do it with respect to $y$; you do it with respect to $\boldsymbol\beta$. Thus, the maximization process gives you the $\boldsymbol\beta$ that maximizes $f$. An optimal $\boldsymbol\beta$, and not an optimal $y$ (which, indeed, would be the mode), is the output of the maximization process. Additionally, in the maximization process, the mean, $\boldsymbol\mu$, is a function of $\boldsymbol\beta$. Therefore, through the maximization process we also obtain the optimal $\boldsymbol\mu$.
Why GLMs predict the mean and not the mode?
Thanks for all the comments and answers. Although in none of them is 100% the answer to my question, all of them helped me to see through the apparent contradiction. Thus, I decided to formulate the a
Why GLMs predict the mean and not the mode? Thanks for all the comments and answers. Although none of them is 100% the answer to my question, all of them helped me to see through the apparent contradiction. Thus, I decided to formulate the answer myself; I think this is a summary of all the ideas involved in the comments and answers: Maximization of the likelihood through the data PDF $f(y; \theta, \phi)$ in GLMs is not related to the mode of $f$ (but to its mean) for two reasons: When you maximize $f(y; \theta, \phi)$ you do not consider $f$ as a function of $y$, but as a function of $\boldsymbol\beta$ (the parameters of the linear model). More specifically, when you differentiate $f$ to obtain a system of equations that determines $\boldsymbol\beta$, you do not do it with respect to $y$; you do it with respect to $\boldsymbol\beta$. Thus, the maximization process gives you the $\boldsymbol\beta$ that maximizes $f$. An optimal $\boldsymbol\beta$, and not an optimal $y$ (which, indeed, would be the mode), is the output of the maximization process. Additionally, in the maximization process, the mean, $\boldsymbol\mu$, is a function of $\boldsymbol\beta$. Therefore, through the maximization process we also obtain the optimal $\boldsymbol\mu$.
Why GLMs predict the mean and not the mode? Thanks for all the comments and answers. Although in none of them is 100% the answer to my question, all of them helped me to see through the apparent contradiction. Thus, I decided to formulate the a
33,457
Log or square-root transformation for ARIMA
Transformations are like drugs! Some are good for you and some aren't. Haphazard selection of transformations should be studiously avoided. a) One of the requirements for performing valid statistical tests is that the variance of the errors from the proposed model must not be proven to be non-constant. If the variance of the errors changes at discrete points in time then one has recourse to Generalized Least Squares or GLM. b) If the variance of the errors is linearly relatable to the level of the observed series then a Logarithmic Transformation might be appropriate. If the square root of the variance of the errors is linearly relatable to the level of the original series then a Square Root transformation is appropriate. More generally, the appropriate power transformation is found via the Box-Cox test, where the optimal lambda is identified. Note that the Box-Cox test is universally applicable and doesn't solely require time series or spatial data. All of the above (a and b) require that the mean of the errors cannot be proven to differ significantly from zero at all points. If your data is not time series or spatial in nature then the only anomaly you can detect is a pulse. However, if your data is time series or spatial then Level Shifts, Seasonal Pulses and/or Local Time Trends might be suggested to render the mean value of the error term 0.0 everywhere, or at least not significantly different from 0.0. In my opinion one should never willy-nilly transform the data unless one has to in order to satisfy (in part) the Gaussian assumptions. Some econometricians take logs for the simple and simply wrong reason of obtaining direct estimates of elasticities, rather than assessing the % change in Y for a % change in x from the best model. Now one caveat: if one knows from theory, or at least thinks one knows from theory, that transformations are necessary, i.e. proven by previous well-documented research, then by all means follow that paradigm, as it may prove to be more beneficial than the empirical procedures I have laid out here. In closing, use the original data, minimize any warping of the results by mindless transformations, test all assumptions and sleep well at night. Statisticians, like doctors, should never do harm to their data/patients by providing drugs/transformations that have nasty and unwarranted side-effects. Hope This Helps. Data analysis using time series techniques on time series data: a plot of the series suggests structural change, and the Chow Test yielded a significant break point [plot omitted]. Analysis of the most recent 147 values starting at 1999/5 yielded a model, a residual plot and a residual ACF [plots omitted]. The forecast plot and the final model [omitted] have all parameters statistically significant and no unwarranted power transformations, which often unfortunately lead to wildly explosive and unrealistic forecasts. Power transforms are justified when it is proven via a Box-Cox test that the variability of the ERRORS is related to the expected value, as detailed here. N.B. that the variability of the original series is not used, but the variability of model errors.
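For the Box-Cox step, a minimal R sketch using MASS::boxcox on a simulated positive, trending series (my own illustration; as the answer stresses, in practice the test should be applied to the variability of the errors of a properly identified model, not to the raw series):

library(MASS)
set.seed(123)
tt <- 1:120
y <- exp(0.02 * tt + rnorm(120, sd = 0.2))           # simulated positive series with a trend
bc <- boxcox(lm(y ~ tt), lambda = seq(-1, 1, by = 0.05), plotit = FALSE)
bc$x[which.max(bc$y)]                                 # profile-likelihood lambda, close to 0 (log)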
Log or square-root transformation for ARIMA
Transformations are like drugs ! Some are good for you and some aren't !. Haphazard selection of transformations should be studiously avoided. a) One of the requirements in order to perform valid sta
Log or square-root transformation for ARIMA Transformations are like drugs! Some are good for you and some aren't. Haphazard selection of transformations should be studiously avoided. a) One of the requirements for performing valid statistical tests is that the variance of the errors from the proposed model must not be proven to be non-constant. If the variance of the errors changes at discrete points in time then one has recourse to Generalized Least Squares or GLM. b) If the variance of the errors is linearly relatable to the level of the observed series then a Logarithmic Transformation might be appropriate. If the square root of the variance of the errors is linearly relatable to the level of the original series then a Square Root transformation is appropriate. More generally, the appropriate power transformation is found via the Box-Cox test, where the optimal lambda is identified. Note that the Box-Cox test is universally applicable and doesn't solely require time series or spatial data. All of the above (a and b) require that the mean of the errors cannot be proven to differ significantly from zero at all points. If your data is not time series or spatial in nature then the only anomaly you can detect is a pulse. However, if your data is time series or spatial then Level Shifts, Seasonal Pulses and/or Local Time Trends might be suggested to render the mean value of the error term 0.0 everywhere, or at least not significantly different from 0.0. In my opinion one should never willy-nilly transform the data unless one has to in order to satisfy (in part) the Gaussian assumptions. Some econometricians take logs for the simple and simply wrong reason of obtaining direct estimates of elasticities, rather than assessing the % change in Y for a % change in x from the best model. Now one caveat: if one knows from theory, or at least thinks one knows from theory, that transformations are necessary, i.e. proven by previous well-documented research, then by all means follow that paradigm, as it may prove to be more beneficial than the empirical procedures I have laid out here. In closing, use the original data, minimize any warping of the results by mindless transformations, test all assumptions and sleep well at night. Statisticians, like doctors, should never do harm to their data/patients by providing drugs/transformations that have nasty and unwarranted side-effects. Hope This Helps. Data analysis using time series techniques on time series data: a plot of the series suggests structural change, and the Chow Test yielded a significant break point [plot omitted]. Analysis of the most recent 147 values starting at 1999/5 yielded a model, a residual plot and a residual ACF [plots omitted]. The forecast plot and the final model [omitted] have all parameters statistically significant and no unwarranted power transformations, which often unfortunately lead to wildly explosive and unrealistic forecasts. Power transforms are justified when it is proven via a Box-Cox test that the variability of the ERRORS is related to the expected value, as detailed here. N.B. that the variability of the original series is not used, but the variability of model errors.
Log or square-root transformation for ARIMA Transformations are like drugs ! Some are good for you and some aren't !. Haphazard selection of transformations should be studiously avoided. a) One of the requirements in order to perform valid sta
33,458
Log or square-root transformation for ARIMA
This question is answered beautifully by means of a spread-versus-level plot: a cube root transformation will stabilize the spreads of the data, providing a useful basis for further exploration and analysis. The data show a clear seasonality: plot(y) Take advantage of this by slicing the data into annual (or possibly biennial) groups. Within each group compute resistant descriptors of their typical value and their spread. Good choices are based on the 5-letter summary, consisting of the median (which splits the data into upper and lower halves), the medians of the two halves (the "hinges" or "fourths"), and the extremes. Because the extremes are not resistant to outliers, use the difference of the hinges to represent the spread. (This "fourth-spread" is the length of a box in a properly constructed box-and-whisker plot.) spread <- function(x) { n <- length(x) n.med <- (n + 1)/2 n.fourth <- (floor(n.med) + 1)/2 v <- sort(x)[c(floor(n.fourth), ceiling(n.fourth), floor(n+1 - n.fourth), ceiling(n+1 - n.fourth))] return( v %*% c(-1,-1,1,1)/2 ) } years <- floor((1:length(y) - 1) / 12) z <- split(y, years) boxplot(z, names=(min(years):max(years))+1976, ylab="y") The boxplots clearly get longer over time as the level of the data rises. This heteroscedasticity complicates analyses and interpretations. Often a power transformation can reduce or remove the heteroscedasticity altogether. A spread versus level plot shows whether a power transformation (which includes the logarithm) will be helpful for stabilizing the spread within the groups and suggests an appropriate value for the power: it is directly related to the slope of the spread-vs.-level plot on log-log scales. z.med <- unlist(lapply(z, median)) z.spread <- unlist(lapply(z, spread)) fit <- lm(log(z.spread) ~ log(z.med)) plot(log(z.med), log(z.spread), xlab="Log Level", ylab="Log Spread", main="Spread vs. Level Plot") abline(fit, lwd=2, col="Red") This plot shows good linearity and no large outliers, attesting to a fairly regular relationship between spread and level throughout the time period. When the fitted slope is $p$, the power to use is $\lambda=1-p$. Upon applying the suggested power transformation, the spread is (approximately) constant regardless of the level (and therefore regardless of the year): lambda <- 1 - coef(fit)[2] boxplot(lapply(z, function(u) u^lambda), names=(min(years):max(years))+1976, ylab=paste("y^", round(lambda, 2), sep=""), main="Boxplots of Re-expressed Values") plot(y^lambda, main= "Re-expressed Values", ylab=paste("y^", round(lambda, 2), sep="")) Often, powers that are reciprocals of small integers have useful or natural interpretations. Here, $\lambda = 0.32$ is so close to $1/3$ that it may as well be the cube root. In practice, one might choose to use the cube root, or perhaps round it to the even simpler fraction $1/2$ and take the square root, or sometimes go all the way to the logarithm (which corresponds to $\lambda = 0$). Conclusions In this example, the spread-versus-level plot (by virtue of its approximate linearity and lack of outliers) has shown that a power transformation will effectively stabilize the spread of the data and has automatically suggested the power to use. Although powers can be computed using various methods, none of the standard methods provides the insight or diagnostic power afforded by the spread-versus-level plot. This should be in the toolkit of every data analyst. References Tukey, John W. Exploratory Data Analysis. Addison-Wesley, 1977. 
Hoaglin, David C., Frederick Mosteller, and John W. Tukey, Understanding Robust and Exploratory Data Analysis. John Wiley and Sons, 1983.
Log or square-root transformation for ARIMA
This question is answered beautifully by means of a spread-versus-level plot: a cube root transformation will stabilize the spreads of the data, providing a useful basis for further exploration and an
Log or square-root transformation for ARIMA This question is answered beautifully by means of a spread-versus-level plot: a cube root transformation will stabilize the spreads of the data, providing a useful basis for further exploration and analysis. The data show a clear seasonality: plot(y) Take advantage of this by slicing the data into annual (or possibly biennial) groups. Within each group compute resistant descriptors of their typical value and their spread. Good choices are based on the 5-letter summary, consisting of the median (which splits the data into upper and lower halves), the medians of the two halves (the "hinges" or "fourths"), and the extremes. Because the extremes are not resistant to outliers, use the difference of the hinges to represent the spread. (This "fourth-spread" is the length of a box in a properly constructed box-and-whisker plot.) spread <- function(x) { n <- length(x) n.med <- (n + 1)/2 n.fourth <- (floor(n.med) + 1)/2 y <- sort(x)[c(floor(n.fourth), ceiling(n.fourth), floor(n+1 - n.fourth), ceiling(n+1 - n.fourth))] return( y %*% c(-1,-1,1,1)/2 ) } years <- floor((1:length(x) - 1) / 12) z <- split(x, years) boxplot(z, names=(min(years):max(years))+1976, ylab="y") The boxplots clearly get longer over time as the level of the data rises. This heteroscedasticity complicates analyses and interpretations. Often a power transformation can reduce or remove the heteroscedasticity altogether. A spread versus level plot shows whether a power transformation (which includes the logarithm) will be helpful for stabilizing the spread within the groups and suggests an appropriate value for the power: it is directly related to the slope of the spread-vs.-level plot on log-log scales. z.med <- unlist(lapply(z, median)) z.spread <- unlist(lapply(z, spread)) fit <- lm(log(z.spread) ~ log(z.med)) plot(log(z.med), log(z.spread), xlab="Log Level", ylab="Log Spread", main="Spread vs. Level Plot") abline(fit, lwd=2, col="Red") This plot shows good linearity and no large outliers, attesting to a fairly regular relationship between spread and level throughout the time period. When the fitted slope is $p$, the power to use is $\lambda=1-p$. Upon applying the suggested power transformation, the spread is (approximately) constant regardless of the level (and therefore regardless of the year): lambda <- 1 - coef(fit)[2] boxplot(lapply(z, function(u) u^lambda), names=(min(years):max(years))+1976, ylab=paste("y^", round(lambda, 2), sep=""), main="Boxplots of Re-expressed Values") plot(y^lambda, main= "Re-expressed Values", ylab=paste("y^", round(lambda, 2), sep="")) Often, powers that are reciprocals of small integers have useful or natural interpretations. Here, $\lambda = 0.32$ is so close to $1/3$ that it may as well be the cube root. In practice, one might choose to use the cube root, or perhaps round it to the even simpler fraction $1/2$ and take the square root, or sometimes go all the way to the logarithm (which corresponds to $\lambda = 0$). Conclusions In this example, the spread-versus-level plot (by virtue of its approximate linearity and lack of outliers) has shown that a power transformation will effectively stabilize the spread of the data and has automatically suggested the power to use. Although powers can be computed using various methods, none of the standard methods provides the insight or diagnostic power afforded by the spread-versus-level plot. This should be in the toolkit of every data analyst. References Tukey, John W. Exploratory Data Analysis. 
Addison-Wesley, 1977. Hoaglin, David C., Frederick Mosteller, and John W. Tukey, Understanding Robust and Exploratory Data Analysis. John Wiley and Sons, 1983.
Log or square-root transformation for ARIMA This question is answered beautifully by means of a spread-versus-level plot: a cube root transformation will stabilize the spreads of the data, providing a useful basis for further exploration and an
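A small convenience wrapper around the spread-versus-level recipe in the answer above, reusing its spread() function. The wrapper name spread_level_lambda, the assumption of 12 observations per group, and the series name y are my own illustrative choices, not part of the original answer.

# Package the group / spread / slope steps into one call; power = 1 - slope.
spread_level_lambda <- function(y, per = 12) {
  groups   <- floor((seq_along(y) - 1) / per)
  z        <- split(y, groups)
  z.med    <- unlist(lapply(z, median))
  z.spread <- unlist(lapply(z, spread))   # spread() as defined in the answer
  fit      <- lm(log(z.spread) ~ log(z.med))
  unname(1 - coef(fit)[2])
}

lambda <- spread_level_lambda(y)
plot(y^lambda, type = "l", ylab = paste0("y^", round(lambda, 2)))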
33,459
Log or square-root transformation for ARIMA
@digdeep, as usual @whuber provided an excellent and comprehensive answer from a statistical viewpoint. I'm not trained in statistics, so take this response with a grain of salt. I have used the approach below on real-world data in my practice, so I hope this is helpful. I'll try to provide a non-statistician's view of transformation of time series data for ARIMA modeling. There is no straightforward answer. Since you are interested in knowing which transformation to use, it might be helpful to review why we do transformations. We do transformations for 3 main reasons, and there might be a ton of other reasons: Transformation makes the data's linear structure more usable for ARIMA modeling. If the variance in the data is increasing or changing, then transforming the data might help stabilize that variance. Transformation also makes the errors/residuals in the ARIMA model normally distributed, which is a requirement of the Box-Jenkins approach to ARIMA modeling. There are several data transformations including Box-Cox, log, square root, quartic and inverse, and the other transformations mentioned by @irishstat. As with all statistical methods, there is no good guidance/answer on which transformation to select for a particular dataset. As the famous statistician G.E.P. Box said, "All models are wrong but some are useful"; this applies to transformations as well: "All transformations are wrong but some are useful". The best way to choose a transformation is to experiment. Since you have a long time series, I would hold out the last 12-24 months, build a model using each of the transformations, and see if a particular transformation is helpful at predicting your out-of-sample data accurately. Also examine the residuals for the normality assumption of your model. Hopefully this will guide you in choosing an appropriate transformation. You might also want to compare this with the non-transformed data and see if the transformation helped your model. @whuber's excellent graphical representation of your data motivated me to explore this data graphically using a decomposition method. I might add, R has an excellent decomposition method called STL which is helpful in identifying patterns that you would normally not notice. For a dataset like this, STL decomposition is helpful not only in selecting an appropriate method for analyzing your data, it might also help in identifying anomalies such as outliers, level shifts, changes in seasonality, etc. See below. Notice that the remainder (irregular) component of the data looks like there is stochastic seasonality and the variation is not random; there appears to be a pattern. See also the change in level of the trend component after 2004/2005 that @whuber is referencing. Hopefully this is helpful. g <- stl(y, s.window = "periodic") plot(g)
Log or square-root transformation for ARIMA
@digdeep, as usual @whuber provided an excellent and comprehensive answer from a statistical view point. I'm not trained in statistics, so take this response with a grain of salt. I have used the resp
Log or square-root transformation for ARIMA @digdeep, as usual @whuber provided an excellent and comprehensive answer from a statistical view point. I'm not trained in statistics, so take this response with a grain of salt. I have used the response below in my real world practice data, so I hope this is helpful. I'll try to provide a non statistician view of transformation of time series data for Arima modeling. There is no straightforward answer. Since you are interested in knowing which transformation to use, it might be helpful to review why we do transformation.We do transformation for 3 main reasons and there might be ton of other reasons: Transformation makes the data's linear structure more usable for ARIMA modeling. If variance in the data is increasing or changing then transformation of data might be helpful to stabilize the variance in data. Transformation also makes the errors/residuals in ARIMA model normally distributed which is a requirement in ARIMA modeling proposed by Box-Jenkins. There are several data transformations including Box-Cox, Log, square root, quartic and inverse and other transformations mentioned @irishstat. As with all the statistical methods there is no good guidance/answer on which transformation to select for a particular dataset. As the famous statistician G.E.P Box said "All models are wrong but some are useful", this would apply to the transformations as well "All transformations are wrong but some are useful". The best way to choose a transformation is to experiment. Since you have a long time series, I would hold out the last 12 - 24 months, and build a model using all the transformation and see if a particular transformation is helpful at predicting your out of sample data accurately. Also examine the residuals for normality assumption of your model. Hopefully, this would guide you in choosing an appropriate transformation. You might also want to compare this with non-transformed data and see if the transformation helped your model. @whuber's excellent graphical representation of your data motivated me to explore this data graphically using a decomposition method. I might add, R has an excellent decomposition method called STL which would be helpful in identifying patterns that you would normally not notice. For a dataset like this, STL decomposition is helpful in not only selecting an appropriate method for analyzing your data, it might also be helpful in identifying anomalies such as outliers/level shift/change in seasonality etc., See below. Notice that the remainder (irregular) component of the data, looks like there is stochastic seasonality and the variation is not random, there appears to be a pattern. See also change in level of trend component after 2004/2005 that @whuber is refrencing. Hopefully this is helpful. g <- stl(y,s.window = "periodic") plot(g)
Log or square-root transformation for ARIMA @digdeep, as usual @whuber provided an excellent and comprehensive answer from a statistical view point. I'm not trained in statistics, so take this response with a grain of salt. I have used the resp
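A minimal sketch of the hold-out comparison suggested above: fit the same ARIMA search under several transformations and compare out-of-sample accuracy. It assumes y is a monthly ts object and uses the forecast package; the 24-month holdout and the particular lambda values are illustrative choices, not recommendations.

library(forecast)

h     <- 24                                         # hold out the last 24 months
train <- window(y, end   = time(y)[length(y) - h])
test  <- window(y, start = time(y)[length(y) - h + 1])

# lambda = NULL means no transformation, 0 means log, 0.5 means square root.
lambdas <- list(none = NULL, log = 0, sqrt = 0.5)
for (nm in names(lambdas)) {
  fit <- auto.arima(train, lambda = lambdas[[nm]])
  fc  <- forecast(fit, h = h)
  cat(nm, "RMSE:", accuracy(fc, test)["Test set", "RMSE"], "\n")
}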
33,460
Unbiased estimator of variance of binomial variable
This answer cannot be correct. An estimator cannot depend on the values of the parameters: since they are unknown, you could not compute the estimate. An unbiased estimator of the variance for every distribution (with finite second moment) is $$ S^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i - \bar{y})^2.$$ By expanding the square and using the definition of the average $\bar{y}$, you can see that $$ S^2 = \frac{1}{n} \sum_{i=1}^n y_i^2 - \frac{2}{n(n-1)}\sum_{i< j}y_iy_j,$$ where the sum runs over the $n(n-1)/2$ unordered pairs, so if the variables are IID, $$E(S^2) = \frac{1}{n} nE(y_j^2) - \frac{2}{n(n-1)} \frac{n(n-1)}{2} E(y_j)^2 = E(y_j^2) - E(y_j)^2 = \operatorname{Var}(y_j). $$ As you see we do not need the hypothesis that the variables have a binomial distribution (except implicitly in the fact that the variance exists) in order to derive this estimator.
Unbiased estimator of variance of binomial variable
This answer cannot be correct. An estimator cannot depend on the values of the parameters: since they are unknown it would mean that you cannot compute the estimate. An unbiased estimator of the varia
Unbiased estimator of variance of binomial variable This answer cannot be correct. An estimator cannot depend on the values of the parameters: since they are unknown it would mean that you cannot compute the estimate. An unbiased estimator of the variance for every distribution (with finite second moment) is $$ S^2 = \frac{1}{n-1}\sum_{i=1}^n (y_i - \bar{y})^2.$$ By expanding the square and using the definition of the average $\bar{y}$, you can see that $$ S^2 = \frac{1}{n} \sum_{i=1}^n y_i^2 - \frac{2}{n(n-1)}\sum_{i\neq j}y_iy_j,$$ so if the variables are IID, $$E(S^2) = \frac{1}{n} nE(y_j^2) - \frac{2}{n(n-1)} \frac{n(n-1)}{2} E(y_j)^2. $$ As you see we do not need the hypothesis that the variables have a binomial distribution (except implicitly in the fact that the variance exists) in order to derive this estimator.
Unbiased estimator of variance of binomial variable This answer cannot be correct. An estimator cannot depend on the values of the parameters: since they are unknown it would mean that you cannot compute the estimate. An unbiased estimator of the varia
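A quick simulation check of the claim above: for Bernoulli(p) data the usual sample variance, with the 1/(n-1) divisor, is unbiased for p(1-p). The values of n and p below are arbitrary illustrative choices.

set.seed(1)
n <- 10; p <- 0.3
s2 <- replicate(1e5, var(rbinom(n, size = 1, prob = p)))  # var() uses the 1/(n-1) divisor
mean(s2)        # should be close to ...
p * (1 - p)     # ... the true variance, 0.21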
33,461
Unbiased estimator of variance of binomial variable
@gui11aume is right of course. An outline of a derivation specific to a $\operatorname{Bin}(1,\pi)$ distribution follows: Find the variance in terms of $\pi$ to reparameterize the probability mass function: $$\theta=\operatorname{Var}{Y_i}=\pi(1-\pi)$$ Find the maximum-likelihood estimator of $\theta$: $$\hat\theta=\frac{\sum{y_i}}{n}\left(1-\frac{\sum{y_i}}{n}\right)$$ Calculate its expectation: $$\newcommand{\E}{\operatorname{E}}\E\hat\theta=\theta\cdot\frac{n-1}{n}.$$ Note, conveniently, that the multiplicative bias factor $\frac{n-1}{n}$ is a known constant that does not depend on $\theta$, so it can simply be divided out. Write the unbiased estimator: $$\tilde\theta=\frac{\hat\theta}{\frac{n-1}{n}}=\frac{\sum{y_i}}{n}\left(1-\frac{\sum{y_i}}{n}\right)\cdot\frac{n}{n-1}=p(1-p)\cdot\frac{n}{n-1}$$ where $p$ is the statistic $\frac{\sum{y_i}}{n}$. Because $\sum{y_i}$ is sufficient & complete, $\tilde\theta$ is not just any unbiased estimator of the population variance, but the unique minimum-variance unbiased estimator.
Unbiased estimator of variance of binomial variable
@gui11aume is right of course. An outline of a derivation specific to a $\operatorname{Bin}(1,\pi)$ distribution follows: Find the variance in terms of $\pi$ to reparameterize the probability mass fu
Unbiased estimator of variance of binomial variable @gui11aume is right of course. An outline of a derivation specific to a $\operatorname{Bin}(1,\pi)$ distribution follows: Find the variance in terms of $\pi$ to reparameterize the probability mass function: $$\theta=\operatorname{Var}{Y_i}=\pi(1-\pi)$$ Find the maximum-likelihood estimator of $\theta$: $$\hat\theta=\frac{\sum{y_i}}{n}\left(1-\frac{\sum{y_i}}{n}\right)$$ Calculate its expectation: $$\newcommand{\E}{\operatorname{E}}\E\hat\theta=\theta\cdot\frac{n-1}{n}.$$ Note thankfully that the bias term is a constant. Write the unbiased estimator: $$\tilde\theta=\frac{\hat\theta}{\frac{n-1}{n}}=\frac{\sum{y_i}}{n}\left(1-\frac{\sum{y_i}}{n}\right)\cdot\frac{n}{n-1}=p(1-p)\cdot\frac{n}{n-1}$$ where $p$ is the statistic $\frac{\sum{y_i}}{n}$ Because $\sum{y}$ is sufficient & complete, $\tilde\theta$ is not just any unbiased estimator of the population variance, but the unique minimum-variance unbiased estimator.
Unbiased estimator of variance of binomial variable @gui11aume is right of course. An outline of a derivation specific to a $\operatorname{Bin}(1,\pi)$ distribution follows: Find the variance in terms of $\pi$ to reparameterize the probability mass fu
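A simulation check of the outline above: the plug-in estimator p(1-p) is biased downward by the factor (n-1)/n, and multiplying by n/(n-1) removes the bias. The values of n and pi below are arbitrary illustrative choices.

set.seed(1)
n <- 10; pi0 <- 0.3
phat <- replicate(1e5, mean(rbinom(n, 1, pi0)))
mle  <- phat * (1 - phat)              # theta-hat = p(1-p)
mean(mle)                              # approx pi0*(1-pi0)*(n-1)/n = 0.189
mean(mle * n / (n - 1))                # approx pi0*(1-pi0)         = 0.21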
33,462
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed?
Here, as a counterexample, is a sample of points where $Y$ has a standard Normal distribution and the conditional distribution of $X$ is always Normal, too. This is obviously non-Normal. To guarantee Normality, you need (almost surely) that (1) $E[X\mid Y]$ must be a linear function of $Y$ and (2) $\operatorname{Var}(X\mid Y)$ must be constant. These are both characteristics of any Bivariate Normal distribution, so they are necessary conditions. When you write down the joint distribution implied by both these conditions, it will be Gaussian: that is, Bivariate Normal. These R commands generated the example. The conditional variance $Y^4$ is not constant. n <- 1e3 y <- rnorm(n) x <- rnorm(n, y, y^2) plot(x,y, col = "#00000040") # Semi-transparent points
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist
Here, as a counterexample, is a sample of points where $Y$ has a standard Normal distribution and the conditional distribution of $X$ is always Normal, too. This is obviously non-Normal. To guarantee
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed? Here, as a counterexample, is a sample of points where $Y$ has a standard Normal distribution and the conditional distribution of $X$ is always Normal, too. This is obviously non-Normal. To guarantee Normality, you need (almost surely) that (1) $E[X\mid Y]$ must be a linear function of $Y$ and (2) $\operatorname{Var}(X\mid Y)$ must be constant. These are both characteristics of any Bivariate Normal distribution, so they are necessary conditions. When you write down the joint distribution implied by both these conditions, it will be Gaussian: that is, Bivariate Normal. These R commands generated the example. The conditional variance $Y^4$ is not constant. n <- 1e3 y <- rnorm(n) x <- rnorm(n, y, y^2) plot(x,y, col = "#00000040") # Semi-transparent points
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist Here, as a counterexample, is a sample of points where $Y$ has a standard Normal distribution and the conditional distribution of $X$ is always Normal, too. This is obviously non-Normal. To guarantee
33,463
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed?
whuber shows, by means of a counterexample, that the product of a Gaussian $X|Y$ times a Gaussian r.v. $Y$ does not necessarily lead to a joint Gaussian distribution, which in this case certainly doesn't have a conditional Gaussian. Here is a (numerical) counter-example for your claim. Let $Y|X=x \sim N(x^3, 1)$ and $X \sim N(0,1)$. Then $(X,Y)$ is not jointly normal. The marginal of $Y$ is $$ f_Y(y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}dx. $$ Unfortunately this integral (as far as I know) cannot be computed analytically, but we can provide a fairly good approximation via adaptive quadratures. Thus the conditional we require is $$ f_{X|Y} = \frac{e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}}{\int_{-\infty}^{\infty} e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}dx}. $$ The result is disproved provided we are able to show that this conditional is not Gaussian. Toward this aim, let's fix $y = -1$. An elegant proof may be given by studying the properties of this function but here I am taking a brute-force approach in which I approximate this beast numerically. If the conditional is gaussian we expect it to be unimodal and symmetric. This is the contradiction I'll be looking for. # propto the joint distribution f_yx <- function(y, x) { exp(-0.5*(y-x^3)^2 - 0.5*x^2) } # the marginal of Y f_y <- function(y) { integrate(function(t) f_yx(y, t), lower = -Inf, Inf)$value } # the conditional of x given y x_given_y = function(x, y) f_yx(y,x)/f_y(y) # fix y y = -1 x <- seq(-3, 3, len=100) cond_y <- sapply(x, x_given_y, y=y) plot(x, cond_y, type ="l", lwd=2) If we can trust numerical integration (I'm using the integrate function here which is extremely robust!), we can see from the plot that this conditional density is definitely non-Gaussian. This contradicts the claim. Side comment. There exist non-Gaussian bivariate distributions which have Gaussian conditional densities. One example of this is $$ f(x,y) = C\exp\left(-(1+x^2)(1+y^2)\right),\quad -\infty <x,y<\infty,$$ where $C$ is the normalising constant. You can check that both $Y|X=x$ and $X|Y=x$ are Gaussian with suitable parameters.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist
whuber shows, by means of a counterexample, that the product of a Gaussian $X|Y$ times a Gaussian r.v. $Y$ does not necessarily lead to a joint Gaussian distribution, which in this case certainly does
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed? whuber shows, by means of a counterexample, that the product of a Gaussian $X|Y$ times a Gaussian r.v. $Y$ does not necessarily lead to a joint Gaussian distribution, which in this case certainly doesn't have a conditional Gaussian. Here is a (numerical) counter-example for your claim. Let $Y|X=x \sim N(x^3, 1)$ and $X \sim N(0,1)$. Then $(X,Y)$ is not jointly normal. The marginal of $Y$ is $$ f_Y(y) = \frac{1}{2\pi}\int_{-\infty}^{\infty} e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}dx. $$ Unfortunately this integral (as far as I know) cannot be computed analytically, but we can provide a fairly good approximation via adaptive quadratures. Thus the conditional we require is $$ f_{X|Y} = \frac{e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}}{\int_{-\infty}^{\infty} e^{-\frac{1}{2}(y - x^3)^2 - \frac{1}{2}x^2}dx}. $$ The result is disproved provided we are able to show that this conditional is not Gaussian. Toward this aim, let's fix $y = -1$. An elegant proof may be given by studying the properties of this function but here I am taking a brute-force approach in which I approximate this beast numerically. If the conditional is gaussian we expect it to be unimodal and symmetric. This is the contradiction I'll be looking for. # propto the joint distribution f_yx <- function(y, x) { exp(-0.5*(y-x^3)^2 - 0.5*x^2) } # the marginal of Y f_y <- function(y) { integrate(function(t) f_yx(y, t), lower = -Inf, Inf)$value } # the conditional of x given y x_given_y = function(x, y) f_yx(y,x)/f_y(y) # fix y y = -1 x <- seq(-3, 3, len=100) cond_y <- sapply(x, x_given_y, y=y) plot(x, cond_y, type ="l", lwd=2) If we can trust numerical integration (I'm using the integrate function here which is extremely robust!), we can see from the plot that this conditional density is definitely non-Gaussian. This contradicts the claim. Side comment. There exist non-Gaussian bivariate distributions which have Gaussian conditional densities. One example of this is $$ f(x,y) = C\exp\left(-(1+x^2)(1+y^2)\right),\quad -\infty <x,y<\infty,$$ where $C$ is the normalising constant. You can check that both $Y|X=x$ and $X|Y=x$ are Gaussian with suitable parameters.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist whuber shows, by means of a counterexample, that the product of a Gaussian $X|Y$ times a Gaussian r.v. $Y$ does not necessarily lead to a joint Gaussian distribution, which in this case certainly does
33,464
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed?
Here I will augment the excellent answer by whuber by showing the mathematical form of your general model and the sufficient conditions that imply a normal distribution for $Y|X$. Consider the general hierarchical model form: $$\begin{align} X|Y=y &\sim \text{N}(\mu(y),\sigma^2(y)), \\[6pt] Y &\sim \text{N}(\mu_*,\sigma^2_*). \\[6pt] \end{align}$$ This model gives the joint density kernel: $$\begin{align} f_{X,Y}(x,y) &= f_{X|Y}(x|y) f_{Y}(y) \\[12pt] &\propto \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 \Bigg) \exp \Bigg( -\frac{1}{2} \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 + \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(x-\mu(y))^2 \sigma_*^2 + (y-\mu_*)^2 \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg), \\[6pt] \end{align}$$ which gives the conditional density kernel: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg). \\[6pt] \end{align}$$ In general, this is not the form of a normal density. However, suppose we impose the following conditions on the conditional mean and variance of $X|Y$: $$\mu(y) = a + by \quad \quad \quad \quad \quad \sigma^2(y) = \sigma^2.$$ These conditions mean that we require $\mu(y) \equiv \mathbb{E}(X|Y=y)$ to be an affine function of $y$ and we require $\sigma^2(y) \equiv \mathbb{V}(X|Y=y)$ to be a fixed value. Incorporating these conditions gives: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{((a + by)^2 - 2x (a + by)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(b^2 y^2 + 2ab y + a^2 - 2xa - 2xb y) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\sigma^2 + b^2 \sigma_*^2 ) y^2 - 2(b(x - a) \sigma_*^2 + \mu_* \sigma^2) y}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{y^2 - 2[(b(x - a) \sigma_*^2 + \mu_* \sigma^2)/(\sigma^2 + b^2 \sigma_*^2) ] y}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 ) } \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{1}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 )} \cdot \Big( y - \frac{b(x - a) \sigma_*^2 + \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2} \Big)^2 \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \text{N} \Bigg( y \Bigg| \frac{b(x - a) \sigma_*^2 + \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2}, \frac{\sigma^2 \sigma_*^2}{\sigma^2 + b^2 \sigma_*^2} \Bigg). 
\\[6pt] \end{align}$$ Here we see that we have a normal distribution for $Y|X$ which confirms that the above conditions on the conditional mean and variance of $X|Y$ are sufficient to give this property.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist
Here I will augment the excellent answer by whuber by showing the mathematical form of your general model and the sufficient conditions that imply a normal distribution for $Y|X$. Consider the genera
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed? Here I will augment the excellent answer by whuber by showing the mathematical form of your general model and the sufficient conditions that imply a normal distribution for $Y|X$. Consider the general hierarchical model form: $$\begin{align} X|Y=y &\sim \text{N}(\mu(y),\sigma^2(y)), \\[6pt] Y &\sim \text{N}(\mu_*,\sigma^2_*). \\[6pt] \end{align}$$ This model gives the joint density kernel: $$\begin{align} f_{X,Y}(x,y) &= f_{X|Y}(x|y) f_{Y}(y) \\[12pt] &\propto \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 \Bigg) \exp \Bigg( -\frac{1}{2} \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \Big( \frac{x-\mu(y)}{\sigma(y)} \Big)^2 + \Big( \frac{y-\mu_*}{\sigma_*} \Big)^2 \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(x-\mu(y))^2 \sigma_*^2 + (y-\mu_*)^2 \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg), \\[6pt] \end{align}$$ which gives the conditional density kernel: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg). \\[6pt] \end{align}$$ In general, this is not the form of a normal density. However, suppose we impose the following conditions on the condtional mean and variance of $X|Y$: $$\mu(y) = a + by \quad \quad \quad \quad \quad \sigma^2(y) = \sigma^2.$$ These conditions mean that we require $\mu(y) \equiv \mathbb{E}(X|Y=y)$ to be an affine function of $y$ and we require $\sigma^2(y) \equiv \mathbb{V}(X|Y=y)$ to be a fixed value. Incorporating these conditions gives: $$\begin{align} f_{Y|X}(y|x) &\overset{y}{\propto} \frac{1}{\sigma(y)} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\mu(y)^2 - 2x \mu(y)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma(y)^2}{\sigma(y)^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{((a + by)^2 - 2x (a + by)) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &= \frac{1}{\sigma} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(b^2 y^2 + 2ab y + a^2 b^2 - 2xa - 2xb y) \sigma_*^2 + (y^2-2y\mu_* + \mu_*^2) \sigma^2}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{(\sigma^2 + b^2 \sigma_*^2 ) y^2 + 2(b(a - x) \sigma_*^2 - \mu_* \sigma^2) y}{\sigma^2 \sigma_*^2} \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{y^2 + 2[(b(a - x) \sigma_*^2 - \mu_* \sigma^2)/(\sigma^2 + b^2 \sigma_*^2) ] y}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 ) } \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \cdot \exp \Bigg( -\frac{1}{2} \Bigg[ \frac{1}{\sigma^2 \sigma_*^2/(\sigma^2 + b^2 \sigma_*^2 )} \cdot \Big( y - \frac{b(a - x) \sigma_*^2 - \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2} \Big)^2 \Bigg] \Bigg) \\[6pt] &\overset{y}{\propto} \text{N} \Bigg( y \Bigg| \frac{b(a - x) \sigma_*^2 - \mu_* \sigma^2}{\sigma^2 + b^2 \sigma_*^2}, \frac{\sigma^2 \sigma_*^2}{\sigma^2 + b^2 \sigma_*^2} \Bigg). 
\\[6pt] \end{align}$$ Here we see that we have a normal distribution for $Y|X$ which confirms that the above conditions on the conditional mean and variance of $X|Y$ are sufficient to give this property.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist Here I will augment the excellent answer by whuber by showing the mathematical form of your general model and the sufficient conditions that imply a normal distribution for $Y|X$. Consider the genera
33,465
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed?
Here is another counterexample which gives closed-form distributions of $X, Y, X|Y = y$ and $Y|X = x$. Let $Y, Z \text{ i.i.d. } \sim N(0, 1)$, and define $X = \frac{Z}{Y}$. Then for $y \neq 0$ (the probability of $Y = 0$ is zero), \begin{align} X | Y = y \sim N(0, y^{-2}). \end{align} On the other hand, it is well known that the marginal distribution of $X$ is Cauchy distribution, i.e., \begin{align} f_X(x) = \frac{1}{\pi(1 + x^2)}, \quad x \in \mathbb{R}. \tag{1} \end{align} And the joint distribution of $X$ and $Y$ can be evaluated as (where $\Phi$ and $\varphi$ denote CDF and PDF of the standard normal distribution respectively): \begin{align} & F(x, y) = P[X \leq x, Y \leq y] \\ =& P[Z \leq Yx, Y \leq y, Y > 0] + P[Z \geq Yx, Y \leq y, Y < 0] \\ =& \begin{cases} \int_{-\infty}^y(1 - \Phi(tx))\varphi(t)dt & y < 0, \\[1em] \int_0^y\Phi(tx)\varphi(t)dt & y > 0. \end{cases} \end{align} Therefore, the joint density of $(X, Y)$ is given by \begin{align} & f(x, y) = \frac{\partial^2F(x, y)}{\partial x\partial y} \\ =&\begin{cases} -y\varphi(y)\varphi(yx) & y < 0, \\[1em] y\varphi(y)\varphi(yx) & y > 0 \end{cases} \\ =& \frac{1}{2\pi}|y|e^{-(1 + x^2)y^2/2}. \tag{2} \end{align} $(1)$ and $(2)$ together yield the conditional density of $Y$ given $X = x$: \begin{align} f_{Y|X}(y|X = x) = \frac{f(x, y)}{f_X(x)} = \frac{1}{2}|y|(1 + x^2)e^{-(1 + x^2)y^2/2}. \tag{3} \end{align} Obviously, $(3)$ is not the density of any normal distribution (with $y$ as the variate). Thus $Y | X = x$ is not normal. For example, when $x = 0$, $(3)$ looks like as follows: P.S., PDF $(3)$ may be termed as "double generalized gamma distribution", based on these two articles: Generalized gamma distribution and Double Gamma Distribution. The parameters linked to the generalized gamma distribution are $a = \sqrt{2(1 + x^2)^{-1}}$ (scale) and $d = 2, p = 2$.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist
Here is another counterexample which gives closed-form distributions of $X, Y, X|Y = y$ and $Y|X = x$. Let $Y, Z \text{ i.i.d. } \sim N(0, 1)$, and define $X = \frac{Z}{Y}$. Then for $y \neq 0$ (the
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally distributed? Here is another counterexample which gives closed-form distributions of $X, Y, X|Y = y$ and $Y|X = x$. Let $Y, Z \text{ i.i.d. } \sim N(0, 1)$, and define $X = \frac{Z}{Y}$. Then for $y \neq 0$ (the probability of $Y = 0$ is zero), \begin{align} X | Y = y \sim N(0, y^{-2}). \end{align} On the other hand, it is well known that the marginal distribution of $X$ is Cauchy distribution, i.e., \begin{align} f_X(x) = \frac{1}{\pi(1 + x^2)}, \quad x \in \mathbb{R}. \tag{1} \end{align} And the joint distribution of $X$ and $Y$ can be evaluated as (where $\Phi$ and $\varphi$ denote CDF and PDF of the standard normal distribution respectively): \begin{align} & F(x, y) = P[X \leq x, Y \leq y] \\ =& P[Z \leq Yx, Y \leq y, Y > 0] + P[Z \geq Yx, Y \leq y, Y < 0] \\ =& \begin{cases} \int_{-\infty}^y(1 - \Phi(tx))\varphi(t)dt & y < 0, \\[1em] \int_0^y\Phi(tx)\varphi(t)dt & y > 0. \end{cases} \end{align} Therefore, the joint density of $(X, Y)$ is given by \begin{align} & f(x, y) = \frac{\partial^2F(x, y)}{\partial x\partial y} \\ =&\begin{cases} -y\varphi(y)\varphi(yx) & y < 0, \\[1em] y\varphi(y)\varphi(yx) & y > 0 \end{cases} \\ =& \frac{1}{2\pi}|y|e^{-(1 + x^2)y^2/2}. \tag{2} \end{align} $(1)$ and $(2)$ together yield the conditional density of $Y$ given $X = x$: \begin{align} f_{Y|X}(y|X = x) = \frac{f(x, y)}{f_X(x)} = \frac{1}{2}|y|(1 + x^2)e^{-(1 + x^2)y^2/2}. \tag{3} \end{align} Obviously, $(3)$ is not the density of any normal distribution (with $y$ as the variate). Thus $Y | X = x$ is not normal. For example, when $x = 0$, $(3)$ looks like as follows: P.S., PDF $(3)$ may be termed as "double generalized gamma distribution", based on these two articles: Generalized gamma distribution and Double Gamma Distribution. The parameters linked to the generalized gamma distribution are $a = \sqrt{2(1 + x^2)^{-1}}$ (scale) and $d = 2, p = 2$.
Suppose $X \mid Y$ and $Y$ are normally distributed. Does it follow that $Y \mid X$ is normally dist Here is another counterexample which gives closed-form distributions of $X, Y, X|Y = y$ and $Y|X = x$. Let $Y, Z \text{ i.i.d. } \sim N(0, 1)$, and define $X = \frac{Z}{Y}$. Then for $y \neq 0$ (the
33,466
Significance test for large sample sizes
The test is doing what it should be doing. You ask it whether two quantities are equal: in the case of the original question, whether zero is equal to some measure of dependence that is zero when the distributions are independent (e.g., mutual information). Since the test has considerable sensitivity, due to the large sample size, the test correctly tells you that the two quantities are not equal. This is a design feature, not a bug, of hypothesis testing that is related to consistency (power converges to $1$ as the sample size increases). If you remember the Princess and the Pea fairy tale, you may recall that, no matter how trivial we might perceive a pea under the mattress, the princess was correct about there being a pea. If you want to assert that a pea under the mattress does not matter to you, that's fine, but it is a mistake to call the princess incorrect for noticing the pea when the pea was indeed under the mattress. Because of the consistency of most tests and the frequent availability of large amounts of data, hypothesis tests certainly can find differences that, while they are there, are not important or interesting, much like most people would not care if there is a pea under the mattress. This gets into the effect size and what kind of effect size is interesting. While statistics can (and has) come up with interesting ways to quantify effect sizes, determining an interesting effect size mostly falls outside of the realm of statistics and within the domains to which statistics is applied (the experts in medicine decide that for COVID studies, the experts in economics decide that for unemployment studies, etc). Once you have an effect size of interest, there are a number of statistical tricks related to it. The first is that investigators can calculate the sample size required for detecting such an effect size with a certain power and $\alpha$-level. This is not so important for the situation where you already have a ton of data, but it is worth a mention. Examples of this are available in the pwr package in R. A second trick is equivalence testing, the easiest example of which to understand is two one-sided tests: TOST. Briefly, TOST does two hypothesis tests in order to bound our estimate of the true effect, rejecting that the true effect is too high or too low. A third trick is interval estimation. A frequentist might calculate a confidence interval to put bounds on the effect size, and a large sample size would lead to a relatively narrow confidence interval and correspondingly high precision in the estimate. A Bayesian might calculate a credible interval for the same purpose. All else equal, the credible interval should be narrower for a larger sample size, with a large sample resulting in a tight estimate of the true effect. Whether you go frequentist or Bayesian, a tight estimate and high precision sound desirable. Once you have a range of plausible parameter values given by the interval estimate, you can analyze whether any of those have any practical significance by comparing them to your effect size of interest. Depending on what you want to do, one of these three might be reasonable for handling situations where large data sets are available. However, they all need some declaration of an effect size of interest!
Significance test for large sample sizes
The test is doing what it should be doing. You ask it if two quantities are equal, in the case of the original question, if zero is equal to some measure of independence that is zero when the distribu
Significance test for large sample sizes The test is doing what it should be doing. You ask it if two quantities are equal, in the case of the original question, if zero is equal to some measure of independence that is zero when the distributions are independent (e.g., mutual information). Since the test has considerable sensitivity, due to the large sample size, the test correctly tells you that the two quantities are not equal. This is a design feature, not a bug, of hypothesis testing that is related to consistency (power converges to $1$ as the sample size increases). If you remember the Princess and the Pea fairy tale, you may recall that, no matter how trivial we might perceive a pea under the mattress, the princess was correct about there being a pea. If you want to assert that a pea under the mattress does not matter to you, that's fine, but it is a mistake to call the princess incorrect for noticing the pea when the pea was indeed under the mattress. Because of the consistency of most tests are the frequent availability of large amounts of data, hypothesis tests certainly can find differences that, while they are there, are not important or interesting, much like most people would not care if there is a pea under the mattress. This gets into the effect size and what kind of effect size is interesting. While statistics can (and has) come up with interesting ways to quantify effect sizes, determining an interesting effect size mostly falls outside of the realm of statistics and in the domains to which statistics is applied (the experts in medicine decide that for COVID studies, the experts in economics decide that for unemployment studies, etc). Once you have an effect size of interest, there are a number of statistics tricks related to it. First is that investigators can calculate the sample size required for detecting such an effect size with a certain power and $\alpha$-level. This is not so important for the situation where you already have a ton of data, but it is worth a mention. Examples of this are available in the pwr package in R. A second trick is equivalence testing, the easiest example of which to understand is two one-sided tests: TOST. Briefly, TOST does two hypothesis tests in order to bound our estimate of the true effect, rejecting that the true effect is too high or too low. A third trick is interval estimation. A frequentist might calculate a confidence interval to put bounds on the effect size, and a large sample size would lead to a relatively narrow confidence interval and correspondingly high precision in the estimate. A Bayesian might calculate a credible interval for the same purpose. All else equal, the credible interval should be narrower for a larger sample size, with a large sample resulting in a tight estimate of the true effect. Whether you go frequentist or Bayesian, a tight estimate and high precision sounds desirable. Once you have a range of plausible parameter values given by the interval estimate, you can analyze if any of those have any practical significance by comparing to your effect size. Depending on what you want to do, one of these three might be reasonable for handling situations where large data sets are available. However, they all need some declaration of an effect size of interest!
Significance test for large sample sizes The test is doing what it should be doing. You ask it if two quantities are equal, in the case of the original question, if zero is equal to some measure of independence that is zero when the distribu
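A toy illustration of the interval-estimation point above: with n in the millions even a trivial difference is "significant", so judge the estimated effect against a margin of practical interest instead. The proportions, sample size and the 0.01 margin below are arbitrary illustrative choices.

set.seed(1)
n  <- 2e6
x1 <- rbinom(1, n, 0.500)
x2 <- rbinom(1, n, 0.503)
prop.test(c(x1, x2), c(n, n))$p.value         # tiny p-value despite a 0.3 percentage-point gap

# Interval estimate: is the difference inside a margin we consider negligible?
ci <- prop.test(c(x1, x2), c(n, n))$conf.int
ci                                            # compare against c(-0.01, 0.01)
all(ci > -0.01 & ci < 0.01)                   # TRUE -> statistically detectable but practically negligible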
33,467
Significance test for large sample sizes
This is a general phenomenon in hypothesis testing for a point-null hypothesis What you are dealing with here is a much larger issue in hypothesis testing than just the chi-squared test of independence. This is a phenomenon that arises in classical hypothesis testing whenever you are testing a point-null hypothesis (i.e., a null hypothesis that stipulates a single point for an unknown parameter). In such cases, the null hypothesis is a choice of a single parameter value, usually over an uncountable set of possible values. That is an extremely specific null hypothesis. To understand the phenomenon you are referring to, let's have a look at the properties of a consistent hypothesis test. Hypothesis tests are designed to test the specified null hypothesis and reject it (in favour of a specified alternative) if the evidence falsifies the null hypothesis. (More information on the mathematical structure of a hypothesis test is avaialble in this related answer.) Suppose you have data $\mathbf{x}_n$ and an unknown parameter $\theta \in \Theta$ and you pick a null hypothesis space $\Theta_0 \subset \Theta$. Let $\alpha$ denote the significance level for the test and let $\beta_n$ denote the resulting power function. A consistent hypothesis test will have the following limiting property for its power function: $$\lim_{n \rightarrow \infty} \beta_n(\theta) = 1 \quad \quad \quad \quad \quad \text{for all }\theta \in \Theta-\Theta_0 \text{ and } 0< \alpha <1,$$ which implies the following limiting property for its p-value function: $$\underset{n \rightarrow \infty}{\text{plim}} \ p(\mathbf{x}_n) = 0 \quad \quad \quad \quad \quad \text{for all }\theta \in \Theta-\Theta_0. \quad \quad \quad \quad \quad \quad \quad $$ Consistency under the point-null hypothesis: Hypothesis tests are designed to test the truth or falsity of the hypotheses you actually give them, so if you use an extremely specific null hypothesis, and that hypothesis is even slightly false, the test is designed to correctly infer that the null hypothesis is false. In particular, if the point-null value is $\theta_0$ then for any parameter value $\theta \neq \theta_0$ you will have $\text{plim}_{n \rightarrow \infty} p(\mathbf{x}_n) = 0$ (i.e., the p-value will converge stochastically to zero). One of the problems that arises in hypothesis testing occurs when we set a point-null hypothesis in a circumstance where that specific hypothesis is almost certainly false, but what we really want to know is something a bit broader --- e.g., whether the specified point-null is almost correct. A common case occurs when the parameter can be considered to be a continuous random variable, such that it will be equal to the stipulated point-null value with zero probability.$^\dagger$ In this case, the null hypothesis is false with probability one and so with a large amount of data the test gives us a tiny p-value which tells us that the null is false. Some users view this as a deficiency of the hypothesis test, but it is actually a case where the test is doing exactly what you ask it to. By specifying a point-null hypothesis you are asking the test to be very specific about the null hypothesis under consideration, and the test is complying with this instruction. So, what can you do to deal with this "problem". Firstly, you ought to recognise that you need to test the hypothesis you are actually interested in, not a hypothesis that is mathematically close to this but much more specific. 
Typically you can do this by setting some "tolerance" $\epsilon>0$ on your stipulated point-null value and testing the composite null hypothesis $\theta_0 - \epsilon \leqslant \theta \leqslant \theta_0 + \epsilon$. You can view the tolerance value as a measure of "practical significance", meaning that if the true parameter value is within the stipulated tolerance of the point-null value then it is "practically" equivalent to the point-null value. In this manner you can separate "statistical significance" from "practical significance" and ensure that the consistency property of the hypothesis test does not lead the p-value to converge to zero in cases where you don't want it to. Implementation of a "tolerance" in the null hypothesis for the chi-squared test of independence is quite complicated and so outside the scope of the present post (but feel free to ask a separate question for how to do this). In general you can alter existing tests to include a tolerance on a point-null hypothesis but you need to re-derive the test as a composite test to determine how the composite hypothesis affects the p-value function. This is a complicated exercise in general, but it can be automated into customised p-value functions once derived. $^\dagger$ You will sometimes see statistical commentators make a broader assertion that a point-null hypothesis is always false. That is not true --- a point-null hypothesis can be true. Moreover, even in the case where the parameter is viewed as a random variable, it can be equal to a specific value with positive probability. It is only if we are willing to stipulate that the parameter is a continuous random variable that it has probability zero of being equal to any specific value.
Significance test for large sample sizes
This is a general phenomenon in hypothesis testing for a point-null hypothesis What you are dealing with here is a much larger issue in hypothesis testing than just the chi-squared test of independenc
Significance test for large sample sizes This is a general phenomenon in hypothesis testing for a point-null hypothesis What you are dealing with here is a much larger issue in hypothesis testing than just the chi-squared test of independence. This is a phenomenon that arises in classical hypothesis testing whenever you are testing a point-null hypothesis (i.e., a null hypothesis that stipulates a single point for an unknown parameter). In such cases, the null hypothesis is a choice of a single parameter value, usually over an uncountable set of possible values. That is an extremely specific null hypothesis. To understand the phenomenon you are referring to, let's have a look at the properties of a consistent hypothesis test. Hypothesis tests are designed to test the specified null hypothesis and reject it (in favour of a specified alternative) if the evidence falsifies the null hypothesis. (More information on the mathematical structure of a hypothesis test is avaialble in this related answer.) Suppose you have data $\mathbf{x}_n$ and an unknown parameter $\theta \in \Theta$ and you pick a null hypothesis space $\Theta_0 \subset \Theta$. Let $\alpha$ denote the significance level for the test and let $\beta_n$ denote the resulting power function. A consistent hypothesis test will have the following limiting property for its power function: $$\lim_{n \rightarrow \infty} \beta_n(\theta) = 1 \quad \quad \quad \quad \quad \text{for all }\theta \in \Theta-\Theta_0 \text{ and } 0< \alpha <1,$$ which implies the following limiting property for its p-value function: $$\underset{n \rightarrow \infty}{\text{plim}} \ p(\mathbf{x}_n) = 0 \quad \quad \quad \quad \quad \text{for all }\theta \in \Theta-\Theta_0. \quad \quad \quad \quad \quad \quad \quad $$ Consistency under the point-null hypothesis: Hypothesis tests are designed to test the truth or falsity of the hypotheses you actually give them, so if you use an extremely specific null hypothesis, and that hypothesis is even slightly false, the test is designed to correctly infer that the null hypothesis is false. In particular, if the point-null value is $\theta_0$ then for any parameter value $\theta \neq \theta_0$ you will have $\text{plim}_{n \rightarrow \infty} p(\mathbf{x}_n) = 0$ (i.e., the p-value will converge stochastically to zero). One of the problems that arises in hypothesis testing occurs when we set a point-null hypothesis in a circumstance where that specific hypothesis is almost certainly false, but what we really want to know is something a bit broader --- e.g., whether the specified point-null is almost correct. A common case occurs when the parameter can be considered to be a continuous random variable, such that it will be equal to the stipulated point-null value with zero probability.$^\dagger$ In this case, the null hypothesis is false with probability one and so with a large amount of data the test gives us a tiny p-value which tells us that the null is false. Some users view this as a deficiency of the hypothesis test, but it is actually a case where the test is doing exactly what you ask it to. By specifying a point-null hypothesis you are asking the test to be very specific about the null hypothesis under consideration, and the test is complying with this instruction. So, what can you do to deal with this "problem". Firstly, you ought to recognise that you need to test the hypothesis you are actually interested in, not a hypothesis that is mathematically close to this but much more specific. 
Typically you can do this by setting some "tolerance" $\epsilon>0$ on your stipulated point-null value and testing the composite null hypothesis $\theta_0 - \epsilon \leqslant \theta \leqslant \theta_0 + \epsilon$. You can view the tolerance value as a measure of "practical significance", meaning that if the true parameter value is within the stipulated tolerance of the point-null value then it is "practically" equivalent to the point-null value. In this manner you can separate "statistical significance" from "practical significance" and ensure that the consistency property of the hypothesis test does not lead the p-value to converge to zero in cases where you don't want it to. Implementation of a "tolerance" in the null hypothesis for the chi-squared test of independence is quite complicated and so outside the scope of the present post (but feel free to ask a separate question for how to do this). In general you can alter existing tests to include a tolerance on a point-null hypothesis but you need to re-derive the test as a composite test to determine how the composite hypothesis affects the p-value function. This is a complicated exercise in general, but it can be automated into customised p-value functions once derived. $^\dagger$ You will sometimes see statistical commentators make a broader assertion that a point-null hypothesis is always false. That is not true --- a point-null hypothesis can be true. Moreover, even in the case where the parameter is viewed as a random variable, it can be equal to a specific value with positive probability. It is only if we are willing to stipulate that the parameter is a continuous random variable that it has probability zero of being equal to any specific value.
Significance test for large sample sizes This is a general phenomenon in hypothesis testing for a point-null hypothesis What you are dealing with here is a much larger issue in hypothesis testing than just the chi-squared test of independenc
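A sketch of the behaviour described above, using a binomial proportion. The true p is 0.501, the point null is p = 0.5, and the tolerance eps = 0.01 is an arbitrary illustrative choice; the composite-null p-value is approximated here by a one-sided test against the nearest boundary of the tolerance interval, which is a simplification of the full composite test.

set.seed(1)
p_true <- 0.501; p0 <- 0.5; eps <- 0.01
for (n in c(1e4, 1e6, 1e8)) {
  x <- rbinom(1, n, p_true)
  point_p <- binom.test(x, n, p = p0)$p.value
  # one-sided test against the nearest tolerance boundary p0 + eps
  composite_p <- binom.test(x, n, p = p0 + eps, alternative = "greater")$p.value
  cat(sprintf("n = %1.0e  point-null p = %.3g  composite-null p = %.3g\n",
              n, point_p, composite_p))
}
# The point-null p-value shrinks toward 0 as n grows; the composite-null
# p-value stays large because the true p lies inside the tolerance interval.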
33,468
Significance test for large sample sizes
When we do sample size determination for clinical trials we define a clinically significant (or clinically important) difference. That is a difference that is large enough to be worth detecting. The definition is given by the clinician. It is not a statistical issue. It depends on the clinical problem and requires a clinical judgement. Once the clinician has decided on that we pick the smallest sample size required to have high power (80% or more) for detecting a difference that large. In your case, where you already have millions of samples, what you can do is rephrase the question. Instead of the standard null hypothesis that the difference is 0, which you reject if you can determine that it is any size different from 0, define a delta that represents what you think is an important distance. Then you reject the null hypothesis only if the test indicates that the difference is greater than delta.
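A minimal sketch of that reformulated test for a difference in means, assuming a normal approximation for the difference; the function name and the choice of delta are placeholders, not from any particular trial:

```python
import numpy as np
from scipy.stats import norm

def min_effect_test(x1, x2, delta):
    """Test H0: |mu1 - mu2| <= delta  vs  H1: |mu1 - mu2| > delta,
    where delta is the clinically important difference chosen by the clinician."""
    diff = np.mean(x1) - np.mean(x2)
    se = np.sqrt(np.var(x1, ddof=1) / len(x1) + np.var(x2, ddof=1) / len(x2))
    # p-value evaluated at the least favourable point of H0 (|mu1 - mu2| = delta),
    # using a normal approximation for the sampling distribution of the difference.
    p = norm.sf((abs(diff) - delta) / se) + norm.sf((abs(diff) + delta) / se)
    return diff, p
```

With millions of observations the standard error is tiny, so this test only rejects when the estimated difference is credibly larger than the delta you consider important, not merely different from zero.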
33,469
Significance test for large sample sizes
Take a look at this paper by the late Jack Good: http://fitelson.org/probability/good_bnbc.pdf ; in Section 4.3, his "Bayes/Non-Bayes Compromise" leads to the definition of a "standardized" p-value which tries to address the "Huge $n$ $\Rightarrow$ Highly Probable to Reject Null, Whatever Data" effect.
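As I recall the idea, the standardized p-value rescales the ordinary p-value to a reference sample size of 100 and caps it at one half; treat the exact formula below as an assumption to be checked against Section 4.3 of the paper rather than a quotation of it:

```python
import math

def standardized_p(p, n, n_ref=100):
    # Good's standardization as I remember it: rescale p to a reference sample
    # size, capped at 1/2, so a huge n no longer guarantees a tiny p by itself.
    # Verify the exact form against the paper before relying on it.
    return min(0.5, p * math.sqrt(n / n_ref))
```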
33,470
Significance test for large sample sizes
If your sample is large enough then it seems to me that a statistical test is not needed. You have characterised the effect. Is the effect that you have characterised large enough to be interesting? If so, then make a reasoned and principled argument about the observations without recourse to a testing procedure.
33,471
Significance test for large sample sizes
Congratulations! You have a big enough sample size that you don't need to bother with significance testing! So don't worry about it. Now you just need to decide if the effect you see is "big enough" to care about, which is an entirely different problem that has nothing to do with significance testing. A statistical significance test, like a Chi square test, is attempting to answer a very specific problem: "how likely is it that a difference I observe in a random sample is just an artifact of sampling error (the error that arises when we try to make generalizations about a population using only a random sample of that population)?" That's it. The fact that a test is significant doesn't tell us anything about whether the effect is "big" or "meaningful" in some substantive sense, or even that it's actually "real" (it might be due to some measurement error, or confounding with other variables). Now, as sample size increases the likelihood of an observed difference of a given size being an artifact of sampling error goes down, and so significance tests will tend to be significant basically all of the time. This is just because the problem they are trying to help you solve has already been (largely) solved by the large sample size. So, in your case sampling error is not a particularly big problem, so a significance test is not very helpful. Rather, what you need to decide is if the relationship you are looking at is "big enough" or not. That's not a question that can be answered with a statistical test. You need to use your knowledge of the subject matter to decide if the relationship is large enough to "make a difference in the real world." No statistical test can answer that question for you, and neither can anyone here, unless they also happen to know a lot about Google n-grams, and the specific research question you are asking.
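An illustrative simulation of that point (made-up data, not the asker's n-grams): hold a practically negligible difference in means fixed and watch the p-value collapse as the sample size grows.

```python
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)
effect = 0.01                       # tiny, fixed true difference (in SD units)
for n in [100, 10_000, 1_000_000]:
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(effect, 1.0, n)
    # Same substantive effect each time; only the sample size changes.
    print(n, ttest_ind(a, b).pvalue)
```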
33,472
Significance test for large sample sizes
The problem here is that it is bad statistical practice to propose a statistical test as part of an analysis plan without any knowledge of the sample size. If you did know the sample size, you should adjust the alpha accordingly. Ideally, a highly calibrated test such as the one you get with a very, very large N should balance the false positive error rate against the power - not in any objective sense, but there's no reason to keep the usual (arbitrary) 0.05 when the sample size is so large that a "significant" result is basically meaningless. The only way to know this is by doing simulations and understanding how the test statistic behaves when the null is true. There's no reason you can't set alpha = 0.0000001 and use the corresponding upper quantile of the null distribution to define the critical value of the one-sided test, which may be a suitably large quantity.
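A minimal sketch of that last step, assuming a chi-squared test statistic with 1 degree of freedom; the alpha and degrees of freedom here are arbitrary placeholders, not a recommendation:

```python
from scipy.stats import chi2

alpha, df = 1e-7, 1
# Reject only when the observed statistic exceeds this upper quantile of the null.
critical_value = chi2.ppf(1 - alpha, df)
print(critical_value)   # roughly 28.4 for df = 1, versus 3.84 at the usual 0.05
```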
33,473
Significance test for large sample sizes
This is my current recommendation:

Statistical Tests (Hypothesis Testing)
A statistical significance test, like a Chi square test, is attempting to answer a very specific problem: "how likely is it that a difference I observe in a random sample is just an artifact of sampling error (the error that arises when we try to make generalizations about a population using only a random sample of that population)?"

Small Sample Size
What is a small sample size? When power is not too high (e.g. 0.9999) and the p-value/CIs are not too small; usually n<500 or 300.
Summary of what to do for a small sample size (Hypothesis Testing):
- p-value: stat test (e.g. t-test) with p-value & significance level
- effect size: report the effect size (e.g. Cohen's d), see if it falls in the common ~0.2 (small), 0.5 (medium), 0.8 (large) range & compare it to eps/pooled_std(group1, group2) (see the sketch after this answer)
- CI: CIs, do they intersect given the epsilon that matters for your application?
- Power/sample size: making an estimate of your std (or using preliminary data), get the power of your test with a given sample size or compute the sample size you need to achieve good power.
ref:
- also fantastic reference: https://stats.stackexchange.com/a/602978/28986

Large Sample Size
Same as the previous section, but n>500 is a good rule of thumb.
Summary of what to do for a large sample size (Hypothesis Testing):
- CI: CIs, using the epsilon valid in your application
- Effect size: report the effect size, see if it falls in the common ~0.2 (small), 0.5 (medium), 0.8 (large) range & compare it to eps/pooled_std(group1, group2)
- LRT: todo
- eps != 0 p-value: todo

Hands-on example
See the my_test_using_stds_from_real_expts_() function in effect_size.py.

Todo later (for large sample size)
- LRT (theory & python), mainly for large sample size
- hypothesis testing with non-zero epsilon (theory & python), mainly for large sample size
ref:
- Fantastic reference: Significance test for large sample sizes
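For the effect-size bullets above, a generic pooled-SD Cohen's d sketch (this is a plain illustration, not the my_test_using_stds_from_real_expts_() function mentioned):

```python
import numpy as np

def cohens_d(x1, x2):
    """Cohen's d with the pooled sample standard deviation in the denominator."""
    n1, n2 = len(x1), len(x2)
    pooled_var = ((n1 - 1) * np.var(x1, ddof=1) + (n2 - 1) * np.var(x2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt(pooled_var)
```

The resulting d can then be compared to the ~0.2 / 0.5 / 0.8 benchmarks or to eps divided by the pooled standard deviation, as described above.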
33,474
What are the software limitations in all possible subsets selection in regression?
I suspect 30--60 is about the best you'll get. The standard approach is the leaps-and-bounds algorithm which doesn't require fitting every possible model. In $R$, the leaps package is one implementation. The documentation for the regsubsets function in the leaps package states that it will handle up to 50 variables without complaining. It can be "forced" to do more than 50 by setting the appropriate boolean flag. You might do a bit better with some parallelization technique, but the number of total models you can consider will (almost undoubtedly) only scale linearly with the number of CPU cores available to you. So, if 50 variables is the upper limit for a single core, and you have 1000 cores at your disposal, you could bump that to about 60 variables.
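To make the combinatorics concrete, here is a brute-force enumeration sketch in Python (not the leaps-and-bounds algorithm, which avoids fitting every model); with p predictors there are 2^p - 1 non-empty subsets, which is why roughly 50 variables is about the practical ceiling even before parallelization:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)
n, p = 200, 10                      # keep p small; p = 50 would mean ~1.1e15 models
X = rng.normal(size=(n, p))
y = X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n)   # toy response

best = {}                           # best subset of each size, by residual sum of squares
for k in range(1, p + 1):
    for cols in combinations(range(p), k):
        Xk = np.column_stack([np.ones(n), X[:, cols]])
        beta, rss, rank, _ = np.linalg.lstsq(Xk, y, rcond=None)
        rss = rss[0] if rss.size else float(np.sum((y - Xk @ beta) ** 2))
        if k not in best or rss < best[k][0]:
            best[k] = (rss, cols)

print({k: v[1] for k, v in best.items()})
```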
33,475
What are the software limitations in all possible subsets selection in regression?
Just a caveat, but feature selection is a risky business, and the more features you have, the more degrees of freedom you have with which to optimise the feature selection criterion, and hence the greater the risk of over-fitting the feature selection criterion and in doing so obtaining a model with poor generalisation ability. It is possible that with an efficient algorithm and careful coding you can perform all-subsets selection with a large number of features, but that doesn't mean that it is a good idea to do it, especially if you have relatively few observations. If you do use all-subsets selection, it is vital to properly cross-validate the whole model fitting procedure (so that all-subsets selection is performed independently in each fold of the cross-validation). In practice, ridge regression with no feature selection often out-performs linear regression with feature selection (that advice is given in Millar's monograph on feature selection).
33,476
What are the software limitations in all possible subsets selection in regression?
I was able to generate all possible subsets using 50 variables in SAS. I do not believe there is any hard limitation other than memory and CPU speed. Edit: I generated the 2 best models for N=1 to 50 variables for 5000 observations. @levon9 - No, this ran in under 10 seconds. I generated 50 random variables from (0,1). -Ralph Winters
33,477
What are the software limitations in all possible subsets selection in regression?
As $N$ gets big, your ability to use maths becomes absolutely crucial. "Inefficient" mathematics will cost you at the PC. The upper limit depends on what equation you are solving. Avoiding matrix inverse or determinant calculations is a big advantage. One way to help with increasing the limit is to use theorems for decomposing a large matrix inverse into smaller matrix inverses. This can often mean the difference between feasible and not feasible. But this involves some hard work, and often quite complicated mathematical manipulations! But it is usually worth the time. Do the maths or do the time! Bayesian methods might be able to give an alternative way to get your result - might be quicker, which means your "upper limit" will increase (if only because it gives you two alternative ways of calculating the same answer - the smaller of the two will always be smaller than one of them!). If you can calculate a regression coefficient without inverting a matrix, then you will probably save a lot of time. This may be particularly useful in the Bayesian case, because "inside" a normal marginalisation integral, the $X^{T}X$ matrix does not need to be inverted, you just calculate a sum of squares. Further, the determinant of the matrix will form part of the normalising constant. So "in theory" you could use sampling techniques to numerically evaluate the integral (even though it has an analytic expression), which will be eons faster than trying to evaluate the "combinatorial explosion" of matrix inverses and determinants. (It will still be a "combinatorial explosion" of numerical integrations, but this may be quicker.) This suggestion above is a bit of a "thought bubble" of mine. I want to actually test it out, see if it's any good. I think it would be (5,000 simulations + calculating exp(sum of squares) + calculating a weighted average beta should be faster than matrix inversion for a big enough matrix). The cost is that you get approximate rather than exact estimates. There is nothing to stop you from using the same set of pseudo random numbers to numerically evaluate the integral, which will, again, save you a great deal of time. There is also nothing stopping you from using a combination of the two techniques: use exact calculation when the matrices are small and simulation when they are big, because in this part of the analysis they are just different numerical techniques - just pick the technique which is quickest! Of course this is all just a bit of a "hand wavy" argument; I don't know exactly which software packages are best to use - and worse, it is hard to figure out which algorithms they actually use.
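A small sketch of the "avoid the explicit inverse" point using ordinary least squares (a generic illustration, not tied to any particular Bayesian computation): the coefficients and residual sum of squares can be obtained from a least-squares solver without ever forming $(X^{T}X)^{-1}$ or its determinant.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 5
X = np.column_stack([np.ones(n), rng.normal(size=(n, p))])
y = rng.normal(size=n)

# Textbook formula with an explicit inverse (slower and less numerically stable).
beta_naive = np.linalg.inv(X.T @ X) @ X.T @ y

# QR/SVD-based solver: same coefficients, no explicit inverse anywhere.
beta_lstsq, rss, rank, _ = np.linalg.lstsq(X, y, rcond=None)

print(np.allclose(beta_naive, beta_lstsq))   # True
```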
33,478
Strange pattern in standard deviation confidence interval estimation via bootstrapping
You might have a bug in your code, or the bootstrap library does something other than what you expect. Edit: After corrected data was provided, it became apparent that the pattern was caused by one outlier, with each peak corresponding to a different number of times the outlier was selected into a sample.
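A hypothetical reconstruction of that pattern (invented data, since the original sample isn't shown here): bootstrap standard deviations from a 21-point sample containing a single outlier cluster according to how many copies of the outlier each resample happens to contain, which produces the distinct peaks.

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.concatenate([rng.normal(10, 1, 20), [30.0]])   # 21 points, one outlier

sds, counts = [], []
for _ in range(10_000):
    idx = rng.integers(0, sample.size, sample.size)         # bootstrap resample indices
    sds.append(np.std(sample[idx], ddof=1))
    counts.append(np.sum(idx == sample.size - 1))            # copies of the outlier drawn

sds, counts = np.array(sds), np.array(counts)
for k in range(4):
    # Mean bootstrap SD jumps with each extra copy of the outlier: separate modes.
    print(k, round(sds[counts == k].mean(), 2))
```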
33,479
Strange pattern in standard deviation confidence interval estimation via bootstrapping
I am hesitant to put this down as an answer, but to me this seems to be caused by the small number of data points you base your bootstrap on (21, correct me if I'm wrong). To be more precise, it seems these specific 21 values, from which you sample, admit only a few standard deviations that occur frequently (the peaks in your histogram). If the base sample were larger and more diverse, the resulting histogram would be much smoother (and probably more like the normal distribution you were expecting). On a general note, and assuming I am right here, this is a good example to show that bootstrapping does not solve the problems of having a small sample.
33,480
How can a probability distribution diverge?
Somehow, if you would take the area of a diverging Gamma distribution, you could express it as the area of a dirac delta distribution, plus something more since it has non zero weight at $x \neq 0$, so it would be bigger than one. That's where your reasoning goes wrong: you can't automatically express any function which is infinite at $x = 0$ as a delta distribution plus something more. After all, if you could do this with $\delta(x)$, who's to say you couldn't also do it with $2\delta(x)$? Or $10^{-10}\delta(x)$? Or any other coefficient? It's just as valid to say that those distributions are zero for $x\neq 0$ and infinite at $x = 0$; why not use the same reasoning with them? Actually, distributions (in the mathematical sense of distribution theory) should be thought of more like functions of functions - you put in a function and get out a number. For the delta distribution specifically, if you put in the function $f$, you get out the number $f(0)$. Distributions are not normal number-to-number functions. They're more complicated, and more capable, than such "ordinary" functions. This idea of turning a function into a number is quite familiar to anyone who's used to dealing with probability. For example, the series of distribution moments - mean, standard deviation, skewness, kurtosis, and so on - can all be thought of as rules that turn a function (the probability distribution) into a number (the corresponding moment). Take the mean/expectation value, for instance. This rule turns a probability distribution $P(x)$ into the number $E_P[x]$, calculated as $$E_P[x] = \int P(x)\,x\ \mathrm{d}x$$ Or the rule for variance turns $P(x)$ into the number $\sigma_P^2$, where $$\sigma_P^2[x] = \int P(x)\,(x - E_P[x])^2\ \mathrm{d}x$$ My notation is a little weird here, but hopefully you get the idea.1 You may notice something these rules have in common: in all of them, the way you get from the function to the number is by integrating the function times some other weighting function. This is a very common way to represent mathematical distributions. So it's natural to wonder, is there some weighting function $\delta(x)$ that allows you to represent the action of a delta distribution like this? $$f\to \int \delta(x)\, f(x)\ \mathrm{d}x$$ You can easily establish that if there is such a function, it has to be equal to $0$ at every $x\neq 0$. But you can't get a value for $\delta(0)$ in this way. You can show that it's larger than any finite number, but there is no actual value for $\delta(0)$ that makes this equation work out, using the standard ideas of integration.2 The reason for that is that there's more to the delta distribution than just this: $$\begin{cases}0, & x\neq 0 \\ \infty, & x = 0\end{cases}$$ That "$\infty$" is misleading. It stands in for a whole extra set of information about the delta distribution that normal functions just can't represent. And that's why you can't meaningfully say that the gamma distribution is "more" than the delta distribution. Sure, at any $x > 0$, the value of the gamma distribution is more than the value of the delta distribution, but all the useful information about the delta distribution is locked up in that point at $x = 0$, and that information is too rich and complex to allow you to say that one distribution is more than the other. Technical details 1Actually, you can flip things around and think of the probability distribution itself as the mathematical distribution. 
In this sense, the probability distribution is a rule that takes a weighting function, like $x$ or $(x - E[x])^2$, to a number, $E[x]$ or $\sigma_x^2$ respectively. If you think about it that way, the standard notation makes a bit more sense, but I think the overall idea is a bit less natural for a post about mathematical distributions. 2Specifically, by "standard ideas of integration" I'm talking about Riemann integration and Lebesgue integration, both of which have the property that two functions which differ only at a single point must have the same integral (given the same limits). If there were a function $\delta(x)$, it would differ from the function $0$ at only one point, namely $x = 0$, and thus the two functions' integrals would always have to be the same. $$\int_a^b \delta(x)f(x)\ \mathrm{d}x = \int_a^b (0)f(x)\ \mathrm{d}x = 0$$ So there is no number you can assign to $\delta(0)$ that makes it reproduce the effect of the delta distribution.
33,481
How can a probability distribution diverge?
The Dirac delta is really not overly helpful here (although it is interesting), because the Gamma distribution has a continuous density, whereas the Dirac is about as non-continuous as you can get. You are right that the integral of a probability density must be one (I'll stick to densities defined on the positive axis only), $$ \int_0^\infty f(x)\,dx =1.$$ In the Gamma case, the density $f(x)$ diverges as $x\to 0$, so we have what is called an improper integral. In such a case, the integral is defined as the limit as the integration boundaries approach the point where the integrand is not defined, $$ \int_0^\infty f(x)\,dx := \lim_{a\to 0}\int_a^\infty f(x)\,dx,$$ as long as this limit exists. (Incidentally, we use the same abuse of notation to give a meaning to the symbol "$\int^\infty$", which is defined as the limit of the integral $\int^b$ as $b\to\infty$, again as long as this limit exists. So in this particular case, we have two problematic points - $0$, where the integrand is not defined, and $\infty$, where we can't evaluate the integral directly. We need to work with limits in both cases.) For the Gamma distribution specifically, we kind of side-step the problem. We first define the Gamma function as follows: $$\Gamma(k) := \int_0^\infty y^{k-1}e^{-y}\,dy.$$ We next prove that this definition actually makes sense, in the sense of the different limits outlined above. For simplicity, we can here stick to $k>0$, although the definition can be extended to (many) complex values $k$ as well. This check is a standard application of calculus and a nice exercise. Next, we substitute $x:=\theta y$ for $\theta>0$ and by the change of variables formula obtain $$\Gamma(k) = \int_0^\infty \frac{x^{k-1}e^{-\frac{x}{\theta}}}{\theta^k}\,dx,$$ from which we get that $$1 = \int_0^\infty \frac{x^{k-1}e^{-\frac{x}{\theta}}}{\Gamma(k)\theta^k}\,dx.$$ That is, the integrand integrates to one and is therefore a probability density. We call it the Gamma distribution with shape $k$ and scale $\theta$. Now, I realize that I really passed the buck here. The meat of the argument lies in the fact that the Gamma function definition above does make sense. However, this is straightforward calculus, not statistics, so I only feel very slightly guilty in referring you to your favorite calculus textbook and the gamma-function tag at Math.SO, especially this question and this question.
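If you want a numerical sanity check of that normalisation (a sketch, using a shape parameter below 1 so the density really does diverge at 0; the split at 1 just keeps the quadrature well behaved around the singularity):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

k, theta = 0.5, 2.0   # shape < 1: the density blows up as x -> 0
pdf = lambda x: x**(k - 1) * np.exp(-x / theta) / (gamma_fn(k) * theta**k)

# Improper integral handled in two pieces: the singular end and the infinite tail.
area = quad(pdf, 0, 1)[0] + quad(pdf, 1, np.inf)[0]
print(area)   # ~1.0, despite the unbounded density at the origin
```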
33,482
How can a probability distribution diverge?
Consider a standard exponential density $f(x)=\exp(-x)\,,\:x>0$ and picture a plot of $y=f(x)$ vs $x$. Presumably you don't find it unfathomable that there's positive density for all $x>0$ yet the area is nonetheless $1$. Now let's exchange $x$ and $y$ ... that is, let $x=\exp(-y)$, or $y = -\ln(x)$, for $0<x\leq 1$. Now this is a valid density, which asymptotes to the $y$ axis (so it's unbounded as $x\to 0$), but its area is clearly identical to that of the exponential (i.e. the area under the curve must still be 1 - all we did was reflect the shape, and reflection is area-preserving). Clearly, then, densities can be unbounded but have area 1.
33,483
How can a probability distribution diverge?
This is really a calculus question, rather than statistics. You're asking how a function that goes to infinity at some values of its argument can still have a finite area under the curve? It's a valid question. For instance, if instead of the Gamma density you took a hyperbola, $y=1/x$ for $x \in (0,\infty)$, then the area under the curve doesn't converge; it's infinite. So, it's quite miraculous that a weighted sum of very large or even infinite numbers somehow converges to a finite number. The sum is weighted because if you look at the definition of the Riemann integral, it could be a sum like this: $$\int_0^\infty 1/x \, dx=\lim_{n\rightarrow\infty} \sum_{i=0}^n \frac{\Delta x_i}{x_i}$$ So, depending on which points $x_i$ you pick, the weights $\Delta x_i$ could be small or large. When you get closer to 0, $1/x_i$ gets larger, but the $\Delta x_i$ get smaller. In this competition $1/x_i$ wins, and the integral doesn't converge. For the Gamma distribution it happens that the $\Delta x_i$ shrink faster than the Gamma PDF grows, and the area ends up being finite. It's straightforward calculus to see how exactly it converges to 1.
33,484
How can a probability distribution diverge?
Look at the following example. Notice that for any finite $N$, $$ \int_0^N \frac{1}{x} dx = \log(N)-\log(0) $$ but $\log(0)$ is undefined, so the integral is $\infty$ in some sense (this has a limit in there, but ignore it). But $$ \int_0^N \frac{1}{\sqrt{x}} dx = 2\sqrt{N} - 2\sqrt{0} = 2\sqrt{N}. $$ In general, this is based on the idea that $$ \int \frac{1}{x^p} dx = \frac{x^{1-p}}{1-p} \quad (p \neq 1), $$ so if $1-p>0$ the fundamental theorem of calculus tells you the integral is finite. So the idea is that it diverges slowly enough (where $p$ is the speed) that the area is still bounded. This is similar to the convergence of series. Recall that by the p-test we have that $$ \sum_{x=1}^\infty \frac{1}{x^p} $$ converges if and only if $p>1$. In this case we need $x^p \rightarrow \infty $ fast enough, where once again $p$ is the speed and $1$ is the turning point. Why can this be an actual thing? Think about the Koch snowflake. In this example you keep adding to the perimeter of the snowflake in such a way that the area grows slowly. This is due to the fact that if you make an equilateral triangle with sides of size $\frac{1}{3}$, the perimeter is 1 while the area is $\frac{1}{12\sqrt{3}}\sim 0.05$. Since the area is so much smaller than the perimeter (it is the multiplication of two small numbers instead of the addition!) you can choose to add triangles in such a way that the perimeter goes to infinity while the area stays finite. To do so you have to choose a speed at which the triangles go to zero, and as you probably guessed by now, there is a speed where it switches from being too slow and giving infinite area to being fast enough to give finite area. In total, calculus tells us that not all singularities (which is what these "go to infinity" points like zero are) are the same. There are huge differences based on the "local speed" of the singularity. $\Gamma$ simply has a singularity which is "slow enough" that the area is finite. If you want to learn more about why singularities work like this, you can delve into a lot more detail in complex analysis and its study of the singularities of complex analytic functions (of which $\Gamma$ is one).
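A quick numerical check of the convergent case (the divergent $p \geq 1$ case is best left to the analytic argument above, since a numerical quadrature of it cannot converge):

```python
from scipy.integrate import quad

# Improper integral of x^(-p) on (0, 1] for p < 1: finite, with value 1/(1-p).
for p in [0.5, 0.75]:
    val, _ = quad(lambda x, p=p: x**(-p), 0, 1)
    print(p, val, 1 / (1 - p))   # the numerical value matches the analytic 1/(1-p)
```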
33,485
What are the chances my wife has lupus?
An answer that explained why the question cannot be answered in its present form was given but then deleted after you commented on it; I'll try to explain in more detail why there is not enough information to state the desired probability. I want to emphasize that I'm not saying this because I disapprove of the question; as I wrote in a comment, I think it's not for us to decide whether you should be asking this question; it's just that there is simply not enough information, and if you really do want to find the desired probability on your own, you'd need to obtain the missing information. First, we know nothing about the tests that were performed. You can see that the answer must depend on the reliability of the tests by considering the extreme cases: If the test is utterly unreliable and its results bear almost no relation to the actual presence of the disease, then the probability is tiny, namely roughly the same as before the test. If the test is perfectly reliable and never fails, the probability is $1$. That's a huge difference, and the only way to know where between those extremes the actual probability lies is from information about the reliability of the test, which we don't have. Second, all the correlating properties that you list (ethnicity, sex, relatives, ...) may or may not be correlated among each other. That is, lupus might tend to be congenital in men but not in women or vice versa. Without knowing these correlations, one could only give bounds on the probability by making opposite extreme assumptions on the correlations. These bounds might be slightly more useful than the range "between tiny and $1$" due to the reliability issue, but to get a single probability you'd have to know these correlations or at least make reasonable assumptions about them.
What are the chances my wife has lupus?
An answer that explained why the question cannot be answered in its present form was given but then deleted after you commented on it; I'll try to explain in more detail why there is not enough inform
What are the chances my wife has lupus? An answer that explained why the question cannot be answered in its present form was given but then deleted after you commented on it; I'll try to explain in more detail why there is not enough information to state the desired probability. I want to emphasize that I'm not saying this because I disapprove of the question; as I wrote in a comment, I think it's not for us to decide whether you should be asking this question; it's just that there is simply not enough information, and if you really do want to find the desired probability on your own, you'd need to obtain the missing information. First, we know nothing about the tests that were performed. You can see that the answer must depend on the reliability of the tests by considering the extreme cases: If the test is utterly unreliable and its results bear almost no relation to the actual presence of the disease, then the probability is tiny, namely roughly the same as before the test. If the test is perfectly reliable and never fails, the probability is $1$. That's a huge difference, and the only way to know where between those extremes the actual probability lies is from information about the reliability of the test, which we don't have. Second, all the correlating properties that you list (ethnicity, sex, relatives, ...) may or may not be correlated among each other. That is, lupus might tend to be congenital in men but not in women or vice versa. Without knowing these correlations, one could only give bounds on the probability by making opposite extreme assumptions on the correlations. These bounds might be slightly more useful than the range "between tiny and $1$" due to the reliability issue, but to get a single probability you'd have to know these correlations or at least make reasonable assumptions about them.
What are the chances my wife has lupus? An answer that explained why the question cannot be answered in its present form was given but then deleted after you commented on it; I'll try to explain in more detail why there is not enough inform
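To make the "between tiny and $1$" range concrete, here is a minimal Bayes' rule sketch (not part of the original answer; the prevalence, sensitivity and specificity figures below are made-up placeholders, not properties of any real lupus test):

# Posterior probability of disease after a positive test, by Bayes' rule:
# P(D | +) = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
posterior_positive <- function(prev, sens, spec) {
  sens * prev / (sens * prev + (1 - spec) * (1 - prev))
}
prev <- 0.0005                                        # hypothetical prior probability
posterior_positive(prev, sens = 0.55, spec = 0.55)    # nearly useless test: posterior stays tiny
posterior_positive(prev, sens = 0.99, spec = 0.99)    # good test: posterior rises, but is still far from 1
posterior_positive(prev, sens = 1.00, spec = 1.00)    # perfect test: posterior is exactly 1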
33,486
What are the chances my wife has lupus?
My take, as an Epidemiologist: The question really isn't answerable as given, for several reasons: Without very subject specific knowledge, there's no way of knowing if there's effect measure modification between some of those estimates. For example, if you're more likely to have lupus as a woman and more likely to have it as a minority, what happens if you are a female minority member? Are the two independent? Do they interact additively? Multiplicatively? There's a key factor missing: Why is he asking? He's probably not sitting at his desk going "I wonder if she has lupus..." Next week, we're likely not going to get "What is the probability my wife has dengue fever?" There's a reason he thinks this is true, which distorts all those statistics again, as those are population figures, not "Population where a family member suspects you have lupus" figures. The closest thing I could peg as a thing where you could produce a specific value is the "Negative Predictive Value" of the diagnostic test, but a quick googling suggests there are several brands of that particular type of test available, and without more information, you can't really answer it. What should this poor guy be told? To consult with his wife and her doctor. Trying to apply population level statistics to an individual is exactly not what epidemiological evidence is meant to do.
What are the chances my wife has lupus?
My take, as an Epidemiologist: The question really isn't answerable as given, for several reasons: Without very subject specific knowledge, there's no way of knowing if there's effect measure modific
What are the chances my wife has lupus? My take, as an Epidemiologist: The question really isn't answerable as given, for several reasons: Without very subject specific knowledge, there's no way of knowing if there's effect measure modification between some of those estimates. For example, if you're more likely to have lupus as a woman and more likely to have it as a minority, what happens if you are a female minority member? Are the two independent? Do they interact additively? Multiplicatively? There's a key factor missing: Why is he asking? He's probably not sitting at his desk going "I wonder if she has lupus..." Next week, we're likely not going to get "What is the probability my wife has dengue fever?" There's a reason he thinks this is true, which distorts all those statistics again, as those are population figures, not "Population where a family member suspects you have lupus" figures. The closest thing I could peg as a thing where you could produce a specific value is the "Negative Predictive Value" of the diagnostic test, but a quick googling suggests there are several brands of that particular type of test available, and without more information, you can't really answer it. What should this poor guy be told? To consult with his wife and her doctor. Trying to apply population level statistics to an individual is exactly not what epidemiological evidence is meant to do.
What are the chances my wife has lupus? My take, as an Epidemiologist: The question really isn't answerable as given, for several reasons: Without very subject specific knowledge, there's no way of knowing if there's effect measure modific
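For the Negative Predictive Value mentioned above, this is a minimal sketch of the computation one could do if the test characteristics were known (all numbers are hypothetical placeholders, not values for any actual assay):

# NPV = P(no disease | negative test)
npv <- function(prev, sens, spec) {
  spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)
}
npv(prev = 0.0005, sens = 0.90, spec = 0.90)   # for a rare disease the NPV is very close to 1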
33,487
What are the chances my wife has lupus?
There is no way anyone can give the answer to the question you ask based on the information you provided. Most of the information you do provide might, if it was thoroughly completed (you certainly need the correlations between the individual factors you provide), give the a priori probability that a white woman with no sibling etc. would have lupus. That probability could also be computed simply by counting the number of lupus cases among people in that exact category compared to the total number of people in that category (but counting lupus cases is not that simple: your question boils down to asking whether your wife should be included in such a count). However, the key ingredient in the question is that your wife has been diagnosed (maybe incorrectly) with lupus. There is no way in which one can derive the effect this has on the probability. Certainly if the diagnosis is worth anything, it greatly increases the probability above the a priori probability. But to know by how much this affects the probability, one would need to know all the factors the diagnosis is based on (and you supply none), together with a very detailed analysis of how those factors correlate with this disease or other ones. So people are right that you should ask an epidemiologist, who has some chance of knowing the relevant information, unlike the people on this site. And even for an epidemiologist the question is very hard to give a reliable answer to.
What are the chances my wife has lupus?
There is no way anyone can give the answer to the question you ask based on the information you provided. Most of the information you do provide might, if it was thoroughly completed (you certainly need
What are the chances my wife has lupus? There is no way anyone can give the answer to the question you ask based on the information you provided. Most of the information you do provide might, if it was thoroughly completed (you certainly need the correlations between the individual factors you provide), give the a priori probability that a white woman with no sibling etc. would have lupus. That probability could also be computed simply by counting the number of lupus cases among people in that exact category compared to the total number of people in that category (but counting lupus cases is not that simple: your question boils down to asking whether your wife should be included in such a count). However, the key ingredient in the question is that your wife has been diagnosed (maybe incorrectly) with lupus. There is no way in which one can derive the effect this has on the probability. Certainly if the diagnosis is worth anything, it greatly increases the probability above the a priori probability. But to know by how much this affects the probability, one would need to know all the factors the diagnosis is based on (and you supply none), together with a very detailed analysis of how those factors correlate with this disease or other ones. So people are right that you should ask an epidemiologist, who has some chance of knowing the relevant information, unlike the people on this site. And even for an epidemiologist the question is very hard to give a reliable answer to.
What are the chances my wife has lupus? There is no way anyone can give the answer to the question you ask based on the information you provided. Most of the information you do provide might, if it was thoroughly completed (you certainly need
33,488
If a statistic doesn't reveal a significance, do I have to calculate power for it?
The hardline view on post-hoc power calculation is: don't do it as it's pointless. Russ Lenth from the University of Iowa has an article on this topic here (He also has an amusingly facetious Java applet for post-hoc power on his website).
If a statistic doesn't reveal a significance, do I have to calculate power for it?
The hardline view on post-hoc power calculation is: don't do it as it's pointless. Russ Lenth from the University of Iowa has an article on this topic here (He also has an amusingly facetious Java a
If a statistic doesn't reveal a significance, do I have to calculate power for it? The hardline view on post-hoc power calculation is: don't do it as it's pointless. Russ Lenth from the University of Iowa has an article on this topic here (He also has an amusingly facetious Java applet for post-hoc power on his website).
If a statistic doesn't reveal a significance, do I have to calculate power for it? The hardline view on post-hoc power calculation is: don't do it as it's pointless. Russ Lenth from the University of Iowa has an article on this topic here (He also has an amusingly facetious Java a
33,489
If a statistic doesn't reveal a significance, do I have to calculate power for it?
As an aside, Tukey's doesn't depend on the ANOVA results being significant; you can have significant pairwise differences even when the overall ANOVA is not significant. That is to say, if you're going to be doing Tukey-corrected pairwise comparisons, don't bother checking for overall significance first. If you only run the Tukey comparisons after getting a significant overall p-value, you are over-correcting. (I'm confident that this is true with regular ANOVA; it's possible that with repeated measures or non-orthogonality something else happens; anyone care to chime in?) Finally, to agree with Freya but to provide a little more guidance, instead of a post-hoc power test, a more reasonable thing to report would be the confidence intervals; they show exactly how big a difference your experiment could have detected, which is usually what people are after when they want a post-hoc power test anyway.
If a statistic doesn't reveal a significance, do I have to calculate power for it?
As an aside, Tukey's doesn't depend on the ANOVA results being significant; you can have significant pairwise differences even when the overall ANOVA is not significant. That is to say, if you're goin
If a statistic doesn't reveal a significance, do I have to calculate power for it? As an aside, Tukey's doesn't depend on the ANOVA results being significant; you can have significant pairwise differences even when the overall ANOVA is not significant. That is to say, if you're going to be doing Tukey-corrected pairwise comparisons, don't bother checking for overall significance first. If you only run the Tukey comparisons after getting a significant overall p-value, you are over-correcting. (I'm confident that this is true with regular ANOVA; it's possible that with repeated measures or non-orthogonality something else happens; anyone care to chime in?) Finally, to agree with Freya but to provide a little more guidance, instead of a post-hoc power test, a more reasonable thing to report would be the confidence intervals; they show exactly how big a difference your experiment could have detected, which is usually what people are after when they want a post-hoc power test anyway.
If a statistic doesn't reveal a significance, do I have to calculate power for it? As an aside, Tukey's doesn't depend on the ANOVA results being significant; you can have significant pairwise differences even when the overall ANOVA is not significant. That is to say, if you're goin
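A small R sketch of the suggested workflow on simulated data (purely illustrative; the group means and sample sizes are arbitrary). TukeyHSD can be run whether or not the overall F-test comes out significant, and its output already contains the confidence intervals recommended above:

set.seed(1)
# Three simulated groups; TukeyHSD does not require the overall F-test to be significant first.
d <- data.frame(g = factor(rep(c("A", "B", "C"), each = 20)),
                y = rnorm(60, mean = rep(c(0, 0.3, 0.8), each = 20)))
fit <- aov(y ~ g, data = d)
summary(fit)    # overall ANOVA table
TukeyHSD(fit)   # Tukey-corrected pairwise differences with 95% confidence intervals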
33,490
If a statistic doesn't reveal a significance, do I have to calculate power for it?
Another good discussion of the pitfalls of post-hoc power estimation is found in: Gerard, P. D., D. R. Smith, and G. Weerakkody. 1998. Limits of retrospective power analysis. Journal of Wildlife Management 62:801-807 [link].
If a statistic doesn't reveal a significance, do I have to calculate power for it?
Another good discussion of the pitfalls of post-hoc power estimation is found in: Gerard, P. D., D. R. Smith, and G. Weerakkody. 1998. Limits of retrospective power analysis. Journal of Wildlife Ma
If a statistic doesn't reveal a significance, do I have to calculate power for it? Another good discussion of the pitfalls of post-hoc power estimation is found in: Gerard, P. D., D. R. Smith, and G. Weerakkody. 1998. Limits of retrospective power analysis. Journal of Wildlife Management 62:801-807 [link].
If a statistic doesn't reveal a significance, do I have to calculate power for it? Another good discussion of the pitfalls of post-hoc power estimation is found in: Gerard, P. D., D. R. Smith, and G. Weerakkody. 1998. Limits of retrospective power analysis. Journal of Wildlife Ma
33,491
If a statistic doesn't reveal a significance, do I have to calculate power for it?
Most textbooks argue that it is only proper to do a post hoc test such as Tukey's with a significant F. If you chose planned comparisons based on theory, a non-significant F would be okay ... Tukey's is a fairly conservative test that typically won't show significance if F is not significant. What value are you using for the mean square within to calculate Tukey's? The confidence intervals are also supposed to use the mean square within rather than separate variance estimates.
If a statistic doesn't reveal a significance, do I have to calculate power for it?
Most textbooks argue that it is only proper to do a post hoc test such as Tukey's with a significant F. If you chose planned comparisons based on theory, a non-significant F would be okay ... Tukey
If a statistic doesn't reveal a significance, do I have to calculate power for it? Most textbooks argue that it is only proper to do a post hoc test such as Tukey's with a significant F. If you chose planned comparisons based on theory, a non-significant F would be okay ... Tukey's is a fairly conservative test that typically won't show significance if F is not significant. What value are you using for the mean square within to calculate Tukey's? The confidence intervals are also supposed to use the mean square within rather than separate variance estimates.
If a statistic doesn't reveal a significance, do I have to calculate power for it? Most textbooks argue that it is only proper to do a post hoc test such as Tukey's with a significant F. If you chose planned comparisons based on theory, a non-significant F would be okay ... Tukey
33,492
Sign of product of standard normal random variables
As long as $(X,Y)$ is standard bivariate normal with correlation $\rho$, the probability that $XY$ is positive or negative can be found using the well-known result for the positive quadrant probability $$P(X>0,Y>0)=\frac14+\frac1{2\pi}\sin^{-1}\rho \tag{1}$$ (This is likely discussed here before but I cannot quite find the question.) You have \begin{align} P(XY>0)&=P(X>0,Y>0)+P(X<0,Y<0) \\&=P(X>0,Y>0)+P(-X>0,-Y>0) \end{align} Because $(-X,-Y)$ has the same distribution as $(X,Y)$, this probability is just $$P(XY>0)=2P(X>0,Y>0)$$ Similarly, \begin{align} P(XY<0)&=P(X>0,Y<0)+P(X<0,Y>0) \\&=P(X>0,-Y>0)+P(-X>0,Y>0) \end{align} Again, $(X,-Y)$ and $(-X,Y)$ have the same distribution, so $$P(XY<0)=2P(X>0,-Y>0)$$ And since $(X,-Y)$ is bivariate normal with correlation $-\rho$, we have from $(1)$ that $$P(X>0,-Y>0)=\frac14-\frac1{2\pi}\sin^{-1}\rho$$
Sign of product of standard normal random variables
As long as $(X,Y)$ is standard bivariate normal with correlation $\rho$, the probability that $XY$ is positive or negative can be found using the well-known result for the positive quadrant probabilit
Sign of product of standard normal random variables As long as $(X,Y)$ is standard bivariate normal with correlation $\rho$, the probability that $XY$ is positive or negative can be found using the well-known result for the positive quadrant probability $$P(X>0,Y>0)=\frac14+\frac1{2\pi}\sin^{-1}\rho \tag{1}$$ (This is likely discussed here before but I cannot quite find the question.) You have \begin{align} P(XY>0)&=P(X>0,Y>0)+P(X<0,Y<0) \\&=P(X>0,Y>0)+P(-X>0,-Y>0) \end{align} Because $(-X,-Y)$ has the same distribution as $(X,Y)$, this probability is just $$P(XY>0)=2P(X>0,Y>0)$$ Similarly, \begin{align} P(XY<0)&=P(X>0,Y<0)+P(X<0,Y>0) \\&=P(X>0,-Y>0)+P(-X>0,Y>0) \end{align} Again, $(X,-Y)$ and $(-X,Y)$ have the same distribution, so $$P(XY<0)=2P(X>0,-Y>0)$$ And since $(X,-Y)$ is bivariate normal with correlation $-\rho$, we have from $(1)$ that $$P(X>0,-Y>0)=\frac14-\frac1{2\pi}\sin^{-1}\rho$$
Sign of product of standard normal random variables As long as $(X,Y)$ is standard bivariate normal with correlation $\rho$, the probability that $XY$ is positive or negative can be found using the well-known result for the positive quadrant probabilit
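A quick simulation check of these expressions (a sketch; MASS::mvrnorm is used only for convenience and $\rho = 0.3$ is an arbitrary choice):

library(MASS)   # for mvrnorm
set.seed(42)
rho <- 0.3
xy <- mvrnorm(1e6, mu = c(0, 0), Sigma = matrix(c(1, rho, rho, 1), 2))
mean(xy[, 1] * xy[, 2] > 0)   # simulated P(XY > 0)
0.5 + asin(rho) / pi          # closed form: 2 * (1/4 + asin(rho)/(2*pi))
mean(xy[, 1] * xy[, 2] < 0)   # simulated P(XY < 0)
0.5 - asin(rho) / pi          # closed form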
33,493
Sign of product of standard normal random variables
Consider $X,Y\sim N(0,1)$ with correlation $\rho$. Then (Nadarajah & Pogány, 2016; Gaunt, 2018) their product is variance-gamma distributed: $$ XY \sim \text{VG}(1,\rho,\sqrt{1-\rho^2},0). $$ Its PDF is $$ f_{XY}(x) = \frac{1}{\pi\sqrt{1-\rho^2}}\exp\left(\frac{\rho x}{1-\rho^2}\right) K_0\left(\frac{|x|}{1-\rho^2}\right), $$ where $K_0$ is the modified Bessel function of the second kind of order $0$. Thus $$ P(\text{sign}(XY) = -1) = \int_{-\infty}^0 f_{XY}(x)\,dx. $$ You may be able to evaluate this using your favorite computer algebra system. Unfortunately, this exceeds the standard computation time for WolframAlpha. COOLSerdash notes that the integral evaluates to a nice round $\frac{\arccos\rho}{\pi}$. Alternatively, you could map the above parameterization to the one employed in the VarianceGamma package for R and use the functions in there, if all you are interested in is numerical results.
Sign of product of standard normal random variables
Consider $X,Y\sim N(0,1)$ with correlation $\rho$. Then (Nadarajah & Pogány, 2016; Gaunt, 2018) their product is variance-gamma distributed: $$ XY \sim \text{VG}(1,\rho,\sqrt{1-\rho^2},0). $$ Its PDF
Sign of product of standard normal random variables Consider $X,Y\sim N(0,1)$ with correlation $\rho$. Then (Nadarajah & Pogány, 2016; Gaunt, 2018) their product is variance-gamma distributed: $$ XY \sim \text{VG}(1,\rho,\sqrt{1-\rho^2},0). $$ Its PDF is $$ f_{XY}(x) = \frac{1}{\pi\sqrt{1-\rho^2}}\exp\left(\frac{\rho x}{1-\rho^2}\right) K_0\left(\frac{|x|}{1-\rho^2}\right), $$ where $K_0$ is the modified Bessel function of the second kind of order $0$. Thus $$ P(\text{sign}(XY) = -1) = \int_{-\infty}^0 f_{XY}(x)\,dx. $$ You may be able to evaluate this using your favorite computer algebra system. Unfortunately, this exceeds the standard computation time for WolframAlpha. COOLSerdash notes that the integral evaluates to a nice round $\frac{\arccos\rho}{\pi}$. Alternatively, you could map the above parameterization to the one employed in the VarianceGamma package for R and use the functions in there, if all you are interested in is numerical results.
Sign of product of standard normal random variables Consider $X,Y\sim N(0,1)$ with correlation $\rho$. Then (Nadarajah & Pogány, 2016; Gaunt, 2018) their product is variance-gamma distributed: $$ XY \sim \text{VG}(1,\rho,\sqrt{1-\rho^2},0). $$ Its PDF
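Rather than a CAS, the integral can also be evaluated numerically straight from the density above using base R's integrate and besselK (a sketch with an arbitrary $\rho$), and it agrees with the $\frac{\arccos\rho}{\pi}$ expression:

rho <- 0.3
# Density of the product of standard bivariate normals with correlation rho
f_xy <- function(x) {
  exp(rho * x / (1 - rho^2)) * besselK(abs(x) / (1 - rho^2), nu = 0) / (pi * sqrt(1 - rho^2))
}
integrate(f_xy, lower = -Inf, upper = 0)$value   # P(XY < 0) by numerical integration
acos(rho) / pi                                   # closed form noted by COOLSerdash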
33,494
Sign of product of standard normal random variables
The correlation alone is not enough to be able to derive the probability distribution for the sign. See the example below for a case where $\rho = 2/\pi$ and the sign of $XY$ is always positive.
Sign of product of standard normal random variables
The correlation alone is not enough to be able to derive the probability distribution for the sign. See the example below for a case where $\rho = 2/\pi$ and the sign of $XY$ is always positive.
Sign of product of standard normal random variables The correlation alone is not enough to be able to derive the probability distribution for the sign. See the example below for a case where $\rho = 2/\pi$ and the sign of $XY$ is always positive.
Sign of product of standard normal random variables The correlation alone is not enough to be able to derive the probability distribution for the sign. See the example below for a case where $\rho = 2/\pi$ and the sign of $XY$ is always positive.
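The example referred to above is not reproduced in this thread; one construction with exactly this property (a sketch of my own, not necessarily the one the author had in mind) uses a single shared random sign: let $U, V$ be iid $N(0,1)$, let $S = \pm 1$ with probability $1/2$ independently of them, and set $X = S|U|$, $Y = S|V|$. Both marginals are standard normal, $\operatorname{cor}(X,Y) = E|U|\,E|V| = 2/\pi$, and yet $XY = |U||V| > 0$ with probability $1$, because $(X,Y)$ is not jointly bivariate normal.

set.seed(7)
n <- 1e6
u <- rnorm(n); v <- rnorm(n)
s <- sample(c(-1, 1), n, replace = TRUE)   # one shared sign for both coordinates
x <- s * abs(u); y <- s * abs(v)
c(mean(x), sd(x), mean(y), sd(y))          # both marginals look standard normal
cor(x, y); 2 / pi                          # correlation matches 2/pi
mean(x * y > 0)                            # the product is positive with probability 1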
33,495
Equilibrium distribution of Markov chain
Due to the sum-to-one criterion and $w_4=2w_3,$ $a$ and $b$ also have to be chosen to satisfy this condition: $$1-2a-b=2b$$ I think you missed this condition; that is, knowing $a$ completely determines $b$. Also, we need each component to be nonnegative. To recover your first solution, you can let $a=\frac12, b=0$. To get your second solution, you can let $a=0,b=\frac13 $. The general solution is the convex hull of the solutions for the two classes: $$\alpha \left(\frac12, \frac12, 0, 0\right) + (1-\alpha)\left( 0,0, \frac13, \frac23\right)$$ where $0 \le \alpha \le 1$.
Equilibrium distribution of Markov chain
Due to the sum-to-one criterion and $w_4=2w_3,$ $a$ and $b$ also have to be chosen to satisfy this condition: $$1-2a-b=2b$$ I think you missed this condition; that is, knowing $a$ completely
Equilibrium distribution of Markov chain Due to the sum-to-one criterion and $w_4=2w_3,$ $a$ and $b$ also have to be chosen to satisfy this condition: $$1-2a-b=2b$$ I think you missed this condition; that is, knowing $a$ completely determines $b$. Also, we need each component to be nonnegative. To recover your first solution, you can let $a=\frac12, b=0$. To get your second solution, you can let $a=0,b=\frac13 $. The general solution is the convex hull of the solutions for the two classes: $$\alpha \left(\frac12, \frac12, 0, 0\right) + (1-\alpha)\left( 0,0, \frac13, \frac23\right)$$ where $0 \le \alpha \le 1$.
Equilibrium distribution of Markov chain Due to the sum-to-one criterion and $w_4=2w_3,$ $a$ and $b$ also have to be chosen to satisfy this condition: $$1-2a-b=2b$$ I think you missed this condition; that is, knowing $a$ completely
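The transition matrix itself is not shown in this thread; purely as an illustration, here is one matrix whose two closed classes $\{1,2\}$ and $\{3,4\}$ have within-class stationary vectors $(\frac12,\frac12)$ and $(\frac13,\frac23)$, together with a check that every convex combination above satisfies $wP = w$ (a sketch under that assumption, not necessarily the matrix from the original question):

# Hypothetical transition matrix consistent with the conditions in the answer
P <- matrix(c(0.5, 0.5, 0,   0,
              0.5, 0.5, 0,   0,
              0,   0,   0,   1,
              0,   0,   0.5, 0.5),
            nrow = 4, byrow = TRUE)
w <- function(alpha) alpha * c(1/2, 1/2, 0, 0) + (1 - alpha) * c(0, 0, 1/3, 2/3)
for (a in c(0, 0.25, 0.5, 1)) {
  print(max(abs(w(a) %*% P - w(a))))   # numerically zero for every alpha
}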
33,496
Equilibrium distribution of Markov chain
This Markov Chain is not irreducible and is therefore not ergodic. That is the reason why there is no unique equilibrium distribution. More specifically: nonergodicity entails that the equilibrium distribution depends on the distribution of the initial state $X_0$ of this chain. This chain has two irreducible classes, {1,2} consisting of states 1 and 2, and {3,4} consisting of states 3 and 4. In the answer of @Siong Thye Goh, the parameter $\alpha$ has a precise interpretation, namely: $$ \alpha = \text{Prob}[X_0 \in \{1,2\}] $$
Equilibrium distribution of Markov chain
This Markov Chain is not irreducible and is therefore not ergodic. That is the reason why there is no unique equilibrium distribution. More specifically: nonergodicity entails that the equilibrium dis
Equilibrium distribution of Markov chain This Markov Chain is not irreducible and is therefore not ergodic. That is the reason why there is no unique equilibrium distribution. More specifically: nonergodicity entails that the equilibrium distribution depends on the distribution of the initial state $X_0$ of this chain. This chain has two irreducible classes, {1,2} consisting of states 1 and 2, and {3,4} consisting of states 3 and 4. In the answer of @Siong Thye Goh, the parameter $\alpha$ has a precise interpretation, namely: $$ \alpha = \text{Prob}[X_0 \in \{1,2\}] $$
Equilibrium distribution of Markov chain This Markov Chain is not irreducible and is therefore not ergodic. That is the reason why there is no unique equilibrium distribution. More specifically: nonergodicity entails that the equilibrium dis
33,497
Equilibrium distribution of Markov chain
We have 3 restricting conditions on $w_{1}$, $w_{2}$, $w_{3}$ and $w_{4}$: (1) $w_{1}=w_{2}$, (2) $w_{4}=2w_{3}$, and (3) $w_{1}+w_{2}+w_{3}+w_{4}=1$. The general solution of (1) is $[w_{1},w_{2}] = w_{1}\times[1,1]$. The general solution of (2) is $[w_{3},w_{4}] = w_{3}\times[1,2]$. Finally from (3), we have $w_{1}+w_{2}+w_{3}+w_{4}=2w_{1} + 3w_{3}=1$. This last equation, $2w_{1} + 3w_{3}=1$, gives the general solution. To satisfy this equation, obviously, $w_{1}\leq1/2$ and $w_{3}\leq1/3$. For example, $w_{1}=1/4$ and $w_{3}=1/6$. Therefore, $w_{1}$ and $w_{3}$ can be expressed as $w_{1}=1/2\times \alpha$ and $w_{3}=1/3 \times \beta$ respectively, where $0\leq\alpha\leq1$ and $0\leq\beta \leq1$. Substituting into the equation above, we get $\beta = 1-\alpha$.
Equilibrium distribution of Markov chain
We have 3 restricting conditions on $w_{1}$, $w_{2}$, $w_{3}$ and $w_{4}$: (1) $w_{1}=w_{2}$, (2) $w_{4}=2w_{3}$, and (3) $w_{1}+w_{2}+w_{3}+w_{4}=1$. The general solution of (1) is $[w_{1},w_{2}] = w
Equilibrium distribution of Markov chain We have 3 restricting conditions on $w_{1}$, $w_{2}$, $w_{3}$ and $w_{4}$: (1) $w_{1}=w_{2}$, (2) $w_{4}=2w_{3}$, and (3) $w_{1}+w_{2}+w_{3}+w_{4}=1$. The general solution of (1) is $[w_{1},w_{2}] = w_{1}\times[1,1]$. The general solution of (2) is $[w_{3},w_{4}] = w_{3}\times[1,2]$. Finally from (3), we have $w_{1}+w_{2}+w_{3}+w_{4}=2w_{1} + 3w_{3}=1$. This last equation, $2w_{1} + 3w_{3}=1$, gives the general solution. To satisfy this equation, obviously, $w_{1}\leq1/2$ and $w_{3}\leq1/3$. For example, $w_{1}=1/4$ and $w_{3}=1/6$. Therefore, $w_{1}$ and $w_{3}$ can be expressed as $w_{1}=1/2\times \alpha$ and $w_{3}=1/3 \times \beta$ respectively, where $0\leq\alpha\leq1$ and $0\leq\beta \leq1$. Substituting into the equation above, we get $\beta = 1-\alpha$.
Equilibrium distribution of Markov chain We have 3 restricting conditions on $w_{1}$, $w_{2}$, $w_{3}$ and $w_{4}$: (1) $w_{1}=w_{2}$, (2) $w_{4}=2w_{3}$, and (3) $w_{1}+w_{2}+w_{3}+w_{4}=1$. The general solution of (1) is $[w_{1},w_{2}] = w
33,498
Can K-fold cross validation cause overfitting?
K-fold cross validation is a standard technique to detect overfitting. It cannot "cause" overfitting in the sense of causality. However, there is no guarantee that k-fold cross-validation removes overfitting. People use it as a magic cure for overfitting, but it isn't one. It may not be enough. The proper way to apply cross-validation is as a method to detect overfitting. If you do CV, and if there is a big difference between the test and the training error, then you know you are overfitting and need to get more diverse data or choose simpler models and stronger regularization. The contrary does not hold: no big difference between test and train error does not mean you haven't been overfitting. It's not a magic cure, but the best method to detect overfitting we have (when used right). Some examples of when cross-validation can fail: (a) the data is ordered and not shuffled prior to splitting; (b) unbalanced data (try stratified cross-validation); (c) duplicates in different folds; (d) natural groups (e.g., data from the same user) shuffled into multiple folds. There are other cases where it cannot detect information leakage and overfitting even when used perfectly right. For example, when analyzing time series, people like to standardize the data, split it into past and future data, then train a model to predict the future development of the series (stock prices, say). The subtle information leakage is in the preprocessing: standardizing before the temporal split leaks information about the future data (via the overall mean and variance) into the training portion. Similar leaks can occur in other preprocessing. In outlier detection, if you scale the data to [0, 1], a model can learn that values close to 0 and 1 are the most extreme values it will ever observe, etc. Back to your question: Since each fold will be used to train the model (in $k$ iterations), won't that cause overfitting? No. Each fold is used to train a new model from scratch, estimate its accuracy, and then the model is discarded. You don't use any of the models trained during CV. You use validation (such as CV) for two purposes: (1) Estimate how well your model will (hopefully) work in practice when you deploy it, without risking a real A/B test in production yet. You only want to go live with models that are expected to work better than your current approach, or this may cost your company millions. (2) Find the "best" parameters for training your final model (which you want to train on the entire training data). Tuning hyperparameters is where you have a high risk of overfitting if you are not careful. CV is not a way of "training" a model by feeding it 10 batches of data.
Can K-fold cross validation cause overfitting?
K-fold cross validation is a standard technique to detect overfitting. It cannot "cause" overfitting in the sense of causality. However, there is no guarantee that k-fold cross-validation removes over
Can K-fold cross validation cause overfitting? K-fold cross validation is a standard technique to detect overfitting. It cannot "cause" overfitting in the sense of causality. However, there is no guarantee that k-fold cross-validation removes overfitting. People use it as a magic cure for overfitting, but it isn't one. It may not be enough. The proper way to apply cross-validation is as a method to detect overfitting. If you do CV, and if there is a big difference between the test and the training error, then you know you are overfitting and need to get more diverse data or choose simpler models and stronger regularization. The contrary does not hold: no big difference between test and train error does not mean you haven't been overfitting. It's not a magic cure, but the best method to detect overfitting we have (when used right). Some examples of when cross-validation can fail: (a) the data is ordered and not shuffled prior to splitting; (b) unbalanced data (try stratified cross-validation); (c) duplicates in different folds; (d) natural groups (e.g., data from the same user) shuffled into multiple folds. There are other cases where it cannot detect information leakage and overfitting even when used perfectly right. For example, when analyzing time series, people like to standardize the data, split it into past and future data, then train a model to predict the future development of the series (stock prices, say). The subtle information leakage is in the preprocessing: standardizing before the temporal split leaks information about the future data (via the overall mean and variance) into the training portion. Similar leaks can occur in other preprocessing. In outlier detection, if you scale the data to [0, 1], a model can learn that values close to 0 and 1 are the most extreme values it will ever observe, etc. Back to your question: Since each fold will be used to train the model (in $k$ iterations), won't that cause overfitting? No. Each fold is used to train a new model from scratch, estimate its accuracy, and then the model is discarded. You don't use any of the models trained during CV. You use validation (such as CV) for two purposes: (1) Estimate how well your model will (hopefully) work in practice when you deploy it, without risking a real A/B test in production yet. You only want to go live with models that are expected to work better than your current approach, or this may cost your company millions. (2) Find the "best" parameters for training your final model (which you want to train on the entire training data). Tuning hyperparameters is where you have a high risk of overfitting if you are not careful. CV is not a way of "training" a model by feeding it 10 batches of data.
Can K-fold cross validation cause overfitting? K-fold cross validation is a standard technique to detect overfitting. It cannot "cause" overfitting in the sense of causality. However, there is no guarantee that k-fold cross-validation removes over
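A minimal sketch of CV used as a detector in the way described above: compare the average training error with the average held-out error across folds (simulated data and a deliberately over-flexible model; all choices are arbitrary):

set.seed(123)
n <- 100
x <- runif(n); y <- sin(2 * pi * x) + rnorm(n, sd = 0.3)
d <- data.frame(x = x, y = y)
k <- 10
fold <- sample(rep(1:k, length.out = n))    # shuffle before assigning folds
rmse <- function(obs, pred) sqrt(mean((obs - pred)^2))
train_err <- test_err <- numeric(k)
for (i in 1:k) {
  fit <- lm(y ~ poly(x, 15), data = d[fold != i, ])   # overly flexible model
  train_err[i] <- rmse(d$y[fold != i], predict(fit))
  test_err[i]  <- rmse(d$y[fold == i], predict(fit, newdata = d[fold == i, ]))
}
c(train = mean(train_err), test = mean(test_err))     # a large gap signals overfitting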
33,499
Can K-fold cross validation cause overfitting?
On the contrary, cross-validation is a good way to combat overfitting! Why $k$-fold CV? Suppose you have a model and you want an estimate of its out-of-sample performance... You could assess the prediction error on the same data used to fit the model (i.e. the training error), but this is obviously not a good indicator of out-of-sample performance. If the model is indeed overfitting, it will perform poorly on new observations, but you will still observe a low training error. Alternatively, you could split your data into two parts (train/test) and only use the train set to fit the model. The rest of the data, never seen by the model in any way, is then used to get an estimate of the out-of-sample performance. Great! But what if we had used a different split? As it turns out the variance between results obtained from different splits can be quite large... so large in fact, that data splitting is only reliable for really large $n$. This is what $k$-fold CV attempts to tackle, by doing the following repeatedly: Fit your model with $n - \frac{n}{k}$ observations; Observe its performance on the remaining $\frac{n}{k}$ observations, which were not used to fit your model. You repeat this process $k$ times, each time leaving out the next $\frac{n}{k}$ observations for testing, until all observations have been used once as a test set. You then sum the errors on the test set of each fold (or compute a weighted average), and you have an estimate of out-of-sample performance that is less sensitive to the particular splits used, because there are now $k$ of them.$^\dagger$ Can this cause overfitting? Now to answer your question: Since each fold will be used to train the model (in $k$ iterations), won't that cause overfitting? Each fold is indeed used to train the same model... from scratch. So while there is indeed overlap between training sets, and thus you are indeed fitting models on (partially) the same data multiple times, you are not reusing the data to update your estimates! If your model overfits in a particular fold, then the training error of that fold will be lower than the testing error of that fold. Hence, when summing/averaging the errors of all folds, a model that overfits would have low cross-validated performance. $\dagger$: Even better, if you can afford it computationally, is to repeat $k$-fold CV multiple times.
Can K-fold cross validation cause overfitting?
On the contrary, cross-validation is a good way to combat overfitting! Why $k$-fold CV? Suppose you have a model and you want an estimate of its out-of-sample performance... You could assess the pre
Can K-fold cross validation cause overfitting? On the contrary, cross-validation is a good way to combat overfitting! Why $k$-fold CV? Suppose you have a model and you want an estimate of its out-of-sample performance... You could assess the prediction error on the same data used to fit the model (i.e. the training error), but this is obviously not a good indicator of out-of-sample performance. If the model is indeed overfitting, it will perform poorly on new observations, but you will still observe a low training error. Alternatively, you could split your data into two parts (train/test) and only use the train set to fit the model. The rest of the data, never seen by the model in any way, is then used to get an estimate of the out-of-sample performance. Great! But what if we had used a different split? As it turns out the variance between results obtained from different splits can be quite large... so large in fact, that data splitting is only reliable for really large $n$. This is what $k$-fold CV attempts to tackle, by doing the following repeatedly: Fit your model with $n - \frac{n}{k}$ observations; Observe its performance on the remaining $\frac{n}{k}$ observations, which were not used to fit your model. You repeat this process $k$ times, each time leaving out the next $\frac{n}{k}$ observations for testing, until all observations have been used once as a test set. You then sum the errors on the test set of each fold (or compute a weighted average), and you have an estimate of out-of-sample performance that is less sensitive to the particular splits used, because there are now $k$ of them.$^\dagger$ Can this cause overfitting? Now to answer your question: Since each fold will be used to train the model (in $k$ iterations), won't that cause overfitting? Each fold is indeed used to train the same model... from scratch. So while there is indeed overlap between training sets, and thus you are indeed fitting models on (partially) the same data multiple times, you are not reusing the data to update your estimates! If your model overfits in a particular fold, then the training error of that fold will be lower than the testing error of that fold. Hence, when summing/averaging the errors of all folds, a model that overfits would have low cross-validated performance. $\dagger$: Even better, if you can afford it computationally, is to repeat $k$-fold CV multiple times.
Can K-fold cross validation cause overfitting? On the contrary, cross-validation is a good way to combat overfitting! Why $k$-fold CV? Suppose you have a model and you want an estimate of its out-of-sample performance... You could assess the pre
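To illustrate the footnote about split-to-split variability, here is a sketch that repeats $k$-fold CV several times with fresh random splits and looks at the spread of the resulting estimates (simulated data; the model, $k$ and the number of repeats are arbitrary choices):

set.seed(321)
n <- 80
x <- runif(n); y <- 2 * x + rnorm(n, sd = 0.5)
d <- data.frame(x = x, y = y)
cv_error <- function(data, k = 5) {
  fold <- sample(rep(1:k, length.out = nrow(data)))
  errs <- sapply(1:k, function(i) {
    fit <- lm(y ~ x, data = data[fold != i, ])
    mean((data$y[fold == i] - predict(fit, newdata = data[fold == i, ]))^2)
  })
  mean(errs)
}
estimates <- replicate(20, cv_error(d))        # 20 repeats of 5-fold CV, each with a new split
c(mean = mean(estimates), sd = sd(estimates))  # averaging repeats smooths out split-to-split variance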
33,500
Does the posterior necessarily follow the same conditional dependence structure as the prior?
Your question can also be stated as: "$X$ is dependent on $a$ and $b$. And $a$ and $b$ are independent. Does this imply that $a$ and $b$ are conditionally independent given $X$?" The answer is no. We just need a counter-example to show it isn't the case. Suppose $X = a + b$. Then, once we know $X$'s value, $a$ and $b$ are dependent (information about one tells us what the other will be). For example, suppose $X=5$. Then, if $a=3$, it tells us that $b=2$. Similarly, if $b=4$, it tells us that $a=1$.
Does the posterior necessarily follow the same conditional dependence structure as the prior?
Your question can also be stated as: "$X$ is dependent on $a$ and $b$. And $a$ and $b$ are independent. Does this imply that $a$ and $b$ are conditionally independent given $X$?" The answer is no. We
Does the posterior necessarily follow the same conditional dependence structure as the prior? Your question can also be stated as: "$X$ is dependent on $a$ and $b$. And $a$ and $b$ are independent. Does this imply that $a$ and $b$ are conditionally independent given $X$?" The answer is no. We just need a counter-example to show it isn't the case. Suppose $X = a + b$. Then, once we know $X$'s value, $a$ and $b$ are dependent (information about one tells us what the other will be). For example, suppose $X=5$. Then, if $a=3$, it tells us that $b=2$. Similarly, if $b=4$, it tells us that $a=1$.
Does the posterior necessarily follow the same conditional dependence structure as the prior? Your question can also be stated as: "$X$ is dependent on $a$ and $b$. And $a$ and $b$ are independent. Does this imply that $a$ and $b$ are conditionally independent given $X$?" The answer is no. We
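A quick simulation of this "explaining away" effect (a sketch; conditioning on a narrow window around a value of $X$ is a crude stand-in for conditioning on $X$ exactly):

set.seed(99)
n <- 1e6
a <- rnorm(n); b <- rnorm(n)   # independent a priori
x <- a + b
cor(a, b)                      # approximately 0: unconditionally independent
keep <- abs(x - 1) < 0.05      # condition on X being (approximately) 1
cor(a[keep], b[keep])          # approximately -1: given X, knowing a pins down b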