16,201
How to keep time invariant variables in a fixed effects model
The Mundlak–Chamberlain device is a perfect tool for this. It is usually referred to as the correlated random effects model because it uses the random-effects model to implicitly estimate fixed effects for the time-varying variables while also estimating effects for the time-invariant variables. In statistical software, you implement it the same way as the random-effects model, but you have to add the means of all time-varying covariates.
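A minimal sketch of the idea on simulated panel data (all variable names here — id, x, z, y — are made up for illustration, and pooled OLS stands in for the random-effects GLS fit you would use in practice):

```r
# Mundlak device: adding the unit means of the time-varying covariate makes
# the coefficient on x reproduce the within (fixed-effects) estimate, while
# the coefficient on the time-invariant z is still identified.
set.seed(1)
n <- 100; t <- 5
id <- rep(1:n, each = t)
a  <- rnorm(n)[id]               # unit effect, correlated with x below
x  <- 0.5 * a + rnorm(n * t)     # time-varying covariate
z  <- rnorm(n)[id]               # time-invariant covariate
y  <- 1 + 2 * x + 1.5 * z + a + rnorm(n * t)

xbar <- ave(x, id)               # unit means of the time-varying covariate
cre  <- lm(y ~ x + z + xbar)     # pooled OLS as a stand-in for RE-GLS
coef(cre)[c("x", "z")]           # x recovers the within estimate (about 2)
```

By the Frisch–Waugh–Lovell logic, including `xbar` absorbs the between-unit variation in `x`, so its coefficient equals the within estimator exactly, even though `z` would be dropped by an ordinary fixed-effects fit.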
16,202
Flat, conjugate, and hyper- priors. What are they?
Simply put, a flat/non-informative prior is used when one has little/no knowledge about the data and hence it has the least effect on outcomes of your analysis (i.e. posterior inference). Conjugate distributions are those whose prior and posterior distributions belong to the same family, and the prior is then called the conjugate prior. It is favoured for its algebraic convenience, especially when the likelihood belongs to the exponential family (Gaussian, Beta, etc.). This is hugely beneficial when carrying out posterior simulations using Gibbs sampling. And finally, imagine that a prior distribution is set on a parameter in your model, but you want to add another level of complexity/uncertainty. You would then impose a prior distribution on the parameters of the aforementioned prior, hence the name hyper-prior. I think Gelman's Bayesian Data Analysis is a great start for anyone who's interested in learning Bayesian statistics :)
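To make the conjugacy point concrete, here is the standard Beta–Binomial case, where the posterior comes out in closed form with no simulation at all:

```r
# Beta-Binomial conjugacy: a Beta(a, b) prior on a success probability,
# combined with k successes in n Bernoulli trials, gives a
# Beta(a + k, b + n - k) posterior -- same family, updated parameters.
a <- 2; b <- 2          # prior (Beta(1, 1) would be the flat choice)
k <- 7; n <- 10         # observed data
post_a <- a + k
post_b <- b + n - k
post_mean <- post_a / (post_a + post_b)   # 9 / 14
```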
16,203
Flat, conjugate, and hyper- priors. What are they?
At the highest level, we can think of all manner of priors as specifying some amount of information that the researcher brings to bear on the analysis outside of the data itself: before looking at the data, which values of parameters are more likely? In the dark ages of Bayesian analysis, when the Bayesians were fighting it out with frequentists, there was a belief that the researcher would want to introduce as little information to the analysis via the prior as possible. So there was a lot of research and argument devoted to understanding how, precisely, a prior could be "non-informative" in this way. Today, Gelman argues against the automatic choice of non-informative priors, saying in Bayesian Data Analysis that the description "non-informative" reflects his attitude towards the prior, rather than any "special" mathematical features of the prior. (Moreover, there was a question in the early literature of at what scale a prior is noninformative. I don't think that this is especially important to your question, but for a good example of this argument from a frequentist perspective, see the beginning of Gary King, Unifying Political Methodology.) A "flat" prior indicates a uniform prior where all values in the range are equally likely. Again, there are arguments to be had about whether these are truly non-informative, since specifying that all values are equally likely is, in some way, information, and may be sensitive to how the model is parameterized. Flat priors have a long history in Bayesian analysis, stretching back to Bayes and Laplace. A "vague" prior is highly diffuse though not necessarily flat, and it expresses that a large range of values are plausible, rather than concentrating the probability mass around a specific range. Essentially, it is a prior with high variance (whatever "high" variance means in your context).
Conjugate priors have the convenient feature that, when multiplied by the appropriate likelihood, they produce a posterior distribution with a “nice” expression. (There is some nuance here; see Do conjugate priors just lead to a posterior that is a modification of the parameters of the prior?) One example of this is the beta prior with the binomial likelihood, or the gamma prior with the Poisson likelihood. There are helpful tables of these all over the Internet and Wikipedia. The exponential family is extremely convenient in this regard. Conjugate priors are often the "default" choice for some problems because of their convenient properties, but this does not necessarily mean that they are the "best" unless one's prior knowledge can be expressed via the conjugate prior. Advances in computation mean that conjugacy is not as prized as it once was (cf Gibbs sampling vs NUTS), so we can more easily perform inference with non-conjugate priors without much trouble. Hyper-priors are priors on the prior. This means that rather than specifying, say, a $N(\mu,\sigma^2)$ prior on a parameter with fixed $\mu$ and $\sigma^2$, you might express a prior on the parameter $\mu$ and a prior on the parameter $\sigma^2$. Most often, this is used in hierarchical modeling, when you believe that there is a common feature to all of the data points in question (say, because you are performing a statistical analysis on replications of the same experiment), and that variation in the data is explained as being caused by random assignment of parameters from this common distribution to the data points.
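A quick way to see what a hyper-prior does is to simulate the implied marginal prior. In this sketch (parameter values chosen purely for illustration), fixing $\mu$ in a $N(\mu, 1)$ prior is compared with drawing $\mu$ from a $N(0, 2)$ hyper-prior:

```r
# With a hyper-prior on mu, the marginal prior on theta integrates over the
# uncertainty in mu, so its variance widens: 2^2 + 1^2 = 5 instead of 1.
set.seed(2)
mu    <- rnorm(1e5, 0, 2)     # hyper-prior on the prior's mean
theta <- rnorm(1e5, mu, 1)    # prior on theta, given mu
c(fixed_mu_var = 1, marginal_var = var(theta))
```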
16,204
Classic linear model - model selection
The problem began when you sought a reduced model and used the data rather than subject matter knowledge to pick the predictors. Stepwise variable selection without simultaneous shrinkage to penalize for variable selection, though often used, is an invalid approach. Much has been written about this. There is no reason to trust that the 3-variable model is "best" and there is no reason not to use the original list of pre-specified predictors. P-values computed after using P-values to select variables are not valid. This has been called "double dipping" in the functional imaging literature. Here is an analogy. Suppose one is interested in comparing 6 treatments, but uses pairwise t-tests to pick which treatments are "different", resulting in a reduced set of 4 treatments. The analyst then tests for an overall difference with 3 degrees of freedom. This F test will have inflated type I error. The original F test with 5 d.f. is quite valid. See http://www.stata.com/support/faqs/stat/stepwise.html and stepwise-regression for more information.
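A small simulation, in the spirit of the treatment analogy, shows the inflation directly (the screening threshold of 0.25 is an arbitrary choice for illustration):

```r
# Double dipping: screen 10 pure-noise predictors by univariate p-value,
# then run an overall F test on the survivors. The nominal 5% test rejects
# far more often than 5%, because the same data did the selecting.
set.seed(3)
reject <- replicate(500, {
  X <- matrix(rnorm(50 * 10), 50, 10)
  y <- rnorm(50)
  p <- apply(X, 2, function(x) summary(lm(y ~ x))$coefficients[2, 4])
  keep <- which(p < 0.25)                 # screening step
  if (length(keep) == 0) return(FALSE)
  f <- summary(lm(y ~ X[, keep]))$fstatistic
  pf(f[1], f[2], f[3], lower.tail = FALSE) < 0.05
})
mean(reject)    # typically far above the nominal 0.05
```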
16,205
Classic linear model - model selection
One answer would be "this cannot be done without subject matter knowledge". Unfortunately, that would likely get you an F on your assignment. Unless I were your professor. Then it would get an A. But, given your statement that $R^2$ is 0.03 and there are low correlations among all variables, I'm puzzled that any model is significant at all. What is N? I'm guessing it's very large. Then there's your statement that all 5 predictors are generated by independent simulations from a normal distribution. Well, if you KNOW this (that is, your instructor told you) and if by "independent" you mean "not related to the DV" then you know that the best model is one with no predictors, and your intuition is correct.
16,206
Classic linear model - model selection
You might try doing cross validation. Choose a subset of your sample, find the "best" model for that subset using F or t tests, then apply it to the full data set (full cross validation can get more complicated than this, but this would be a good start). This helps to alleviate some of the stepwise testing problems. See A Note on Screening Regression Equations by David Freedman for a cute little simulation of this idea.
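A bare-bones version of the split-sample idea might look like this (a single 50/50 split rather than full cross-validation; variable names are made up):

```r
# Select on one half of the data, then fit and test the chosen model on the
# other half. The predictors here are pure noise, so usually little or
# nothing survives the honest second-stage test.
set.seed(4)
d <- data.frame(matrix(rnorm(100 * 5), 100, 5))
d$y <- rnorm(100)
train <- d[1:50, ]; test <- d[51:100, ]

p <- summary(lm(y ~ ., data = train))$coefficients[-1, 4]
keep <- names(p)[p < 0.05]            # "best" variables on the training half
if (length(keep) > 0) {
  honest <- lm(reformulate(keep, "y"), data = test)
  summary(honest)                     # these p-values are not double-dipped
}
```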
16,207
Classic linear model - model selection
I really like the method used in the caret package: recursive feature elimination. You can read more about it in the vignette, but here's the basic process: use a criterion (such as the t statistic) to eliminate unimportant variables and see how that improves the predictive accuracy of the model. You wrap the entire thing in a resampling loop, such as cross-validation. Here is an example, using a linear model to rank variables in a manner similar to what you've described:

    # Setup
    set.seed(1)
    p1 <- rnorm(50)
    p2 <- rnorm(50)
    p3 <- rnorm(50)
    p4 <- rnorm(50)
    p5 <- rnorm(50)
    y <- 4*rnorm(50) + p1 + p2 - p5

    # Select variables
    require(caret)
    X <- data.frame(p1, p2, p3, p4, p5)
    RFE <- rfe(X, y, sizes = seq(1, 5),
               rfeControl = rfeControl(functions = lmFuncs,
                                       method = "repeatedcv"))
    RFE
    plot(RFE)

    # Fit linear model and compare
    fmla <- as.formula(paste("y ~ ", paste(RFE$optVariables, collapse = "+")))
    fullmodel <- lm(y ~ p1 + p2 + p3 + p4 + p5, data.frame(y, p1, p2, p3, p4, p5))
    reducedmodel <- lm(fmla, data.frame(y, p1, p2, p3, p4, p5))
    summary(fullmodel)
    summary(reducedmodel)

In this example, the algorithm detects that there are 3 "important" variables, but it only gets 2 of them.
16,208
Aggregating results from linear model runs R
Plot them! http://svn.cluelessresearch.com/tables2graphs/longley.png Or, if you must, use tables: the apsrtable package or the mtable function in the memisc package. Using mtable:

    mtable123 <- mtable("Model 1" = lm1, "Model 2" = lm2, "Model 3" = lm3,
                        summary.stats = c("sigma", "R-squared", "F", "p", "N"))
    > mtable123

    Calls:
    Model 1: lm(formula = weight ~ group)
    Model 2: lm(formula = weight ~ group - 1)
    Model 3: lm(formula = log(weight) ~ group - 1)

    =============================================
                     Model 1   Model 2   Model 3
    ---------------------------------------------
    (Intercept)      5.032***
                    (0.220)
    group: Trt/Ctl  -0.371
                    (0.311)
    group: Ctl                 5.032***  1.610***
                              (0.220)   (0.045)
    group: Trt                 4.661***  1.527***
                              (0.220)   (0.045)
    ---------------------------------------------
    sigma            0.696     0.696     0.143
    R-squared        0.073     0.982     0.993
    F                1.419   485.051  1200.388
    p                0.249     0.000     0.000
    N               20        20        20
    =============================================
16,209
Aggregating results from linear model runs R
The following doesn't answer the question exactly. It may give you some ideas, though. It's something I recently did in order to assess the fit of several regression models using one to four independent variables (the dependent variable was in the first column of the df1 dataframe).

    # create the combinations of the 4 independent variables
    library(foreach)
    xcomb <- foreach(i = 1:4, .combine = c) %do% {
      combn(names(df1)[-1], i, simplify = FALSE)
    }

    # create formulas
    formlist <- lapply(xcomb, function(l)
      formula(paste(names(df1)[1], paste(l, collapse = "+"), sep = "~")))

The contents of as.character(formlist) was:

     [1] "price ~ sqft"                     "price ~ age"
     [3] "price ~ feats"                    "price ~ tax"
     [5] "price ~ sqft + age"               "price ~ sqft + feats"
     [7] "price ~ sqft + tax"               "price ~ age + feats"
     [9] "price ~ age + tax"                "price ~ feats + tax"
    [11] "price ~ sqft + age + feats"       "price ~ sqft + age + tax"
    [13] "price ~ sqft + feats + tax"       "price ~ age + feats + tax"
    [15] "price ~ sqft + age + feats + tax"

Then I collected some useful indices:

    # R squared
    models.r.sq <- sapply(formlist, function(i) summary(lm(i))$r.squared)
    # adjusted R squared
    models.adj.r.sq <- sapply(formlist, function(i) summary(lm(i))$adj.r.squared)
    # MSEp
    models.MSEp <- sapply(formlist, function(i) anova(lm(i))['Mean Sq']['Residuals',])
    # full model MSE
    MSE <- anova(lm(formlist[[length(formlist)]]))['Mean Sq']['Residuals',]
    # Mallows' Cp
    models.Cp <- sapply(formlist, function(i) {
      SSEp <- anova(lm(i))['Sum Sq']['Residuals',]
      mod.mat <- model.matrix(lm(i))
      n <- dim(mod.mat)[1]
      p <- dim(mod.mat)[2]
      c(p, SSEp / MSE - (n - 2*p))
    })

    df.model.eval <- data.frame(model = as.character(formlist),
                                p = models.Cp[1,],
                                r.sq = models.r.sq,
                                adj.r.sq = models.adj.r.sq,
                                MSEp = models.MSEp,
                                Cp = models.Cp[2,])

The final dataframe was:

                          model p       r.sq   adj.r.sq      MSEp         Cp
    1                price~sqft 2 0.71390776 0.71139818  42044.46  49.260620
    2                 price~age 2 0.02847477 0.01352823 162541.84 292.462049
    3               price~feats 2 0.17858447 0.17137907 120716.21 351.004441
    4                 price~tax 2 0.76641940 0.76417343  35035.94  20.591913
    5            price~sqft+age 3 0.80348960 0.79734865  33391.05  10.899307
    6          price~sqft+feats 3 0.72245824 0.71754599  41148.82  46.441002
    7            price~sqft+tax 3 0.79837622 0.79446120  30536.19   5.819766
    8           price~age+feats 3 0.16146638 0.13526220 142483.62 245.803026
    9             price~age+tax 3 0.77886989 0.77173666  37884.71  20.026075
    10          price~feats+tax 3 0.76941242 0.76493500  34922.80  21.021060
    11     price~sqft+age+feats 4 0.80454221 0.79523470  33739.36  12.514175
    12       price~sqft+age+tax 4 0.82977846 0.82140691  29640.97   3.832692
    13     price~sqft+feats+tax 4 0.80068220 0.79481991  30482.90   6.609502
    14      price~age+feats+tax 4 0.79186713 0.78163109  36242.54  17.381201
    15 price~sqft+age+feats+tax 5 0.83210849 0.82091573  29722.50   5.000000

Finally, a Cp plot (using library wle).
16,210
Should "City" be a fixed or a random effect variable?
I would advise fitting both. Hopefully they will tell you the same thing. If not, that would be very interesting! Conceptually, city should be random. You are not specifically interested in estimates for each city for your research question, and your sample of cities can be thought of as coming from a wider population of cities. These are good reasons to treat it as random. The problem is that you only have 4 of them, so you are asking the software to estimate a variance for a normally distributed variable from only 4 samples, which may not be very reliable. It is perfectly valid to fit fixed effects, and this will control for the non-independence within each city. In that case you are treating it a bit like a confounder. The reason for using random intercepts is that with many cities fixed effects become inconvenient and lose statistical power. So with only 4, I would do both.
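A sketch of fitting city both ways on simulated data (variable names are made up, with the outcome standing in for blood sugar; the random-intercepts fit uses nlme, which ships with R):

```r
# Fixed effects: city as a factor spends 3 extra coefficients.
# Random intercepts: one variance parameter instead -- but estimated from
# only 4 cities, so it is poorly determined.
set.seed(5)
d <- data.frame(city = factor(rep(1:4, each = 25)),
                age  = rnorm(100, 50, 10))
d$sugar <- 5 + 0.02 * d$age +
  rnorm(4, 0, 0.5)[as.integer(d$city)] + rnorm(100, 0, 0.3)

fixed_fit  <- lm(sugar ~ age + city, data = d)
library(nlme)
random_fit <- lme(sugar ~ age, random = ~ 1 | city, data = d)
c(fixed = coef(fixed_fit)["age"], random = fixef(random_fit)["age"])
```

Here the age coefficient comes out essentially the same under both specifications, which is the reassuring outcome the answer hopes for.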
16,211
Should "City" be a fixed or a random effect variable?
Robert Long already gave a nice answer, but let me add my three cents. As already noticed by Dave in the comments, when fitting a fixed effects model you are asking what the differences are between those particular cities, while with a random effects model you ask what the variability between cities is. Those are quite different questions to ask. If you are interested in a more in-depth discussion of the differences between the two types of models, you can check my answer in the Fixed effect vs random effect when all possibilities are included in a mixed effects model thread. It's a different question, but the answer discusses the kind of issues that are closely related to questions like yours.
16,212
Should "City" be a fixed or a random effect variable?
One further remark: if you assume that the city variable might be correlated with the other independent variables (and with the blood sugar level), you need to model cities as fixed effects, because such correlation would violate the assumption of independence of the random effects. An example might be a city in Florida where older people with higher blood sugar levels tend to cluster due to the milder winter.
16,213
Maximum Likelihood Estimation for Bernoulli distribution
It's often easier to work with the log-likelihood in these situations than with the likelihood itself. Note that the log-likelihood attains its minimum/maximum at exactly the same point as the likelihood. $$ \begin{align*} L(p) &= \prod_{i=1}^n p^{x_i}(1-p)^{(1-x_i)}\\ \ell(p) &= \log{p}\sum_{i=1}^n x_i + \log{(1-p)}\sum_{i=1}^n (1-x_i)\\ \dfrac{\partial\ell(p)}{\partial p} &= \dfrac{\sum_{i=1}^n x_i}{p} - \dfrac{\sum_{i=1}^n (1-x_i)}{1-p} \overset{\text{set}}{=}0\\ \sum_{i=1}^n x_i - p\sum_{i=1}^n x_i &= p\sum_{i=1}^n (1-x_i)\\ p& = \dfrac{1}{n}\sum_{i=1}^n x_i\\ \dfrac{\partial^2 \ell(p)}{\partial p^2} &= \dfrac{-\sum_{i=1}^n x_i}{p^2} - \dfrac{\sum_{i=1}^n (1-x_i)}{(1-p)^2} \end{align*} $$ The penultimate line gives us the MLE (the $p$ that sets the first derivative of the log-likelihood, also called the score function, equal to zero). The last equation gives us the second derivative of the log-likelihood. Since $p\in [0,1]$ and $x_i \in \left\{0,1\right\}$, the second derivative is negative, confirming a maximum.
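As a quick sanity check (a sketch, not part of the original derivation), the closed-form MLE $\hat{p} = \bar{x}$ can be compared with a numerical maximiser of the log-likelihood in base R:

```r
# Compare the analytic MLE p-hat = mean(x) with a numerical
# maximisation of the Bernoulli log-likelihood.
set.seed(1)
x <- rbinom(50, size = 1, prob = 0.3)  # simulated Bernoulli sample

loglik <- function(p) sum(x) * log(p) + sum(1 - x) * log(1 - p)

p.analytic <- mean(x)
p.numeric  <- optimize(loglik, c(1e-6, 1 - 1e-6), maximum = TRUE)$maximum

c(analytic = p.analytic, numeric = p.numeric)  # should agree closely
```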
16,214
Maximum Likelihood Estimation for Bernoulli distribution
The negative sign of the second derivative shows that the stationary point is a maximum. A positive value would indicate a minimum. The second derivative tells you how the first derivative (gradient) is changing. A negative value tells you the curve is bending downwards. This occurs at a maximum. Assuming from your post you already have the first derivative of the log-likelihood function \begin{equation} \frac{d\ \ln f}{dp}=\frac{\sum_i x_i}{p}-\frac{n-\sum_i x_i}{1-p} \end{equation} giving \begin{equation} \hat{p}=\frac{\sum_i x_i}{n} \end{equation} Second derivative \begin{equation} \frac{d^2(\ln f)}{dp^2}=-\frac{\sum_i x_i}{p^2}-\frac{n-\sum_i x_i}{(1-p)^2} \end{equation} This will be negative for all real values of $p$, as inspection confirms that for the two fractions in the expression, both numerators and both denominators are positive. We can demonstrate this for the specific value of $\hat{p}$ found. Substituting your value of $\hat{p}$ \begin{equation} \frac{d^2(\ln f)}{d p^2}=-\frac{\sum_i x_i}{(\frac{\sum_i x_i}{n})^2}-\frac{n-\sum_i x_i}{(1-\frac{\sum_i x_i}{n})^2} \end{equation} \begin{equation} =-\frac{\sum_i x_i}{(\frac{\sum_i x_i}{n})^2}-\frac{n-\sum_i x_i}{(\frac{n-\sum_i x_i}{n})^2} \end{equation} \begin{equation} =-\frac{n^2}{\sum_i x_i}-\frac{n^2}{n-\sum_i x_i} \end{equation} Which is clearly negative. You know this is a global maximum, as it is the only maximum! Minima occur at the boundaries. You could prove $p = 0$ was the maximum on the boundary by showing the gradient was always negative. Likewise if the gradient is always positive, this would prove $p = 1$ is the maximum.
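A quick numerical check (an illustrative sketch; the values of $n$ and $\sum_i x_i$ are arbitrary) that the second derivative is negative throughout $(0,1)$, and that the value at $\hat{p}$ matches the simplified form:

```r
# Second derivative of the Bernoulli log-likelihood for n trials
# with s successes; negative for every p in (0, 1).
n <- 20; s <- 7
d2 <- function(p) -s / p^2 - (n - s) / (1 - p)^2

p.grid <- seq(0.01, 0.99, by = 0.01)
all(d2(p.grid) < 0)                      # TRUE: concave everywhere

# At the MLE p-hat = s/n it equals -n^2/s - n^2/(n - s).
c(d2(s / n), -n^2 / s - n^2 / (n - s))
```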
16,215
Generate points efficiently between unit circle and unit square
Will two million points per second do? The distribution is symmetric: we only need work out the distribution for one-eighth of the full circle and then copy it around the other octants. In polar coordinates $(r,\theta)$, the cumulative distribution of the angle $\Theta$ for the random location $(X,Y)$ at the value $\theta$ is given by the area between the triangle $(0,0), (1,0), (1,\tan\theta)$ and the arc of the circle extending from $(1,0)$ to $(\cos\theta,\sin\theta)$. It is thereby proportional to $$F_\Theta(\theta) = \Pr(\Theta \le \theta) \propto \frac{1}{2}\tan(\theta) - \frac{\theta}{2},$$ whence its density is $$f_\Theta(\theta) = \frac{d}{d\theta} F_\Theta(\theta) \propto \tan^2(\theta).$$ We may sample from this density using, say, a rejection method (which has efficiency $8/\pi-2 \approx 54.6479\%$). The conditional density of the radial coordinate $R$ is proportional to $r\,dr$ between $r=1$ and $r=\sec\theta$. That can be sampled with an easy inversion of the CDF. If we generate independent samples $(r_i,\theta_i)$, conversion back to Cartesian coordinates $(x_i,y_i)$ samples this octant. Because the samples are independent, randomly swapping the coordinates produces an independent random sample from the first quadrant, as desired. (The random swaps require generating only a single Binomial variable to determine how many of the realizations to swap.) Each such realization of $(X,Y)$ requires, on average, one uniform variate (for $R$) plus $1/(8/\pi-2)$ times two uniform variates (for $\Theta$) and a small amount of (fast) calculation. That's $4/(4-\pi) \approx 4.66$ variates per point (which, of course, has two coordinates). Full details are in the code example below. This figure plots 10,000 out of more than a half million points generated. Here is the R code that produced this simulation and timed it.

n.sim <- 1e6
x.time <- system.time({
  # Generate trial angles `theta`
  theta <- sqrt(runif(n.sim)) * pi/4
  # Rejection step.
  theta <- theta[runif(n.sim) * 4 * theta <= pi * tan(theta)^2]
  # Generate radial coordinates `r`.
  n <- length(theta)
  r <- sqrt(1 + runif(n) * tan(theta)^2)
  # Convert to Cartesian coordinates.
  # (The products will generate a full circle)
  x <- r * cos(theta) #* c(1,1,-1,-1)
  y <- r * sin(theta) #* c(1,-1,1,-1)
  # Swap approximately half the coordinates.
  k <- rbinom(1, n, 1/2)
  if (k > 0) {
    z <- y[1:k]
    y[1:k] <- x[1:k]
    x[1:k] <- z
  }
})
message(signif(x.time[3] * 1e6/n, 2), " seconds per million points.")
#
# Plot the result to confirm.
#
plot(c(0,1), c(0,1), type="n", bty="n", asp=1, xlab="x", ylab="y")
rect(-1, -1, 1, 1, col="White", border="#00000040")
m <- sample.int(n, min(n, 1e4))
points(x[m], y[m], pch=19, cex=1/2, col="#0000e010")
16,216
Generate points efficiently between unit circle and unit square
I propose the following solution, which should be simpler, more efficient and/or computationally cheaper than the solutions by @cardinal, @whuber and @stephan-kolassa so far. It involves the following simple steps:

1) Draw two standard uniform samples: $$ u_1 \sim Unif(0,1)\\ u_2 \sim Unif(0,1). $$

2a) Apply the following shear transformation to the point $(\min\{u_1,u_2\}, \max\{u_1,u_2\})$ (points in the lower right triangle are reflected to the upper left triangle and they will be "un-reflected" in 2b): $$ \begin{bmatrix} x\\y \end{bmatrix} = \begin{bmatrix} 1\\1 \end{bmatrix} + \begin{bmatrix} \frac{\sqrt{2}}{2} & -1\\ \frac{\sqrt{2}}{2} - 1 & 0\\ \end{bmatrix} \, \begin{bmatrix} \min\{u_1,u_2\}\\ \max\{u_1,u_2\}\\ \end{bmatrix}. $$

2b) Swap $x$ and $y$ if $u_1 > u_2$.

3) Reject the sample if inside the unit circle (acceptance should be around 72%), i.e.: $$ x^2 + y^2 < 1. $$

The intuition behind this algorithm is shown in the figure. Steps 2a and 2b can be merged into a single step:

2) Apply the shear transformation and swap: $$ x = 1 + \frac{\sqrt{2}}{2} \min(u_1, u_2) - u_2\\ y = 1 + \frac{\sqrt{2}}{2} \min(u_1, u_2) - u_1 $$

The following code implements the algorithm above (and tests it using @whuber's code).

n.sim <- 1e6
x.time <- system.time({
  # Draw two standard uniform samples
  u_1 <- runif(n.sim)
  u_2 <- runif(n.sim)
  # Apply shear transformation and swap
  tmp <- 1 + sqrt(2)/2 * pmin(u_1, u_2)
  x <- tmp - u_2
  y <- tmp - u_1
  # Reject if inside circle
  accept <- x^2 + y^2 > 1
  x <- x[accept]
  y <- y[accept]
  n <- length(x)
})
message(signif(x.time[3] * 1e6/n, 2), " seconds per million points.")
#
# Plot the result to confirm.
#
plot(c(0,1), c(0,1), type="n", bty="n", asp=1, xlab="x", ylab="y")
rect(-1, -1, 1, 1, col="White", border="#00000040")
m <- sample.int(n, min(n, 1e4))
points(x[m], y[m], pch=19, cex=1/2, col="#0000e010")

Some quick tests yield the following results.

Algorithm https://stats.stackexchange.com/a/258349 . Best of 3: 0.33 seconds per million points.
This algorithm. Best of 3: 0.18 seconds per million points.
16,217
Generate points efficiently between unit circle and unit square
Well, more efficiently can be done, but I sure hope you are not looking for faster. The idea would be to sample an $x$ value first, with a density proportional to the length of the vertical blue slice above each $x$ value: $$ f(x) = 1-\sqrt{1-x^2}. $$ Wolfram helps you to integrate that: $$ \int_0^x f(y)dy = -\frac{1}{2}x\sqrt{1-x^2}+x-\frac{1}{2}\arcsin x.$$ So the cumulative distribution function $F$ would be this expression, scaled to integrate to 1 (i.e., divided by $ \int_0^1 f(y)dy$). Now, to generate your $x$ value, pick a random number $t$, uniformly distributed between $0$ and $1$. Then find $x$ such that $F(x)=t$. That is, we need to invert the CDF (inverse transform sampling). This can be done, but it's not easy. Nor fast. Finally, given $x$, pick a random $y$ that is uniformly distributed between $\sqrt{1-x^2}$ and $1$. Below is R code. Note that I am pre-evaluating the CDF at a grid of $x$ values, and even then this takes quite a few minutes. You can probably speed the CDF inversion up quite a bit if you invest some thinking. Then again, thinking hurts. I personally would go for rejection sampling, which is faster and far less error-prone, unless I had very good reasons not to.

epsilon <- 1e-6
xx <- seq(0, 1, by=epsilon)
x.cdf <- function(x) x - (x*sqrt(1-x^2) + asin(x))/2
xx.cdf <- x.cdf(xx)/x.cdf(1)

nn <- 1e4
rr <- matrix(nrow=nn, ncol=2)
set.seed(1)
pb <- winProgressBar(max=nn)
for ( ii in 1:nn ) {
  setWinProgressBar(pb, ii, paste(ii, "of", nn))
  x <- max(xx[xx.cdf < runif(1)])
  y <- runif(1, sqrt(1-x^2), 1)
  rr[ii,] <- c(x, y)
}
close(pb)
plot(rr, pch=19, cex=.3, xlab="", ylab="")
16,218
Are a sum and a product of two covariance matrices also a covariance matrix?
Background

A covariance matrix $\mathbb{A}$ for a vector of random variables $X=(X_1, X_2, \ldots, X_n)^\prime$ embodies a procedure to compute the variance of any linear combination of those random variables. The rule is that for any vector of coefficients $\lambda = (\lambda_1, \ldots, \lambda_n)$, $$\operatorname{Var}(\lambda X) = \lambda \mathbb{A} \lambda ^\prime.\tag{1}$$ In other words, the rules of matrix multiplication describe the rules of variances. Two properties of $\mathbb{A}$ are immediate and obvious:

Because variances are expectations of squared values, they can never be negative. Thus, for all vectors $\lambda$, $$0 \le \operatorname{Var}(\lambda X) = \lambda \mathbb{A} \lambda ^\prime.$$ Covariance matrices must be non-negative-definite.

Variances are just numbers--or, if you read the matrix formulas literally, they are $1\times 1$ matrices. Thus, they do not change when you transpose them. Transposing $(1)$ gives $$\lambda \mathbb{A} \lambda ^\prime = \operatorname{Var}(\lambda X) = \operatorname{Var}(\lambda X) ^\prime = \left(\lambda \mathbb{A} \lambda ^\prime\right)^\prime = \lambda \mathbb{A}^\prime \lambda ^\prime.$$ Since this holds for all $\lambda$, $\mathbb{A}$ must equal its transpose $\mathbb{A}^\prime$: covariance matrices must be symmetric.

The deeper result is that any non-negative-definite symmetric matrix $\mathbb{A}$ is a covariance matrix. This means there actually is some vector-valued random variable $X$ with $\mathbb{A}$ as its covariance. We may demonstrate this by explicitly constructing $X$. One way is to notice that the (multivariate) density function $f(x_1,\ldots, x_n)$ with the property $$\log(f) \propto -\frac{1}{2} (x_1,\ldots,x_n)\mathbb{A}^{-1}(x_1,\ldots,x_n)^\prime$$ has $\mathbb{A}$ for its covariance. (Some delicacy is needed when $\mathbb{A}$ is not invertible--but that's just a technical detail.)

Solutions

Let $\mathbb{X}$ and $\mathbb{Y}$ be covariance matrices. Obviously they are square; and if their sum is to make any sense they must have the same dimensions. We need only check the two properties.

The sum. Symmetry: $$(\mathbb{X}+\mathbb{Y})^\prime = \mathbb{X}^\prime + \mathbb{Y}^\prime = (\mathbb{X} + \mathbb{Y})$$ shows the sum is symmetric. Non-negative definiteness: let $\lambda$ be any vector. Then $$\lambda(\mathbb{X}+\mathbb{Y})\lambda^\prime = \lambda \mathbb{X}\lambda^\prime + \lambda \mathbb{Y}\lambda^\prime \ge 0 + 0 = 0$$ proves the point using basic properties of matrix multiplication.

The product. I leave this as an exercise. This one is tricky. One method I use to think through challenging matrix problems is to do some calculations with $2\times 2$ matrices. There are some common, familiar covariance matrices of this size, such as $$\pmatrix{a & b \\ b & a}$$ with $a^2 \ge b^2$ and $a \ge 0$. The concern is that $\mathbb{XY}$ might not be definite: that is, could it produce a negative value when computing a variance? If it could, then we had better have some negative coefficients in the matrix. That suggests considering $$\mathbb{X} = \pmatrix{a & -1 \\ -1 & a}$$ for $a \ge 1$. To get something interesting, we might gravitate initially to matrices $\mathbb{Y}$ with different-looking structures. Diagonal matrices come to mind, such as $$\mathbb{Y} = \pmatrix{b & 0 \\ 0 & 1}$$ with $b\ge 0$. (Notice how we may freely pick some of the coefficients, such as $-1$ and $1$, because we can rescale all the entries in any covariance matrix without changing its fundamental properties. This simplifies the search for interesting examples.) I leave it to you to compute $\mathbb{XY}$ and test whether it always is a covariance matrix for any allowable values of $a$ and $b$.
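The two cases can be explored numerically (a sketch using the example matrices suggested in the text, with the particular values $a = 2$, $b = 3$ chosen only for illustration):

```r
# Sum of two covariance matrices: symmetric and non-negative-definite.
a <- 2; b <- 3
X <- matrix(c(a, -1, -1, a), 2, 2)  # the matrix X from the text
Y <- diag(c(b, 1))                  # the diagonal matrix Y

S <- X + Y
isSymmetric(S)      # TRUE
eigen(S)$values     # both positive, so S is a covariance matrix

# Product: it already fails symmetry, so XY cannot be a covariance matrix.
P <- X %*% Y
isSymmetric(P)      # FALSE
```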
16,219
Are a sum and a product of two covariance matrices also a covariance matrix?
A real matrix is a covariance matrix if and only if it is symmetric positive semi-definite. Hints:

1) If $X$ and $Y$ are symmetric, is $X+Y$ symmetric? If $z^TX z \ge 0$ for any $z$ and $z^TY z \ge 0$ for any $z$, what can you conclude about $z^T(X+Y)z$?

2) If $X$ is symmetric, is $X^2$ symmetric? If the eigenvalues of $X$ are non-negative, what can you conclude about the eigenvalues of $X^2$?

3) If $X$ and $Y$ are symmetric, can you conclude that $XY$ is symmetric, or can you find a counter-example?
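The three hints can be verified numerically. A minimal NumPy sketch, using the standard trick that any matrix of the form $M^T M$ is a valid covariance matrix (the seed and dimension are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_cov(n):
    # M^T M is symmetric positive semi-definite for any real M,
    # hence a valid covariance matrix.
    M = rng.standard_normal((n, n))
    return M.T @ M

X, Y = random_cov(3), random_cov(3)

# Hint 1: the sum is symmetric and z^T (X+Y) z = z^T X z + z^T Y z >= 0.
S = X + Y
assert np.allclose(S, S.T)
assert np.all(np.linalg.eigvalsh(S) >= -1e-10)

# Hint 2: X^2 is symmetric ((XX)^T = X^T X^T = XX) and its
# eigenvalues are the squares of X's, so they are non-negative.
X2 = X @ X
assert np.allclose(X2, X2.T)
assert np.all(np.linalg.eigvalsh(X2) >= -1e-10)

# Hint 3: XY is symmetric only when X and Y commute, which a
# generic pair does not, so it fails to be a covariance matrix.
P = X @ Y
print(np.allclose(P, P.T))  # False for a generic pair
```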
16,220
How can I interpret GAM results?
The deviance explained is a bit like $R^2$ for models where sums of squares don't make much sense as a measure of discrepancy between the observations and the fitted values. In generalised models we instead measure this discrepancy using the deviance. It is computed using the likelihood of the model and hence has a somewhat different mathematical definition for each error distribution (the family argument in glm()/gam()). In the case of Gaussian models estimated as a GLM/GAM, deviance and residual sums of squares are equivalent.

The deviance $D$ of a model is defined as: $$ D = 2 \left [ l(\hat{\beta}_{\mathrm{max}}) - l(\hat{\beta}) \right ] \phi $$ where $l(\hat{\beta}_{\mathrm{max}})$ is the maximised likelihood of the saturated model and $l(\hat{\beta})$ is the maximised likelihood of the model you've fitted. The saturated model is a model with one parameter for each data point; you can't get a higher likelihood than this, given the data. $\phi$ is the scale parameter. The scaled deviance is simply $$D^{*} = D / \phi$$ These scaled deviances play a role in likelihood ratio tests, where the difference of scaled deviances for two nested models is $\sim \chi^2$, with degrees of freedom equal to the difference in the number of parameters between the two models. Deviance explained just expresses this as the proportion of the total (null) deviance explained by the current model.

The scale estimate is $\hat{\phi}$, i.e. the value of $\phi$ estimated during model fitting. For the Poisson and Binomial families/distributions, by definition $\phi = 1$, but for other distributions this is not the case, including the Gaussian. In the Gaussian case, $\hat\phi$ is the residual standard error squared.

The GCV score is the minimised generalised cross-validation (GCV) score of the fitted GAM. GCV is used for smoothness selection in the mgcv package for R; smoothing parameters are chosen to minimise prediction error where $\phi$ is unknown, and standard CV or GCV can be used to estimate prediction error.
GCV is preferred here as it can be calculated without actually cross-validating the model (refitting it to subsets of the data), which saves computational time and effort. The value reported is the minimised GCV score (UBRE, Un-Biased Risk Estimator, scores are shown instead if you are fitting a model with known $\phi$), and you can use these scores a bit like AIC: smaller values indicate better-fitting models.

GAMs fitted using GCV smoothness selection can suffer from under-smoothing. This can happen where the GCV profile is relatively flat and random variation can lead to the algorithm converging on too wiggly a fit. Fitting via REML (use method = "REML" in the gam() call) or ML has been shown by Simon Wood and colleagues to be much more robust to under-smoothing, but at computational expense.

The above summaries are based on the descriptions in Simon Wood's rather excellent book on GAMs: Wood, S. N. (2006). Generalized Additive Models: An Introduction with R. Chapman and Hall/CRC.
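The Gaussian special case mentioned above (deviance = residual sum of squares, and deviance explained as the proportion of the null deviance accounted for) can be made concrete with a small NumPy sketch. The data and the least-squares line are entirely made up for illustration; this is not mgcv output:

```python
import numpy as np

# Hypothetical Gaussian data: y = 2 + 3x + noise.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 2.0 + 3.0 * x + rng.standard_normal(50)

b, a = np.polyfit(x, y, 1)          # slope, intercept of the least-squares line
mu = a + b * x                      # fitted values

# For the Gaussian family the model deviance IS the residual sum of squares.
rss = np.sum((y - mu) ** 2)
# Deviance of the intercept-only (null) model is the total sum of squares.
null_dev = np.sum((y - y.mean()) ** 2)

deviance_explained = 1 - rss / null_dev   # same role as R^2 here
print(round(deviance_explained, 3))
```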
16,221
Inference for the skeptical (but not math-averse) reader
You have already got some good suggestions. Here are some more. First, two blogs that I read sporadically, and where questions such as the ones you ask yourself are sometimes discussed. As they are blogs, you could even ask questions and get some very good answers! Here they come:

http://andrewgelman.com/ (Andrew Gelman)

http://errorstatistics.com/ (Deborah Mayo)

And a few books I think will help you: Box, Hunter & Hunter: Statistics for Experimenters. As the title says, this is a ("first", but really, really ... second) course for people who would like to design their own experiments and then analyze them. Very high on the "why" part. Then: D R Cox: Principles of Statistical Inference, another very good book about the "why", not the "how". And, since you ask why means and proportions are treated differently, here is a book which does not do that: http://www.amazon.com/Statistics-4th-David-Freedman/dp/0393929728/ref=sr_1_1?s=books&ie=UTF8&qid=1373395118&sr=1-1&keywords=freedman+statistics Low on maths, high on principles.
16,222
Inference for the skeptical (but not math-averse) reader
Try a radically different path to the subject: get "A History of Mathematical Statistics (From 1750 to 1930)", by Anders Hald, and learn about the history of our subject. Once you grasp the slow emergence of the concept of a statistical model, your questions will look trivial. The two pieces of a statistical model must be clearly understood: the observable data $X$ and the non-observable parameter $\Theta$. The sampling distribution of $X\mid \Theta$ is postulated, and our goal is to learn about $\Theta$ given some values of $X$. Looking at some of your questions: 1) Different models; 2) The $t$ distribution is the sampling distribution of a specific statistic (a function of data $X$) when the data are supposed to be normal; 3) The degrees of freedom characterize the sampling distribution of a statistic supposing that the values of $\Theta$ are constrained (by a so-called null hypothesis), and so on. Also, pick some simple inference problem (like normal data with known variance) and solve it in both the classical and Bayesian ways. Contrast the differences. That may be illuminating.
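The suggested exercise (normal data with known variance, solved both the classical and the Bayesian way) can be sketched in a few lines. The data are simulated, and the Bayesian side assumes a flat prior on the mean, under which the 95% credible interval has exactly the same endpoints as the classical confidence interval; the contrast is conceptual, not numerical:

```python
import numpy as np

# Hypothetical sample from N(theta, sigma^2) with KNOWN sigma = 2.
sigma = 2.0
rng = np.random.default_rng(42)
data = rng.normal(5.0, sigma, size=25)
n, xbar = data.size, data.mean()

# Classical: a 95% confidence interval for the fixed, unknown theta.
half = 1.96 * sigma / np.sqrt(n)
ci = (xbar - half, xbar + half)

# Bayesian with a flat prior: the posterior is N(xbar, sigma^2/n),
# so the central 95% credible interval has the same endpoints.
cred = (xbar - 1.96 * sigma / np.sqrt(n), xbar + 1.96 * sigma / np.sqrt(n))

print(ci == cred)  # True: identical numbers, very different interpretations
```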
16,223
Inference for the skeptical (but not math-averse) reader
I rather doubt there will be a single book that suits you as individual people tend to be skeptical about different things, and books are written for a target audience, rather than for individuals. This is one of the good things about being taught by a person, rather than just a book, which is that you can ask questions as you go. This is a pretty difficult thing to do in a linear text. I don't think there is necessarily anything wrong with neglecting questions, there comes a point where addressing every question detracts from the exposition of the basic concepts which is often more important (especially in a stats 101 book!). I suspect the best approach is to get a good book and then look up the answer to unanswered questions elsewhere. I've got a bookshelf full of statistics texts in front of me, simply because none of them in isolation is all that I need (not even Jaynes' book ;o). For the absolute beginner, I think Grant Foster's book "Understanding Statistics" is a good place to start, but I suspect it is rather too basic in this case.
16,224
Inference for the skeptical (but not math-averse) reader
Abelson (1995), Statistics as Principled Argument is introductory & has an interesting take on some of the questions that often confuse learners. But perhaps you just need to read some books on theoretical statistics (skipping all the stuff about convergence, metric spaces, &c.) & then even if they don't answer specifically questions like your examples, you'll be able to answer most of them yourself, & look up the rest, as @Dikran suggests. I suggested in another thread reading Cox & Hinkley, Theoretical Statistics or Cox, Principles of Statistical Inference together with Casella & Berger, Statistical Inference to get an understanding of the different perspectives there are.
16,225
What exactly is building a statistical model?
I'll take a crack at this although I'm not a statistician by any means, but I end up doing a lot of 'modeling', statistical and non-statistical.

First let's start with the basics: what IS a model exactly? A model is a representation of reality, albeit a highly simplified one. Think of a wax/wood 'model' of a house. You can touch/feel/smell it. Now a mathematical model is a representation of reality using numbers. What is this 'reality', I hear you ask? Okay, so think of this simple situation: the governor of your state implements a policy saying that a pack of cigarettes would now cost $100 for the next year. The 'aim' is to deter people from purchasing cigarettes, thereby decreasing smoking, thereby making the smokers healthier (because they'd quit). After 1 year the governor asks you: was this a success? How can you say that? Well, you capture data like the number of packs sold per day or per year, survey responses, any measurable data you can get your hands on that is relevant to the problem. You've just begun to 'model' the problem.

Now you want to analyze what this 'model' says. That's where statistical modeling comes in handy. You could run a simple correlation/scatter plot to see what the model 'looks like'. You could get fancy to determine causality, i.e., whether increasing the price did lead to a decrease in smoking, or whether there were other confounding factors at play (i.e., maybe it's something else altogether and your model missed it?).

Now, constructing this model is done by a 'set of rules' (more like guidelines), i.e., what is/isn't legal or what does/doesn't make sense. You should know what you are doing and how to interpret the results of this model. Building/executing/interpreting this model requires basic knowledge of statistics. In the example above you need to know about correlation/scatter plots, regression (uni- and multivariate) and other stuff.
I suggest reading the absolutely fun/informative read on understanding statistics intuitively: What is a p-value anyway? It's a humorous intro to statistics and will teach you 'modeling' along the way, from simple to advanced (i.e., linear regression). Then you can go on and read other stuff.

So, remember that a model is a representation of reality, and that "all models are wrong but some are more useful than others". A model is a simplified representation of reality and you can't possibly consider everything, but you must know what to and what not to consider to have a good model that can give you meaningful results.

It doesn't stop here. You can create models to simulate reality too! That is, how a bunch of numbers will change over time, say. These numbers map to some meaningful interpretation in your domain. You can also create these models to mine your data to see how the various measures relate to each other (the application of statistics here may be questionable, but don't worry for now). Example: you look at grocery sales for a store per month and realize that whenever beer is bought, so is a pack of diapers (you build a model that runs through the data set and shows you this association). It may be weird, but it may imply that mostly fathers buy these over the weekend when babysitting their kids. Put diapers near beers and you may increase your sales! Aaah! Modeling :)

These are just examples and by no means a reference for professional work. You basically build models to understand/estimate how reality will/did function, and to take better decisions based on the outputs. Statistics or not, you've probably been doing modeling all your life without realizing it. Best of luck :)
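The "simple correlation" step for the cigarette example might look like the following sketch; the monthly figures are entirely made up for illustration:

```python
import numpy as np

# Hypothetical monthly data: six months before and six months after
# the price change, with packs sold per month.
price      = np.array([10, 10, 10, 10, 10, 10, 100, 100, 100, 100, 100, 100])
packs_sold = np.array([950, 940, 960, 955, 948, 952, 610, 605, 590, 600, 615, 598])

# Pearson correlation between price and sales.
r = np.corrcoef(price, packs_sold)[0, 1]
print(round(r, 2))  # strongly negative: higher price, fewer packs sold
```

A strongly negative correlation is consistent with the policy working, but (as the answer notes) it is not proof of causality on its own.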
16,226
What exactly is building a statistical model?
Building a statistical model involves constructing a mathematical description of some real-world phenomena that accounts for the uncertainty and/or randomness involved in that system. Depending on the field of application, this could range from something as simple as linear regression, or basic hypothesis testing, through complicated multivariate factor analysis or data mining.
16,227
What exactly is building a statistical model?
Modeling to me involves specifying a probabilistic framework for observed data, with estimable parameters that can be used to discern valuable differences in observable data when they exist. This is called power. Probabilistic models can be used for either prediction or inference. They can be used to calibrate machinery, to demonstrate deficiency in return on investment, to forecast weather or stocks, or to simplify medical decision making.

A model does not necessarily need to be built. In an isolated experiment, one can use a simple off-the-shelf approach, such as the t-test, to determine whether there is a significant difference in means between two groups. However, for many forecasting purposes, models can be built so as to detect changes in time. For instance, transition-based Markov models can be used to predict up and down swings in market value for investments, but to what extent can a "dip" be considered worse than expected? Using historical evidence and observed predictors, one can build a sophisticated model to calibrate whether observed dips are significantly different from those which have historically been sustained. Using tools like control charts, cumulative incidence charts, survival curves, and other "time based" charts, it's possible to examine the difference between observed and expected events according to a model-based simulation and call in judgement when necessary.

Alternatively, some models are "built" by having the flexibility to adapt as data grow. Twitter's detection of trending topics and Netflix's recommendation system are prime examples of such models. They have a general specification (Bayesian Model Averaging, for the latter) that allows a flexible model to accommodate historical shifts and trends and recalibrate to maintain the best prediction, such as the introduction of high-impact films, a large uptake of new users, or a dramatic shift in film preference due to seasonality.
Some of the data mining approaches are introduced because they are highly adept at achieving certain types of prediction (again, the issue of obtaining "expected" trends or values in data). K-NN is a way of incorporating high-dimensional data and inferring whether subjects can receive reliable predictions simply due to proximity (whether from age, musical taste, sexual history, or some other measurable trait). Logistic regression, on the other hand, can be used as a binary classifier, but is much more commonly used to make inferences about the association between a binary outcome and one or more exposures and conditions, through a parameter called the odds ratio. Because of limit theorems and its relationship to generalized linear models, the odds ratio is a highly regular parameter with a "highly conserved" type I error (i.e. the p-value means what you think it means).
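The odds ratio mentioned above is easy to compute by hand from a 2x2 table. A small sketch with hypothetical counts, using the standard Wald interval on the log scale for a 95% confidence interval:

```python
import numpy as np

# Hypothetical 2x2 table: exposure (rows) by binary outcome (columns).
#                 outcome=1  outcome=0
table = np.array([[30,        70],    # exposed
                  [15,        85]])   # unexposed

a, b = table[0]
c, d = table[1]

odds_ratio = (a * d) / (b * c)              # (a/b) / (c/d)
log_or_se = np.sqrt(1/a + 1/b + 1/c + 1/d)  # SE of log(OR), Wald approximation

# 95% confidence interval on the log scale, then exponentiated.
lo = np.exp(np.log(odds_ratio) - 1.96 * log_or_se)
hi = np.exp(np.log(odds_ratio) + 1.96 * log_or_se)

print(round(odds_ratio, 2))            # 2.43
print(round(lo, 2), round(hi, 2))      # interval excluding 1 suggests association
```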
What exactly is building a statistical model?
Modeling to me involves specifying a probabilistic framework for observed data with estimable parameters that can be used to discern valuable differences in observable data when they exist. This is ca
What exactly is building a statistical model? Modeling to me involves specifying a probabilistic framework for observed data with estimable parameters that can be used to discern valuable differences in observable data when they exist. This is called power. Probabilistic models can be used for either prediction or inference. They can be used to calibrate machinery, to demonstrate deficiency in return on investment, to forecast weather or stocks, or simplify medical decision making. A model does not necessarily need to be built. In an isolated experiment, one can use a non-parametric modeling approach, such as the t-test to determine whether there is a significant difference in means between two groups. However, for many forecasting purposes, models can be built so as to detect changes in time. For instance, transition based Markov models can be used to predict up and down swings in market value for investments, but to what extent can a "dip" be considered worse than expected? Using historical evidence and observed predictors, one can build a sophisticated model to calibrate whether observed dips are significantly different from those which have historically been sustained. Using tools like control charts, cumulative incidence charts, survival curves, and other "time based" charts, it's possible to examine the difference between observed and expected events according to a model based simulation and call in judgement when necessary. Alternately, some models are "built" by having the flexibility to adapt as data grow. Twitter's detection of trending and Netflix's recommendation system are prime examples of such models. They have a general specification (Bayesian Model Averaging, for the latter) that allows a flexible model to accommodate historical shifts and trends and recalibrate to maintain best prediction, such as the introduction of high impact films, a large uptake of new users, or a dramatic shift in film preference due to seasonality. 
Some of the data mining approaches are introduced because they are highly adept at achieving certain types of prediction approaches (again, the issue of obtaining "expected" trends or values in data). K-NN is a way of incorporating high dimensional data and inferring whether subjects can receive reliable predictions simply due to proximity (whether from age, musical taste, sexual history, or some other measurable trait). Logistic regression on the other hand can obtain a binary classifier, but is much more commonly used to infer about the association between a binary outcome and one or more exposures and conditions through a parameter called the odds ratio. Because of limit theorems and its relationship to the generalized linear models, odds ratios are highly regular parameters that have a "highly conserved" type I error (i.e. the p-value means what you think it means).
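The k-NN idea mentioned above can be sketched in a few lines of stdlib Python. This is a toy illustration (the function name and data are mine, not from the answer): a query point is classified by majority vote among its k nearest training points.

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among its k nearest training points.
    `train` is a list of (features, label) pairs; distance is Euclidean."""
    nearest = sorted(train, key=lambda p: math.dist(p[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy data: two well-separated clusters labelled "A" and "B"
train = [((0, 0), "A"), ((0, 1), "A"), ((1, 0), "A"),
         ((5, 5), "B"), ((5, 6), "B"), ((6, 5), "B")]
print(knn_predict(train, (0.5, 0.5)))  # near the first cluster -> "A"
print(knn_predict(train, (5.5, 5.5)))  # near the second cluster -> "B"
```

The "proximity" the answer speaks of is literal here: prediction comes only from which measured subjects sit closest in feature space.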
16,228
What exactly is building a statistical model?
Modelling is the process of identifying a suitable model. Frequently a modeller will have a good idea of important variables, and perhaps even have a theoretical basis for a particular model. They will also know some facts about the response and the general kind of relationships with the predictors, but may still not be certain that their general idea of a model is completely adequate - even with an excellent theoretical idea of how the mean should work, they might not, for example, be confident that the variance isn't related to the mean, or they might suspect some serial dependence could be possible. So there may be a cycle of several stages of model identification that makes reference to (at least some of) the data. The alternative is to regularly risk having quite unsuitable models. (Of course, if they're being responsible, they must take account of how using data in this way impacts their inferences.) The actual process varies somewhat from area to area and from person to person, but it's possible to find some people explicitly listing steps in their process (e.g. Box and Jenkins outline one such approach in their book on time series). Ideas about how to do model identification alter over time.
16,229
What exactly is building a statistical model?
I don't think there's a common definition of what constitutes a statistical model. From my experience in the industry it seems to be a synonym to what in econometrics is called a reduced form model. I'll explain. Suppose that in your field there are established relationships or "laws," e.g. in Physics this would be $F=m\frac {d^2x}{dt^2}$ stating that force is proportional to the acceleration (aka "2nd law of mechanics"). So, knowing this law you could build a mathematical model of a cannon ball trajectory. This model will have what Physicists call "constants" or "coefficients", e.g. the air density at a given temperature and elevation. You'll have to find out what these coefficients are experimentally. In our case we'll have to ask the artillery to fire the cannons under many different, tightly controlled conditions, such as angles, temperature etc. We collect all the data, and fit the model using statistical techniques. It could be as simple as linear regression or averages. Once we've got all the coefficients, we can run our mathematical model to produce the firing tables. This is neatly described in the unclassified document here, called "THE PRODUCTION OF FIRING TABLES FOR CANNON ARTILLERY." What I just described is not a statistical model. Yes, it does use statistics, but this model uses established laws of Physics, which are the essence of the model. Here, statistics is a mere tool to determine the values of a few important parameters. The dynamics of the system are described and pre-determined by the field. Suppose instead that we did not know or did not care for the laws of Physics, and simply tried to establish the relationships between cannon flying distance and parameters such as firing angle and temperature using a "statistical model." We'd create a big data set with a bunch of candidate variables, or features, and transformations of variables, maybe polynomial series of temperature etc. Then we'd run a regression of sorts, and identify the coefficients. These coefficients would not necessarily have established interpretations in the field. We'd call them sensitivities to the square of temperature etc. This model may actually be quite good at predicting the end points of cannon balls, because the underlying process is quite stable.
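The contrast drawn in this answer can be made concrete with a small stdlib-Python sketch. The constants, angles, and the choice of $\sin(2\theta)$ as a candidate feature are all illustrative assumptions of mine: we simulate ranges from the physical law $R = v^2 \sin(2\theta)/g$, then fit a "reduced form" regression that recovers the physical coefficient without ever invoking the law.

```python
import math

# Physics (drag-free): projectile range is R = v^2 * sin(2*theta) / g.
# Pretend we don't know this law: regress observed ranges on a candidate
# feature, sin(2*theta), chosen merely as "a transformation of the angle".
g, v = 9.81, 50.0                                   # illustrative constants
angles = [10, 20, 30, 40, 45, 50, 60, 70]           # firing angles (degrees)
ranges = [v**2 * math.sin(2 * math.radians(a)) / g for a in angles]

x = [math.sin(2 * math.radians(a)) for a in angles]
# Least squares through the origin: slope = sum(x*y) / sum(x*x)
slope = sum(xi * yi for xi, yi in zip(x, ranges)) / sum(xi * xi for xi in x)

print(round(slope, 2))     # the regression "sensitivity"...
print(round(v**2 / g, 2))  # ...coincides with the physical constant v^2/g
```

The fitted coefficient has no interpretation for the statistician beyond "sensitivity to that feature", yet it predicts ranges perfectly here — exactly the point about reduced form models.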
16,230
What exactly is building a statistical model?
The simplest possible approximation, depiction, and abstraction of a real-world situation, through the nearest possible counterpart in the theoretical world, is called a model. The data are turned into random variables, which have an associated theory for finding estimates, testing associations, and evaluating hypotheses.
16,231
What is the point of non-informative priors?
The debate about non-informative priors has been going on for ages, at least since the end of the 19th century with criticism by Bertrand and de Morgan about the lack of invariance of Laplace's uniform priors (the same criticism reported by Stéphane Laurent in the above comments). This lack of invariance sounded like a death stroke for the Bayesian approach and, while some Bayesians were desperately trying to cling to specific distributions, using less than formal arguments, others had a vision of a larger picture where priors could be used in situations where there was hardly any prior information, besides the shape of the likelihood itself. This vision is best represented by Jeffreys' distributions, where the information matrix of the sampling model, $I(\theta)$, is turned into a prior distribution $$ \pi(\theta) \propto |I(\theta)|^{1/2} $$ which is most often improper, i.e. does not integrate to a finite value. The label "non-informative" associated with Jeffreys' priors is rather unfortunate, as they represent an input from the statistician, hence are informative about something! Similarly, "objective" has an authoritative weight I dislike... I thus prefer the label "reference prior", used for instance by José Bernardo. Those priors indeed give a reference against which one can compute either the reference estimator/test/prediction or one's own estimator/test/prediction using a different prior motivated by subjective and objective items of information. To answer directly the question, "why not use only informative priors?", there is actually no answer. A prior distribution is a choice made by the statistician, neither a state of Nature nor a hidden variable. In other words, there is no "best prior" that one "should use", because it is in the nature of statistical inference that there is no "best answer". Hence my defence of the noninformative/reference choice! It provides the same range of inferential tools as other priors, but gives answers that are only inspired by the shape of the likelihood function, rather than induced by some opinion about the range of the unknown parameters.
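As a worked instance of the Jeffreys recipe above (a standard textbook example, not taken from the answer): for a single Bernoulli($\theta$) observation,

```latex
% log-likelihood: \ell(\theta) = x \log\theta + (1 - x)\log(1 - \theta)
% Fisher information:
I(\theta) = \mathbb{E}\!\left[-\frac{\partial^2 \ell}{\partial \theta^2}\right]
          = \frac{1}{\theta(1-\theta)}
% hence the Jeffreys prior
\pi(\theta) \propto |I(\theta)|^{1/2} = \theta^{-1/2}(1-\theta)^{-1/2}
% i.e. a Beta(1/2, 1/2) distribution
```

so in this model the reference prior happens to be proper (a Beta(1/2, 1/2)), even though, as noted above, Jeffreys priors are most often improper.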
16,232
How to best display graphically type II (beta) error, power and sample size?
I have played around with similar plots and found that it works better when the 2 curves don't block each other, but are rather vertically offset (but still on the same x-axis). This makes it clear that one of the curves represents the null hypothesis and the other represents a given value for the mean under the alternative hypothesis. The power.examp function in the TeachingDemos package for R will create these plots and the run.power.examp function (same package) allows you to interactively change the arguments and update the plot.
16,233
How to best display graphically type II (beta) error, power and sample size?
A few thoughts: (a) Use transparency, and (b) Allow for some interactivity. Here is my take, largely inspired by a Java applet on Type I and Type II Errors - Making Mistakes in the Justice System. As this is rather pure drawing code, I pasted it as gist #1139310. Here is how it looks: It relies on the aplpack package (slider and push button). So, basically, you can vary the deviation from the mean under $H_0$ (fixed at 0) and the location of the distribution under the alternative. Please note that there's no consideration of sample size.
16,234
How to best display graphically type II (beta) error, power and sample size?
G Power 3, free software available on Mac and Windows, has some very nice graphing features for power analysis. The main graph is broadly consistent with your graph and that shown by @chl. It uses a simple straight line to indicate null hypothesis and alternate hypothesis test statistic distributions, and colours in beta and alpha in separate colours. A nice feature of G Power 3 is that it supports a large number of common power analysis scenarios and the GUI makes it simple for students and applied researchers to explore. Here is a screen shot of a slide (taken from a presentation I gave on descriptive statistics with a section on power analysis) with multiple such graphs shown on the left. If you choose a one-tail t-test version then it would look more like your example. It's also possible to produce graphs that show the functional relationship between factors relevant to statistical power and hypothesis testing (e.g., alpha, effect size, sample size, power, etc.). I present a few examples of such graphs here. Here's one example of such a graph:
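The power-versus-sample-size relationship that G Power plots can also be computed directly. Here is a stdlib-Python sketch for a one-sided one-sample z-test (the function name and the numbers are illustrative; dedicated software of course covers many more designs):

```python
from statistics import NormalDist

def ztest_power(d, n, alpha=0.05):
    """Power of a one-sided one-sample z-test for effect size d
    (mean shift in SD units) at sample size n and level alpha."""
    z = NormalDist()
    crit = z.inv_cdf(1 - alpha)               # rejection threshold under H0
    return 1 - z.cdf(crit - d * (n ** 0.5))   # P(reject) under the alternative

# Power rises with n for a fixed effect size d = 0.5:
for n in (10, 20, 40, 80):
    print(n, round(ztest_power(0.5, n), 3))
```

Evaluating this over a grid of n values and plotting it reproduces the kind of power curve shown in the last figure; setting d = 0 recovers alpha, the type I error rate.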
16,235
Formula for autocorrelation in R vs. Excel
The exact equation is given in: Venables, W. N. and Ripley, B. D. (2002) Modern Applied Statistics with S. Fourth Edition. Springer-Verlag. I'll give you an example:

### simulate some data with AR(1) where rho = .75
xi <- 1:50
yi <- arima.sim(model=list(ar=.75), n=50)

### get residuals
res <- resid(lm(yi ~ xi))

### acf for lags 1 and 2
cor(res[1:49], res[2:50]) ### not quite how this is calculated by R
cor(res[1:48], res[3:50]) ### not quite how this is calculated by R

### how R calculates these
acf(res, lag.max=2, plot=F)

### note: mean(res) = 0 for this example, so technically not needed here
c0 <- 1/50 * sum( (res[1:50] - mean(res)) * (res[1:50] - mean(res)) )
c1 <- 1/50 * sum( (res[1:49] - mean(res)) * (res[2:50] - mean(res)) )
c2 <- 1/50 * sum( (res[1:48] - mean(res)) * (res[3:50] - mean(res)) )
c1/c0
c2/c0

And so on (e.g., res[1:47] and res[4:50] for lag 3).
16,236
Formula for autocorrelation in R vs. Excel
The naive way to calculate the auto correlation (and possibly what Excel uses) is to create 2 copies of the vector then remove the 1st n elements from the first copy and the last n elements from the second copy (where n is the lag that you are computing from). Then pass those 2 vectors to the function to calculate the correlation. This method is OK and will give a reasonable answer, but it ignores the fact that the 2 vectors being compared are really measures of the same thing. The improved version (as shown by Wolfgang) is a similar function to the regular correlation, except that it uses the entire vector for computing the mean and variance.
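The two calculations contrasted here can be written out side by side; this stdlib-Python sketch (the function names are mine) mirrors the R example above — the naive version standardises each shifted copy by its own mean and variance, while the standard (R-style) version uses one overall mean and the full-length sum of squares in the denominator:

```python
from statistics import mean

def acf_naive(x, lag):
    """'Naive' lag-k autocorrelation: plain correlation of the two
    shifted copies, each centred and scaled on its own."""
    a, b = x[:-lag], x[lag:]
    ma, mb = mean(a), mean(b)
    num = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    den = (sum((ai - ma) ** 2 for ai in a)
           * sum((bi - mb) ** 2 for bi in b)) ** 0.5
    return num / den

def acf_standard(x, lag):
    """R-style acf(): one overall mean, full-length denominator (c_k / c_0)."""
    m, n = mean(x), len(x)
    c0 = sum((xi - m) ** 2 for xi in x) / n
    ck = sum((x[i] - m) * (x[i + lag] - m) for i in range(n - lag)) / n
    return ck / c0

x = [1.0, 2.0, 1.5, 3.0, 2.5, 3.5, 3.0, 4.0, 3.5, 5.0]
print(acf_naive(x, 1), acf_standard(x, 1))  # close, but not identical
```

On short series like this toy one the discrepancy is visible; it shrinks as the series length grows relative to the lag.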
16,237
When would it be appropriate to report variance instead of standard deviation?
If you report the mean, then it is more appropriate to report the standard deviation, as it is expressed in the same units. Think about dimensional homogeneity in physics. Moreover, it is easier for the reader to consider confidence intervals (for large n, in order to use the Central Limit Theorem and consider a normal distribution) if the standard deviation is provided rather than the variance. However, you may consider reporting the variance if you are interested in comparing variance and bias, or in giving "different variance components", since the total variance is the sum of the intra and inter variances, while the standard deviations do not sum up.
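The last point — variances of independent components sum, standard deviations do not — can be checked with a quick simulation (stdlib Python; the sample sizes and SDs are illustrative):

```python
import random
import statistics

# Two independent sources of variation: variances add, SDs don't.
random.seed(0)
a = [random.gauss(0, 3) for _ in range(100_000)]  # sd 3 -> variance 9
b = [random.gauss(0, 4) for _ in range(100_000)]  # sd 4 -> variance 16
total = [ai + bi for ai, bi in zip(a, b)]

print(round(statistics.pvariance(total), 1))  # close to 9 + 16 = 25
print(round(statistics.pstdev(total), 2))     # close to 5, not 3 + 4 = 7
```

This is exactly why variance components (intra plus inter) are reported on the variance scale.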
16,238
When would it be appropriate to report variance instead of standard deviation?
This is similar (but not equivalent). Nonetheless, standard deviation is expressed in the same units as the variable whereas the units of the variance are those of the variable to the power two. This makes standard deviation easier to interpret.
16,239
When would it be appropriate to report variance instead of standard deviation?
Variance weights outliers more heavily than data very near the mean due to the square. A higher variance helps you spot that more easily. Also, mathematically/theoretically speaking, dealing with variance is easier. And if you are dealing with more than one dataset you can add two independent variances (or more) to get the total variance due to those factors. But, adding one standard deviation to another gives you a meaningless number (if measure units are different).
16,240
What is the distribution of OR (odds ratio)?
The log odds ratio has an asymptotic normal distribution: $\log(\hat{OR}) \sim N(\log(OR), \sigma_{\log(OR)}^2)$ with $\sigma$ estimated from the contingency table. See, for example, page 6 of the notes: Asymptotic Theory for Parametric Models
16,241
What is the distribution of OR (odds ratio)?
The estimators $\widehat{OR}$ have an asymptotic normal distribution around $OR$. Unless $n$ is quite large, however, their distributions are highly skewed. When $OR=1$, for instance, $\widehat{OR}$ cannot be much smaller than $OR$ (since $\widehat{OR}\ge0$), but it could be much larger with non-negligible probability. The log transform, having an additive rather than multiplicative structure, converges more rapidly to normality. An estimated variance is: $$ \text{Var}[\ln\widehat{OR}]=\left(\frac{1}{n_{11}}\right)+\left(\frac{1}{n_{12}}\right)+\left(\frac{1}{n_{21}}\right)+\left(\frac{1}{n_{22}}\right). $$ The confidence interval for $\ln OR$: $$ \ln(\hat{OR})\pm z_{\frac{\alpha}{2}}\sigma_{\ln(OR)} $$ Exponentiating (taking antilogs of) its endpoints provides a confidence interval for $OR$. Agresti, Alan. Categorical Data Analysis, page 70.
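As a worked sketch of the formulas above (the 2x2 counts are invented; this follows the approach described, not code from Agresti):

```python
import math

# Invented 2x2 contingency table:
#               outcome+  outcome-
#   exposed        n11       n12
#   unexposed      n21       n22
n11, n12, n21, n22 = 30, 70, 15, 85

or_hat = (n11 * n22) / (n12 * n21)
log_or = math.log(or_hat)

# Estimated standard deviation of ln(OR), per the variance formula above:
se = math.sqrt(1 / n11 + 1 / n12 + 1 / n21 + 1 / n22)

# 95% CI on the log scale, then exponentiate the endpoints:
z = 1.96
ci_lo = math.exp(log_or - z * se)
ci_hi = math.exp(log_or + z * se)
```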
16,242
What is the distribution of OR (odds ratio)?
Generally, with a large sample size it is assumed, as a reasonable approximation, that all estimators (or some opportune functions of them) have a normal distribution. So, if you only need the p-value corresponding to the given confidence interval, you can simply proceed as follows: transform $OR$ and the corresponding $(c1,c2)$ CI to $\ln(OR)$ and $(\ln(c1),\ln(c2))$ [the $OR$ domain is $(0,+\infty)$ while the $\ln(OR)$ domain is $(-\infty,+\infty)$]; since the length of every CI depends on its level $\alpha$ and on the estimator's standard deviation, calculate $$ sd(OR)=\frac{\ln(c2)-\ln(c1)}{z_{\alpha/2}*2} $$ $[\text{Pr}(Z>z_{\alpha/2})=\alpha/2; z_{0.05/2}=1.96]$; then calculate the p-value corresponding to the (standardized normal) test statistic $z=\frac{\ln(OR)}{sd(OR)}$
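These steps can be sketched as follows (the OR and CI values are hypothetical; `NormalDist` from the Python standard library supplies the normal CDF):

```python
import math
from statistics import NormalDist

# Hypothetical reported result: OR = 1.8 with 95% CI (1.1, 2.9).
or_hat, c1, c2 = 1.8, 1.1, 2.9

# Step 1: transform to the log scale.
log_or = math.log(or_hat)

# Step 2: recover sd(ln OR) from the CI width (z_{0.05/2} = 1.96).
sd = (math.log(c2) - math.log(c1)) / (1.96 * 2)

# Step 3: standardized normal test statistic and two-sided p-value
# for H0: OR = 1, i.e. ln(OR) = 0.
z = log_or / sd
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
```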
16,243
What is the distribution of OR (odds ratio)?
Since the odds ratio cannot be negative, it is restricted at the lower end, but not at the upper end, and so has a skew distribution.
16,244
Why do we need alternative hypothesis?
There was, historically, disagreement about whether an alternative hypothesis was necessary. Let me explain this point of disagreement by considering the opinions of Fisher and Neyman, within the context of frequentist statistics, and a Bayesian answer. Fisher - We do not need an alternative hypothesis; we can simply test a null hypothesis using a goodness-of-fit test. The outcome is a $p$-value, providing a measure of evidence against the null hypothesis. Neyman - We must perform a hypothesis test between a null and an alternative. The test is such that it would result in type-1 errors at a fixed, pre-specified rate, $\alpha$. The outcome is a decision - to reject or not reject the null hypothesis at the level $\alpha$. We need an alternative from a decision-theoretic perspective - we are making a choice between two courses of action - and because we should report the power of the test $$ 1 - p\left(\textrm{Accept $H_0$} \, \middle|\, H_1\right) $$ We should seek the most powerful tests possible to have the best chance of rejecting $H_0$ when the alternative is true. To satisfy both these points, the alternative hypothesis cannot be the vague 'not $H_0$' one. Bayesian - We must consider at least two models and update their relative plausibility with data. With only a single model, we simply have $$ p(H_0) = 1 $$ no matter what data we collect. To make calculations in this framework, the alternative hypothesis (or model, as it would be known in this context) cannot be the ill-defined 'not $H_0$' one. I call it ill-defined since we cannot write the model $p(\text{data}|\text{not }H_0)$.
16,245
Why do we need alternative hypothesis?
I will focus on "If we do not talk about accepting alternative hypothesis, why do we need to have alternative hypothesis at all?" Because it helps us to choose a meaningful test statistic and design our study to have high power---a high chance of rejecting the null when the alternative is true. Without an alternative, we have no concept of power. Imagine we only have a null hypothesis and no alternative. Then there's no guidance on how to choose a test statistic that will have high power. All we can say is, "Reject the null whenever you observe a test statistic whose value is unlikely under the null." We can pick something arbitrary: we could draw Uniform(0,1) random numbers and reject the null when they are below 0.05. This happens under the null "rarely," no more than 5% of the time---yet it's also just as rare when the null is false. So this is technically a statistical test, but it's meaningless as evidence for or against anything. Instead, usually we have some scientifically-plausible alternative hypothesis ("There is a positive difference in outcomes between the treatment and control groups in my experiment"). We'd like to defend it against potential critics who would bring up the null hypothesis as devil's advocates ("I'm not convinced yet---maybe your treatment actually hurts, or has no effect at all, and any apparent difference in the data is due only to sampling variation"). With these 2 hypotheses in mind, now we can set up a powerful test, by choosing a test statistic whose typical values under the alternative are unlikely under the null. (A positive 2-sample t-statistic far from 0 would be unsurprising if the alternative is true, but surprising if the null is true.) Then we figure out the test statistic's sampling distribution under the null, so we can calculate p-values---and interpret them. When we observe a test statistic that's unlikely under the null, especially if the study design, sample size, etc. 
were chosen to have high power, this provides some evidence for the alternative. So, why don't we talk about "accepting" the alternative hypothesis? Because even a high-powered study doesn't provide completely rigorous proof that the null is wrong. It's still a kind of evidence, but weaker than some other kinds of evidence.
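A small simulation sketch of this point (the data-generating settings, sample sizes, and thresholds are my own, purely for illustration): the "arbitrary" uniform test rejects about 5% of the time whether or not the null is true, while a test statistic chosen with the alternative in mind gains real power.

```python
import random
from statistics import mean, stdev

random.seed(0)

def two_sample_t(x, y):
    # Welch-style two-sample t statistic (for illustration only).
    se = (stdev(x) ** 2 / len(x) + stdev(y) ** 2 / len(y)) ** 0.5
    return (mean(y) - mean(x)) / se

def rejection_rates(effect, reps=2000, n=30):
    arbitrary = sensible = 0
    for _ in range(reps):
        x = [random.gauss(0.0, 1.0) for _ in range(n)]
        y = [random.gauss(effect, 1.0) for _ in range(n)]
        # "Arbitrary" test: reject whenever an unrelated uniform draw is < 0.05.
        if random.random() < 0.05:
            arbitrary += 1
        # Test built against the alternative "treatment mean is higher":
        # reject when the t statistic is large (approximately a 5% level).
        if two_sample_t(x, y) > 1.645:
            sensible += 1
    return arbitrary / reps, sensible / reps

arb_null, t_null = rejection_rates(effect=0.0)  # both sit near 0.05
arb_alt, t_alt = rejection_rates(effect=1.0)    # arbitrary stays ~0.05; the t test gains power
```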
16,246
Why do we need alternative hypothesis?
I am not 100% sure if this is a formal requirement, but typically the null hypothesis and alternative hypothesis are: 1) complementary and 2) exhaustive. That is: 1) they cannot both be true at the same time; 2) if one is not true, the other must be true. Consider a simple test of heights between girls and boys. A typical null hypothesis in this case is that $height_{boys} = height_{girls}$. An alternative hypothesis would be $height_{boys} \ne height_{girls}$. So if the null is not true, the alternative must be true.
16,247
Why do we need alternative hypothesis?
Why do we need to have alternative hypothesis at all? In a classical hypothesis test, the only mathematical role played by the alternative hypothesis is that it affects the ordering of the evidence through the chosen test statistic. The alternative hypothesis is used to determine the appropriate test statistic for the test, which is equivalent to setting an ordinal ranking of all possible data outcomes from those most conducive to the null hypothesis (against the stated alternative) to those least conducive to the null hypothesis (against the stated alternative). Once you have formed this ordinal ranking of the possible data outcomes, the alternative hypothesis plays no further mathematical role in the test. You can find a related answer on this question here which gives a schematic diagram of the classical hypothesis test and how the alternative hypothesis enters into the test. This is a useful supplement to the present answer. Formal explanation: In any classical hypothesis test with $n$ observable data values $\mathbf{x} = (x_1,...,x_n)$ you have some test statistic $T: \mathbb{R}^n \rightarrow \mathbb{R}$ that maps every possible outcome of the data onto an ordinal scale that measures whether it is more conducive to the null or alternative hypothesis. (Without loss of generality we will assume that lower values are more conducive to the null hypothesis and higher values are more conducive to the alternative hypothesis. We sometimes say that higher values of the test statistic are "more extreme" insofar as they constitute more extreme evidence for the alternative hypothesis.) The p-value of the test is then given by: $$p(\mathbf{x}) \equiv p_T(\mathbf{x}) \equiv \mathbb{P}( T(\mathbf{X}) \geqslant T(\mathbf{x}) | H_0).$$ This p-value function fully determines the evidence in the test for any data vector. When combined with a chosen significance level, it determines the outcome of the test for any data vector. 
(We have described this for a fixed number of data points $n$ but this can easily be extended to allow for arbitrary $n$.) It is important to note that the p-value is affected by the test statistic only through the ordinal scale it induces, so if you apply a monotonically increasing transformation to the test statistics, this makes no difference to the hypothesis test (i.e., it is the same test). This mathematical property merely reflects the fact that the sole purpose of the test statistic is to induce an ordinal scale on the space of all possible data vectors, to show which are more conducive to the null/alternative. The alternative hypothesis affects this measurement only through the function $T$, which is chosen based on the stated null and alternative hypotheses within the overall model. Hence, we can regard the test statistic function as being a function $T \equiv g (\mathcal{M}, H_0, H_A)$ of the overall model $\mathcal{M}$ and the two hypotheses. For example, for a likelihood-ratio-test the test statistic is formed by taking a ratio (or logarithm of a ratio) of supremums of the likelihood function over parameter ranges relating to the null and alternative hypotheses. What does this mean if we compare tests with different alternatives? Suppose you have a fixed model $\mathcal{M}$ and you want to do two different hypothesis tests comparing the same null hypothesis $H_0$ against two different alternatives $H_A$ and $H_A'$. 
In this case you will have two different test statistic functions: $$T = g (\mathcal{M}, H_0, H_A) \quad \quad \quad \quad \quad T' = g (\mathcal{M}, H_0, H_A'),$$ leading to the corresponding p-value functions: $$p(\mathbf{x}) = \mathbb{P}( T(\mathbf{X}) \geqslant T(\mathbf{x}) | H_0) \quad \quad \quad \quad \quad p'(\mathbf{x}) = \mathbb{P}( T'(\mathbf{X}) \geqslant T'(\mathbf{x}) | H_0).$$ It is important to note that if $T$ and $T'$ are monotonic increasing transformations of one another then the p-value functions $p$ and $p'$ are identical, so both tests are the same test. If the functions $T$ and $T'$ are not monotonic increasing transformations of one another then we have two genuinely different hypothesis tests.
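A small sketch of the invariance claim, using an exact permutation test on an invented toy dataset: the difference-in-means statistic $T$ and its monotone increasing transform $e^T$ induce the same ordering of outcomes, so they yield identical p-values and are the same test.

```python
import itertools
import math

# Tiny invented two-sample dataset.
x = [1.2, 0.4, 0.7]
y = [2.1, 1.9, 2.5]
pooled = x + y
n = len(x)

def perm_pvalue(stat):
    # Exact permutation p-value: the fraction of group assignments whose
    # statistic is at least as extreme as the observed one.
    observed = stat(x, y)
    count = total = 0
    for idx in itertools.combinations(range(len(pooled)), n):
        gx = [pooled[i] for i in idx]
        gy = [pooled[i] for i in range(len(pooled)) if i not in idx]
        total += 1
        if stat(gx, gy) >= observed:
            count += 1
    return count / total

def T(a, b):
    return sum(b) / len(b) - sum(a) / len(a)  # difference in means

def T_mono(a, b):
    return math.exp(T(a, b))                  # monotone increasing transform of T

# Same ordinal ranking of outcomes, hence the same p-value and the same test:
p1, p2 = perm_pvalue(T), perm_pvalue(T_mono)
```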
16,248
Why do we need alternative hypothesis?
The reason I wouldn't think of accepting the alternative hypothesis is because that's not what we are testing. Null hypothesis significance testing (NHST) calculates the probability of observing data as extreme as observed (or more) given that the null hypothesis is true, or in other words NHST calculates a probability value that is conditioned on the fact that the null hypothesis is true, $P(data|H_0)$. So it is the probability of the data assuming that the null hypothesis is true. It never uses or gives the probability of a hypothesis (neither null nor alternative). Therefore when you observe a small p-value, all you know is that the data you observed appears to be unlikely under $H_0$, so you are collecting evidence against the null and in favour of whatever your alternative explanation is. Before you run the experiment, you can decide on a cut-off level ($\alpha$) that deems your result significant, meaning if your p-value falls below that level, you conclude that the evidence against the null is so overwhelmingly high that the data must have originated from some other data generating process and you reject the null hypothesis based on that evidence. If the p-value is above that level you fail to reject the null hypothesis since your evidence is not substantial enough to believe that your sample came from a different data generating process. The reason why you formulate an alternative hypothesis is because you likely had an experiment in mind before you started sampling. Formulating an alternative hypothesis can also decide whether you use a one-tailed or two-tailed test, and hence give you more statistical power (in the one-tailed scenario). But technically in order to run the test you don't need to formulate an alternative hypothesis, you just need data.
16,249
The difference between with or without intercept model in logistic regression
It will almost never be meaningful to use the no-intercept model in logistic regression. The intercept parameter $\beta_0$ is modelling the marginal distribution of the response $Y$, so using $\beta_0=0$ is tantamount to assuming that $P(Y=1)=0.5$, marginally. Do you really know that? If that is untrue, you cannot trust any inference from the no-intercept model.
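A minimal sketch of why dropping the intercept is restrictive (the coefficient values are invented): with $\beta_0$ removed, the fitted probability at $x=0$ is pinned at $0.5$ no matter what the data say, whereas an intercept lets the baseline rate match the data.

```python
import math

def sigmoid(t):
    return 1 / (1 + math.exp(-t))

# Hypothetical coefficients, purely for illustration.
beta1 = 2.3

# Without an intercept, the linear predictor at x = 0 is 0 for any slope,
# so the fitted probability there is always sigmoid(0) = 0.5:
p_no_intercept_at_zero = sigmoid(0 * beta1)

# With an intercept, the baseline rate is free to match the data,
# e.g. a fairly rare outcome:
beta0 = -2.0
p_with_intercept_at_zero = sigmoid(beta0)  # about 0.12
```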
16,250
Choosing the hyperparameters using T-SNE for classification
I routinely use $t$-SNE (alongside clustering techniques - more on this in the end) to recognise/assess the presence of clusters in my data. Unfortunately to my knowledge there is no standard way to choose the correct perplexity aside from looking at the produced reduced dimension dataset and then assessing if it is meaningful. There are some general facts, e.g. distances between clusters are mostly meaningless, small perplexity values encourage small clot-like structures, but that's about it. A very rough rule-of-thumb is to check the error value associated with each reconstruction. $t$-SNE is trying to minimise the sum of the Kullback-Leibler divergences between the distribution of the distances between the data in the original domain and the distribution of distances between the data in the reduced dimension domain (actually the target distributions are the distributions of the probabilities that a point will pick another point as its neighbour but these are directly proportional to the distance between the two points). It could be argued that smaller values of KL-divergence show better results. This idea does not work very well in practice but it would theoretically help to exclude some ranges of the perplexity values as well as some runs of the algorithm that are clearly suboptimal. I explain why this heuristic is far from a panacea and how it could nonetheless be mildly useful: The perplexity parameter increases monotonically with the variance of the Gaussian used to calculate the distances/probabilities. Therefore as you increase the perplexity parameter as a whole you will get smaller distances in absolute terms and, subsequently, smaller KL-divergence values. Nevertheless if you have 20 runs with the same perplexity and you cannot (do not want to) look at them you can always pick the one with the smallest value, hoping it retains the original distances more accurately. 
The same goes for the $\theta$, the approximation parameter for the Barnes-Hut approximation, assuming perplexity is fixed changing $\theta$ and then checking the resulting costs should be somewhat informative. In the end of the day, lower costs are associated with more faithful reconstructions. All is not lost though... For your particular use case, a trick to mildly automate the procedure of picking a good perplexity value is the following: Run a small clustering procedure (say a $k$-means or DBSCAN) on the reduced dimensionality dataset and then assess the quality of that clustering using some sort of index (Cohen's $k$, Rand index, Fowlkes-Mallows, etc.) against what you try to predict. The idea here is that for your task at hand the correct representation of the data (the perplexity dependant $t$-SNE results) should give the most informative representation (in the form of one of the metrics mentioned) in terms of their alignment with the property you try to predict. This is why $t$-SNE was used in the first place after all, if the resulting representation is uninformative for the properties we are investigating then it is simply no good despite its low reconstruction error, visual appeal, etc. etc. Let me point out that what I describe are heuristics. As mentioned in the beginning of my post, manually inspecting the results is an indispensable way of assessing the quality of the resulting dimensionality reduction/clustering.
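The clustering-based selection trick described above can be sketched as a simple loop. This is an illustrative sketch assuming scikit-learn is available; the iris dataset stands in for your own data and labels, the perplexity grid is arbitrary, and the adjusted Rand index is used as the agreement metric:

```python
from sklearn.datasets import load_iris
from sklearn.manifold import TSNE
from sklearn.cluster import KMeans
from sklearn.metrics import adjusted_rand_score

X, y = load_iris(return_X_y=True)  # stand-in for your data / prediction target

scores = {}
for perplexity in (5, 15, 30):
    # embed at this perplexity
    emb = TSNE(n_components=2, perplexity=perplexity,
               random_state=0).fit_transform(X)
    # cluster the embedding and score it against the known labels
    labels = KMeans(n_clusters=3, n_init=10,
                    random_state=0).fit_predict(emb)
    scores[perplexity] = adjusted_rand_score(y, labels)

best = max(scores, key=scores.get)
print(scores, "-> best perplexity:", best)
```

The same loop works with any external index (Fowlkes-Mallows, Cohen's kappa on matched labels, etc.); manual inspection of the winning embedding is still advisable.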
16,251
Choosing the hyperparameters using T-SNE for classification
We usually set the perplexity to 5% of the dataset size. So for a dataset with 100K rows I would start with a perplexity of 5000, or at least 1000 if you don't have a high-performance computer available. Our data sets are from flow cytometry analysis; they usually have 50k to 500k data points, each with 10 to 20 numerical values.
16,252
Choosing the hyperparameters using T-SNE for classification
It could be interesting for you to have a look at "Automatic Selection of t-SNE Perplexity" by Cao and Wang: t-Distributed Stochastic Neighbor Embedding (t-SNE) is one of the most widely used dimensionality reduction methods for data visualization, but it has a perplexity hyperparameter that requires manual selection. In practice, proper tuning of t-SNE perplexity requires users to understand the inner working of the method as well as to have hands-on experience. We propose a model selection objective for t-SNE perplexity that requires negligible extra computation beyond that of the t-SNE itself. We empirically validate that the perplexity settings found by our approach are consistent with preferences elicited from human experts across a number of datasets. The similarities of our approach to Bayesian information criteria (BIC) and minimum description length (MDL) are also analyzed.
16,253
Choosing the hyperparameters using T-SNE for classification
I found a very comprehensible article by Nikolay Oskolkov, a bioinformatician and Medium writer, explaining some really insightful heuristics on how to choose tSNE's hyperparameters: How to tune hyperparameters of tSNE (by Nikolay Oskolkov, from Jul 19, 2019). I hope you will find it useful too! Here is the summary of the article for your reference: In this post we have learnt that even though tSNE can be sensitive with respect to its hyperparameters, there are simple rules for obtaining good-looking tSNE plots for scRNAseq data. The optimal number of PCs for inputting into tSNE can be found through randomization of the expression matrix. The optimal perplexity can be calculated from the number of cells according to the simple power law Perplexity ~ N^(1/2). Finally, the optimal number of iterations should provide the largest distance between the data points of ~100 units. However, this article only gives some rough heuristics for choosing a given set of hyperparameter values in a general setting. With regard to navigating the hyperparameter space while performing tSNE and choosing the best values for a particular dataset like your own, I agree with the aforementioned suggestion of using an intermediary procedure like k-means to judge the quality of the clustering performed with respect to your classification targets.
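The two rules of thumb mentioned in this thread — the power law Perplexity ~ N^(1/2) from the article above, and the "5% of N" rule from another answer — diverge quickly as the dataset grows. A quick illustrative comparison (the dataset sizes are arbitrary):

```python
import math

# Compare two perplexity rules of thumb for several dataset sizes:
# the power law Perplexity ~ N^(1/2) versus "5% of N".
for n in (1_000, 10_000, 100_000):
    sqrt_rule = round(math.sqrt(n))
    pct_rule = round(0.05 * n)
    print(f"N={n:>7}: sqrt rule -> {sqrt_rule:>4}, 5% rule -> {pct_rule:>5}")
```

For 100K rows the sqrt rule suggests ~316 while the 5% rule suggests 5000, so which heuristic you start from matters; either way, validating the result (visually or via a clustering index) remains necessary.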
16,254
Expected value vs. most probable value (mode)
For a Normal distribution, the expected value, a.k.a. the mean, equals the mode. In general, not only is the expected value not the most likely value (or the one at the highest density), it may have no chance of occurring at all. For instance, consider the random variable $X$ which equals 0 or 2, each with probability 0.5. Then $E(X) = 1$, but the expected value, 1, has probability 0 of occurring, while 0 and 2 are both modes of the distribution. The quote "the expected value of x is what one expects to happen on average" is non-technical layman's language, which, as evidenced by your confusion, only serves to confuse matters. Expected value has a very specific meaning in probability: it is the mathematical average. Whereas in layman's language, an expected value or "on average" may be something expected to typically occur. These can be reconciled if "on average" is interpreted as the mathematical average of what occurs. Expectantly yours, Joe Average
16,255
Expected value vs. most probable value (mode)
The expected value is a priori very abstract and there is no reason to think that it's the most probable outcome; as others have pointed out, it's easy to construct random variables for which $$P( X = E(X) ) = 0$$ (and the same with the density if $X$ is continuous). The only justification for the expected value, and the reason why we "expect to see it often", is the Law of Large Numbers: if you have $n$ independent identically distributed variables $X_i$, then $$\frac {X_1 + \dots + X_n}{n} \to E(X)$$ (for a suitable meaning of $\to$ which is pointless to investigate at the moment). What does it mean? Imagine that you throw a coin with probability $p> \frac 12$ of landing heads, which we will associate with the number $1$, and probability $1-p$ of landing tails (that is, $0$). What is the most probable outcome? $1$! (that is, heads) What is the expected value? $$E(X) = 1\cdot p + 0\cdot(1-p) = p$$ Now clearly $p$ will never happen (it's either heads or tails, either 0 or 1). But flip the coin 10,000 times, and record the number of times it came up heads over the total number of throws. This number captures what we intuitively think of as the average ("average number of heads"). And the law of large numbers tells you that this number will be close to $E(X) = p$
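The coin example can be simulated directly. A minimal sketch (the bias $p = 0.7$ and the number of flips are arbitrary illustrative choices):

```python
import random

random.seed(0)
p = 0.7           # probability of heads; an arbitrary value > 1/2
n = 100_000
flips = [1 if random.random() < p else 0 for _ in range(n)]

# Most probable single outcome: heads (1).
mode = max(set(flips), key=flips.count)

# Sample average: never an individual outcome, but close to E(X) = p.
mean = sum(flips) / n

print(f"mode = {mode}, sample mean = {mean:.3f}, E(X) = {p}")
```

The mode is 1 (heads), while the running average converges to $p$, a value no single flip can ever take.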
16,256
Expected value vs. most probable value (mode)
I don't like the term "expected value" and didn't use it when teaching probability. "Arithmetic mean" is better, in my opinion, because the arithmetic mean of a 6-sided die is 3.5 but such a number doesn't occur. I did originally hear the term "expectation value" for the concept when in college. Lots of technical terms do not agree with the obvious non-technical meaning. ("Or" comes to mind.) Note that a distribution may have more than one mode but the arithmetic mean is unique. Mode, mean, and median are different and have different uses.
16,257
Expected value vs. most probable value (mode)
The difference is easiest to see with discrete distributions: consider two sets of values where each number is equally likely to be drawn: {1,2,2,2,10} and {1,2,2,2,3}. Both have the same mode (2), but the expected values differ. The expected value puts extra weight on large values, while the mode simply looks for the value that occurs most frequently. So if you drew from this distribution a bunch of times, your sample average would be close to the expected value, while the most common value to occur would be close to the mode. The mode is defined as $\mathrm{mode} = \arg\max_x f(x)$, while, as you showed above, the expected value integrates $x \, f(x)$, so it considers the weight of each $x$. The use of language to distinguish between different measures of central tendency is a common issue when learning statistics. For example, the median is another measure that isn't skewed by large values the way the average is.
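The two sets above can be checked directly with the standard library:

```python
from statistics import mean, mode

a = [1, 2, 2, 2, 10]   # each value equally likely
b = [1, 2, 2, 2, 3]

# Same mode in both sets, but the outlier 10 pulls the expected value up.
print(mode(a), mode(b))   # 2 2
print(mean(a), mean(b))   # 3.4 2
```

Swapping a single 3 for a 10 leaves the mode untouched but moves the mean from 2 to 3.4, which is exactly the extra weight on large values described above.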
16,258
" all of these data points come from the same distribution." How to test?
Imagine two scenarios: 1. the data points were all drawn from the same distribution -- one that was uniform on (16,36); 2. the data points were drawn from a 50-50 mix of two populations: population A with one density shape and population B with a complementary shape (both were shown as figures in the original answer), such that the mixture of the two looks exactly like the uniform case in 1. How could they be told apart? Whatever shapes you choose for the two populations, there's always going to be a single-population distribution that has the same shape. This argument clearly demonstrates that for the general case you simply can't do it. There's no possible way to differentiate. If you introduce information about the populations (assumptions, effectively) then there may often be ways to proceed*, but the general case is dead. * e.g. if you assume that the populations are unimodal and have sufficiently different means you can get somewhere [The restrictions that were added to the question are not sufficient to avoid a different version of the kind of problem I describe above -- we can still write a unimodal null on the positive half-line as a 50-50 mixture of two unimodal distributions on the positive half-line. Of course if you have a more specific null, this becomes much less of an issue. Alternatively it should still be possible to restrict the class of alternatives further until we are in a position to test against some mixture alternative. Or some additional restrictions might be applied to both null and alternative that would make them distinguishable.]
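The argument can be made concrete with one explicit construction (the two linear densities below are my own illustrative choice, not the figures from the original answer): on $(16, 36)$, let population A have the decreasing density $f_A(x) = (36 - x)/200$ and population B the increasing density $f_B(x) = (x - 16)/200$; their 50-50 mixture is exactly the uniform density $1/20$.

```python
def f_a(x):
    # decreasing triangular density on (16, 36); integrates to 1
    return (36 - x) / 200

def f_b(x):
    # increasing triangular density on (16, 36); integrates to 1
    return (x - 16) / 200

def mixture(x):
    return 0.5 * f_a(x) + 0.5 * f_b(x)

# The mixture collapses to the uniform density 1/20 at every point,
# so no sample can distinguish the two-population story from the
# one-population one.
for x in (16.5, 20.0, 26.0, 31.0, 35.5):
    assert abs(mixture(x) - 1 / 20) < 1e-12
print("mixture density equals 1/20 at every point checked")
```

Since the mixture and the single uniform distribution assign identical probabilities to every event, no test based on the data alone can separate them.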
16,259
" all of these data points come from the same distribution." How to test?
You obviously need some theory to talk about distribution(s) and state hypotheses to test: something that groups subjects into one or more groups, and some measurement that could set them apart. How can you get there? I see three options: If you already know this from your subject matter, you just need to translate it into the language of statistical hypotheses. Plot the data and look for patterns that can become hypotheses to test. Come up with a list of distributions you could fit and run a mathematical experiment; probabilistic programming is the keyword here. The exercise would then let you conclude whether one group, several groups, or no coherent group at all is represented in your sample.
16,260
Using BIC to estimate the number of k in KMEANS
It seems you have a few errors in your formulas, as determined by comparing to:
https://www.cs.cmu.edu/~dpelleg/download/xmeans.pdf (there are some errors in the paper)
https://github.com/bobhancock/goxmeans/blob/master/km.go
https://github.com/mynameisfiber/pyxmeans/blob/master/pyxmeans/xmeans.py
https://github.com/bobhancock/goxmeans/blob/master/doc/BIC_notes.pdf

1. The log-likelihood term

np.sum([n[i] * np.log(n[i]) -
        n[i] * np.log(N) -
        ((n[i] * d) / 2) * np.log(2*np.pi) -
        (n[i] / 2) * np.log(cl_var[i]) -
        ((n[i] - m) / 2) for i in range(m)]) - const_term

has three errors inherited from the paper: the fourth and fifth lines are missing a factor of d, and the last line should have 1 in place of m. It should be:

np.sum([n[i] * np.log(n[i]) -
        n[i] * np.log(N) -
        ((n[i] * d) / 2) * np.log(2*np.pi*cl_var) -
        ((n[i] - 1) * d / 2) for i in range(m)]) - const_term

2. The const_term:

const_term = 0.5 * m * np.log(N)

should be:

const_term = 0.5 * m * np.log(N) * (d+1)

3. The variance formula:

cl_var = [(1.0 / (n[i] - m)) * sum(distance.cdist(p[np.where(label_ == i)], [centers[0][i]], 'euclidean')**2) for i in range(m)]

should be a scalar:

cl_var = (1.0 / (N - m) / d) * sum([sum(distance.cdist(p[np.where(labels == i)], [centers[0][i]], 'euclidean')**2) for i in range(m)])

4. Use natural logs instead of your base-10 logs.

5. Finally, and most importantly, the BIC you are computing has the sign inverted relative to the regular definition, so you are looking to maximize instead of minimize.
16,261
Using BIC to estimate the number of k in KMEANS
This is basically eyaler's solution, with a few notes.. I just typed it out if someone wanted a quick copy/paste. Notes: eyaler's 4th comment is incorrect — np.log is already a natural log, no change needed; eyaler's 5th comment about the inverted sign is correct — in the code below you are looking for the MAXIMUM, keeping in mind that the example produces negative BIC numbers. Code is as follows (again, all credit to eyaler):

from sklearn import cluster
from scipy.spatial import distance
import sklearn.datasets
from sklearn.preprocessing import StandardScaler
import numpy as np

def compute_bic(kmeans, X):
    """
    Computes the BIC metric for a given clustering

    Parameters:
    -----------------------------------------
    kmeans:  fitted clustering object from scikit-learn
    X     :  multidimensional np array of data points

    Returns:
    -----------------------------------------
    BIC value
    """
    # assign centers and labels
    centers = [kmeans.cluster_centers_]
    labels = kmeans.labels_
    # number of clusters
    m = kmeans.n_clusters
    # size of the clusters
    n = np.bincount(labels)
    # size of the data set
    N, d = X.shape

    # compute the pooled variance over all clusters beforehand
    cl_var = (1.0 / (N - m) / d) * sum([sum(distance.cdist(X[np.where(labels == i)], [centers[0][i]], 'euclidean')**2) for i in range(m)])

    const_term = 0.5 * m * np.log(N) * (d + 1)

    BIC = np.sum([n[i] * np.log(n[i]) -
                  n[i] * np.log(N) -
                  ((n[i] * d) / 2) * np.log(2 * np.pi * cl_var) -
                  ((n[i] - 1) * d / 2) for i in range(m)]) - const_term

    return BIC

# IRIS DATA
iris = sklearn.datasets.load_iris()
X = iris.data[:, :4]  # extract only the features
# Xs = StandardScaler().fit_transform(X)
Y = iris.target

ks = range(1, 10)

# run k-means 9 times and save each result in the KMeans list
KMeans = [cluster.KMeans(n_clusters=i, init="k-means++").fit(X) for i in ks]

# now run the BIC computation for each clustering
BIC = [compute_bic(kmeansi, X) for kmeansi in KMeans]
print(BIC)
16,262
Using BIC to estimate the number of k in KMEANS
In my environment, Prabhath's answer does not work because the np.where() clause cannot be used to select which records of X should be referred to when calculating cl_var (this happens when X is a pandas DataFrame). Let me fix that error and re-post the code, as I don't have enough reputation to even add a comment.

# Almost all credit to eyaler and Prabhath
from sklearn import cluster
from scipy.spatial import distance
import sklearn.datasets
from sklearn.preprocessing import StandardScaler
import numpy as np
import pandas as pd

def compute_bic(kmeans, X):
    """
    Computes the BIC metric for a given clustering.

    Parameters:
    -----------------------------------------
    kmeans: fitted KMeans clustering object from scikit-learn
    X: DataFrame of data points

    Returns:
    -----------------------------------------
    BIC value
    """
    # assign centers and labels
    centers = [kmeans.cluster_centers_]
    labels = kmeans.labels_
    # number of clusters
    m = kmeans.n_clusters
    # size of the clusters
    n = np.bincount(labels)
    # size of data set
    N, d = X.shape

    # compute variance for all clusters beforehand
    # fixed: use .iloc with a boolean mask so this works when X is a DataFrame
    cl_var = (1.0 / (N - m) / d) * sum(
        [sum(distance.cdist(X.iloc[labels == i], [centers[0][i]], 'euclidean')**2)
         for i in range(m)])

    const_term = 0.5 * m * np.log(N) * (d + 1)

    BIC = np.sum([n[i] * np.log(n[i]) -
                  n[i] * np.log(N) -
                  ((n[i] * d) / 2) * np.log(2 * np.pi * cl_var) -
                  ((n[i] - 1) * d / 2) for i in range(m)]) - const_term

    return BIC

# IRIS DATA
iris = sklearn.datasets.load_iris()
X = pd.DataFrame(iris.data[:, :4])  # extract only the features, as a DataFrame
# Xs = StandardScaler().fit_transform(X)
Y = iris.target

ks = range(1, 10)

# run k-means 9 times and save each result in the KMeans list
KMeans = [cluster.KMeans(n_clusters=i, init="k-means++").fit(X) for i in ks]

# now run the BIC computation for each clustering
BIC = [compute_bic(kmeansi, X) for kmeansi in KMeans]
print(BIC)
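As a minimal, hedged sketch of just the indexing difference (the toy arrays below are made up for illustration): np.where-style indexing selects rows of a plain ndarray, while a DataFrame needs .iloc with a boolean mask.

```python
import numpy as np
import pandas as pd

labels = np.array([0, 1, 0, 1])
X_arr = np.arange(8).reshape(4, 2)   # ndarray version of the data
X_df = pd.DataFrame(X_arr)           # DataFrame version of the same data

rows_arr = X_arr[np.where(labels == 0)]      # rows of cluster 0 from the array
rows_df = X_df.iloc[labels == 0].to_numpy()  # the same rows from the DataFrame

print(np.array_equal(rows_arr, rows_df))  # True
```

Both selections return the same rows, which is why the .iloc fix above preserves the original BIC computation.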
16,263
Using BIC to estimate the number of k in KMEANS
Yosher's modification is for the case when X is a pd.DataFrame; the initial solution is for np.arrays. There seem to be multiple formulas around. In case you need to compare the score to the built-in version of Consensus K-Means, I refer to the bic_kmeans function as found here: pyckmeans.
16,264
What is the definition of "best" as used in the term "best fit" and cross validation?
I think this is an excellent question. I am going to paraphrase it just to be sure I have got it right:

It would seem that there are lots of ways to choose the complexity penalty function $c$ and error penalty function $e$. Which choice is "best"? What should "best" even mean?

I think the answer (if there is one) will take you way beyond just cross-validation. I like how this question (and the topic in general) ties nicely to Occam's Razor and the general concept of parsimony that is fundamental to science. I am by no means an expert in this area but I find this question hugely interesting. The best text I know on these sorts of questions is Universal Artificial Intelligence by Marcus Hutter (don't ask me any questions about it though, I haven't read most of it). I went to a talk by Hutter a couple of years ago and was very impressed.

You are right in thinking that there is a minimum entropy argument in there somewhere (used for the complexity penalty function $c$ in some manner). Hutter advocates the use of Kolmogorov complexity instead of entropy. Also, Hutter's definition of "best" (as far as I remember) is (informally) the model that best predicts the future (i.e. best predicts the data that will be observed in the future). I can't remember how he formalises this notion.
16,265
What is the definition of "best" as used in the term "best fit" and cross validation?
I will offer a brief intuitive answer (at a fairly abstract level) until a better answer is offered by someone else:

First, note that complex functions/models achieve better fit (i.e., have lower residuals) as they exploit some local features (think noise) of the dataset that are not present globally (think systematic patterns).

Second, when performing cross-validation we split the data into two sets: the training set and the validation set. Thus, when we perform cross-validation, a complex model may not predict very well, because by definition a complex model will exploit the local features of the training set. However, the local features of the training set could be very different compared to the local features of the validation set, resulting in poor predictive performance. Therefore, we have a tendency to select the model that captures the global features of both the training and the validation datasets.

In summary, cross-validation protects against overfitting by selecting the model that captures the global patterns of the dataset and by avoiding models that exploit some local feature of a dataset.
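As an illustrative sketch of this idea (the data-generating process and polynomial degrees are invented for the example), 5-fold cross-validation will typically prefer a moderately complex model that captures the global sine pattern over a very flexible one that also fits the local noise:

```python
# Sketch: CV error of a simple vs. a very flexible polynomial fit to noisy data.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score, KFold

rng = np.random.default_rng(0)
x = rng.uniform(-3, 3, 80)[:, None]
y = np.sin(x).ravel() + rng.normal(0, 0.3, 80)  # global pattern + local noise

cv = KFold(n_splits=5, shuffle=True, random_state=0)
for degree in (3, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    mse = -cross_val_score(model, x, y, cv=cv,
                           scoring="neg_mean_squared_error").mean()
    print(degree, round(mse, 3))
```

The degree-15 fit exploits local features of each training split that do not generalise to the held-out split, so its cross-validated error is worse.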
16,266
What is the definition of "best" as used in the term "best fit" and cross validation?
In a general machine-learning view the answer is fairly simple: we want to build the model that will have the highest accuracy when predicting new data (unseen during training). Because we cannot test this directly (we don't have data from the future), we run a Monte Carlo simulation of such a test, and this is basically the idea underneath cross-validation. There may be some issues about what accuracy means (for instance a business client can state that overshoot costs 5€ per unit and undershoot 0.01€ per unit, so it is better to build a less accurate but more undershooting model), but in general the measures are fairly intuitive: the per cent of true answers in classification and the widely used explained variance in regression.
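That asymmetric-cost remark can be sketched as a custom loss function (the 5€/0.01€ figures are the hypothetical ones from the text):

```python
import numpy as np

# Hypothetical business loss from the answer: overshoot costs 5 per unit,
# undershoot costs 0.01 per unit, so the "best" model under this loss
# will prefer to undershoot.
def business_loss(y_true, y_pred, over_cost=5.0, under_cost=0.01):
    err = np.asarray(y_pred) - np.asarray(y_true)
    # positive err = overshoot, negative err = undershoot
    return float(np.where(err > 0, over_cost * err, -under_cost * err).sum())

y_true = [10.0, 10.0]
print(business_loss(y_true, [11.0, 11.0]))  # 2 units over  -> 10.0
print(business_loss(y_true, [9.0, 9.0]))    # 2 units under -> 0.02
```

Evaluating candidate models with such a loss inside cross-validation (instead of plain accuracy or MSE) directly encodes the client's definition of "best".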
16,267
What is the definition of "best" as used in the term "best fit" and cross validation?
Great discussion here, but I think of cross-validation in a different way from the answers thus far (mbq and I are on the same page I think). So, I'll put in my two cents at the risk of muddying the waters... Cross-validation is a statistical technique for assessing the variability and bias, due to sampling error, in a model's ability to fit and predict data. Thus, "best" would be the model which provides the lowest generalization error, which would be in units of variability and bias. Techniques such as Bayesian and Bootstrap Model Averaging can be used to update a model in an algorithmic way based upon results from the cross validation effort. This FAQ provides good information for more context of what informs my opinion.
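As a small, hedged sketch of estimating both the level and the variability of a model's performance (the dataset and model here are arbitrary stand-ins), repeated cross-validation reports a mean score together with its spread across resamples:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score

X, y = load_iris(return_X_y=True)
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=10, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=cv)

# Mean accuracy estimates the model's quality; the spread across resamples
# estimates the variability due to sampling error.
print(round(scores.mean(), 3), round(scores.std(), 3))
```

The "best" model in this view is the one whose estimated generalization error, accounting for that spread, is lowest.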
16,268
What is the definition of "best" as used in the term "best fit" and cross validation?
A lot of people have excellent answers, here is my $0.02. There are two ways to look at "best model", or "model selection", speaking statistically:

1. An explanation that is as simple as possible, but no simpler (attrib. Einstein)
- This is also called Occam's Razor, as explanation applies here
- Have a concept of a true model, or a model which approximates the truth
- Explanation is like doing scientific research

2. Prediction is the interest, similar to engineering development
- Prediction is the aim, and all that matters is that the model works
- Model choice should be based on the quality of predictions
- Cf: Ein-Dor, P. & Feldmesser, J. (1987) Attributes of the performance of central processing units: a relative performance prediction model. Communications of the ACM 30, 308–317.

A widespread (mis)conception: model choice is equivalent to choosing the best model.

For explanation, we ought to be alert to the possibility of there being several (roughly) equally good explanatory models. Simplicity helps both with communicating the concepts embodied in the model and in what psychologists call generalization, the ability to "work" in scenarios very different from those in which the model was studied. So there is a premium on few models.

For prediction, (Dr Ripley's) good analogy is that of choosing between expert opinions: if you have access to a large panel of experts, how would you use their opinions?

Cross-validation takes care of the prediction aspect. For details about CV please refer to this presentation by Dr. B. D. Ripley: Dr. Brian D. Ripley's presentation on model selection.

Citation: please note that everything in this answer is from the presentation cited above. I am a big fan of this presentation and I like it. Other opinions may vary. The title of the presentation is "Selecting Amongst Large Classes of Models", and it was given at the Symposium in Honour of John Nelder's 80th Birthday, Imperial College, 29/30 March 2004, by Dr. Brian D. Ripley.
16,269
What is the definition of "best" as used in the term "best fit" and cross validation?
The error function is the error of your model (function) on the training data. The complexity is some norm (e.g., squared l2 norm) of the function you are trying to learn. Minimizing the complexity term essentially favors smooth functions, which do well not just on the training data but also on the test data. If you represent your function by a set of coefficients (say, if you are doing linear regression), penalizing the complexity by the squared norm would lead to small coefficient values in your function (penalizing other norms leads to different notions of complexity control).
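A quick sketch of that shrinkage effect on synthetic data (ridge regression, which penalises the squared l2 norm, versus plain least squares; the data are made up for illustration):

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(0.0, 1.0, 50)

ols = LinearRegression().fit(X, y)   # no complexity penalty
ridge = Ridge(alpha=10.0).fit(X, y)  # penalises ||beta||_2^2

# The penalised coefficients have a smaller norm, i.e. a "smoother" function.
print(np.linalg.norm(ols.coef_) > np.linalg.norm(ridge.coef_))  # True
```

Swapping the squared l2 norm for an l1 norm (lasso) gives the different notion of complexity control mentioned at the end of the answer.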
16,270
What is the definition of "best" as used in the term "best fit" and cross validation?
From an optimization point of view, the problem (with $(p,q)\geq 1,\;\lambda>0$)

$(1)\;\underset{\beta|\lambda,x,y}{Arg\min.}||y-m(x,\beta)||_p+\lambda||\beta||_q$

is equivalent to

$(2)\;\underset{\beta|\lambda,x,y}{Arg\min.}||y-m(x,\beta)||_p$ $s.t.$ $||\beta||_q\leq\lambda$

which simply incorporates into the objective function the prior information that $||\beta||_q\leq\lambda$. If this prior turns out to be true, then it can be shown (for $q=1,2$) that incorporating it into the objective function minimizes the risk associated with $\hat{\beta}$ (i.e., very informally, improves the accuracy of $\hat{\beta}$).

$\lambda$ is a so-called meta-parameter (or latent parameter) that is not being optimized over (in which case the solution would trivially reduce to $\lambda=\infty$), but rather reflects information not contained in the sample $(x,y)$ used to solve $(1)-(2)$ (for example other studies or an expert's opinion). Cross-validation is an attempt at constructing a data-induced prior (i.e., slicing the dataset so that part of it is used to infer reasonable values of $\lambda$ and part of it is used to estimate $\hat{\beta}|\lambda$).

As to your subquestion (why $e()=||y-m(x,\beta)||_p$): this is because for $p=1$ ($p=2$) this measure of distance between the model and the observations has easily derivable asymptotic properties (strong convergence to meaningful population counterparts of $m()$).
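As a hedged sketch of the cross-validation point with $q=1$ (scikit-learn's LassoCV, where $\lambda$ is called alpha; the synthetic data are made up for illustration): part of the data is sliced off to infer a reasonable $\lambda$, and $\hat{\beta}|\lambda$ is then estimated.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import LassoCV

# Synthetic data: 20 features, only 5 of them informative.
X, y = make_regression(n_samples=200, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

model = LassoCV(cv=5, random_state=0).fit(X, y)  # lambda chosen by cross-validation
print(model.alpha_ > 0)               # the data-induced choice of lambda
print(int(np.sum(model.coef_ != 0)))  # the l1 constraint zeroes out coefficients
```

Setting $\lambda$ this way avoids the trivial in-sample optimum $\lambda=\infty$ (for the constrained form) that would result from optimizing over $\lambda$ on the full sample.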
16,271
Linear regression with "hour of the day"
Dummy encoding would destroy any proximity measure (and ordering) among hours. For example, the distance between 1 PM and 9 PM would be the same as the distance between 1 PM and 1 AM, and it would be harder to express something like "around 1 PM". Even leaving the hours as they are, e.g. as numbers in 0-23, would be a better approach than dummy encoding in my opinion. But this way has a catch as well: 00:01 and 23:59 would be seen as very distant, though actually they are not. To remedy this, your second listed approach, i.e. cyclic variables, is used. Cyclic variables map hours onto a circle (like a 24-h mechanical clock) so that the ML algorithm can see the neighbours of individual hours.
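A small sketch of the cyclic mapping (the hour values are arbitrary): after the sine/cosine transform, 23:00 and 00:00 end up close together, unlike on the raw 0-23 scale.

```python
import numpy as np

hours = np.array([0, 6, 12, 23])
hour_sin = np.sin(2 * np.pi * hours / 24.0)
hour_cos = np.cos(2 * np.pi * hours / 24.0)

# Distance between hour 23 and hour 0 on the circle vs. the raw scale:
on_circle = np.hypot(hour_sin[3] - hour_sin[0], hour_cos[3] - hour_cos[0])
print(round(float(on_circle), 3))  # 0.261, while |23 - 0| = 23 on the raw scale
```

Feeding both the sine and the cosine column to the regression preserves the neighbourhood structure of the clock face.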
16,272
Linear regression with "hour of the day"
+1 to gunes' answer. Dummy coding will indeed disregard the distance between time points: the responses between two time points 1 hour apart will be more alike than between two time points 3 hours apart, and dummy coding completely discards this piece of information. Dummy encoding fits a step-like time dependency: the response is flat for one hour, and then it suddenly jumps (and the jump is unconstrained except for what the data tells us, a consequence of the lack of proximity modeled). Both aspects are ecologically extremely doubtful.

Here is an additional aspect. If you bucketize your day into 24 hours, then you need to fit 23 parameters in addition to the intercept. This is a lot, and you will need a huge amount of data to reliably fit this without running afoul of the bias-variance tradeoff.

An alternative would be to use a Fourier-type model with harmonics. For instance, assume your observation timestamp $t$ corresponds to a time of day $\tau(t)$ (so when going from $t$ to $\tau(t)$, we simply drop the day, month and year information from $t$). Then you can transform the time impact into sines and cosines:

$$ \sin\big(2\pi k\frac{\tau(t)}{24}\big), \quad\cos\big(2\pi k\frac{\tau(t)}{24}\big). $$

A simple model would go up to $k=3$:

$$ y_t = \beta_0+\sum_{k=1}^3 \beta_k\sin\big(2\pi k\frac{\tau(t)}{24}\big) + \sum_{k=1}^3\gamma_k\cos\big(2\pi k\frac{\tau(t)}{24}\big) + \text{other covariates}+\epsilon_t. $$

This already gives you a lot of flexibility at the cost of fitting only 6 parameters, so your model will be far more stable. Also, you will get neither the constant response within an hour, nor the abrupt steps when a new hour starts. Here are some random examples of time courses this can fit.

Of course, regardless of what choice you make, you should think about including any additional pieces of information you know (e.g., if all theaters and cinemas start or finish their shows at the same point in time, then mark this with a dummy, because then you will get a sharp step change, at least in the relevant districts). Also, the time response will certainly differ between weekdays and weekends, and likely also between Fridays and other weekdays, so include interactions between your time model and the day of week. Or look into models for multiple seasonalities to address this.

R code for my plots:

par(mai=c(.8,.1,.1,.1))
plot(c(0,24), c(0,1), yaxt="n", xlab="Hour", ylab="")
lines(c(0,rep(1:23,each=2),24), rep(runif(24),each=2))

tau <- seq(0, 24, by=.001)
mm <- cbind(1,
            sin(2*pi*1*tau/24), sin(2*pi*2*tau/24), sin(2*pi*3*tau/24),
            cos(2*pi*1*tau/24), cos(2*pi*2*tau/24), cos(2*pi*3*tau/24))
par(mai=c(.8,.1,.1,.1), mfrow=c(3,2))
for ( ii in 1:6 ) plot(tau, (mm%*%runif(7,-1,1))[,1], yaxt="n", xlab="Hour", ylab="", type="l")
Linear regression with "hour of the day"
For a time series regression, simply adding hourly dummies $D_h, h = 0,\cdots, 23$, is the natural thing to do in most cases, i.e. fit the model $$ y_t = \beta_0 D_0 + \cdots + \beta_{23}D_{23} + \mbox{ other covariates } + \epsilon_t. $$ As a modeler, you're simply saying that the dependent variable $y_t$ has an hourly-dependent average $\beta_h$ at hour $h$, plus the effect from other covariates. Any hourly (additive) seasonality in the data would be picked up by this regression. (Alternatively, seasonality can be modeled multiplicatively by, say, a SARMAX-type model.) Transforming the data by some arbitrary periodic function (sin/cos/etc.) is not really appropriate. For example, say you fit the model $$ y_t = \sum_{h=0}^{23} \beta_{h}\sin(2 \pi \frac{h(t)}{24}) + \mbox{ other covariates } + \epsilon_t, $$ where $h(t) = 12$ if observation $y_t$ is sampled at the 12th hour of the day (for example). Then you're imposing a peak at hour $h = 6$ (or wherever, by shifting the sine function) on the data, arbitrarily.
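As a quick illustration of what the saturated dummy model does (a hypothetical Python sketch, not part of the answer above): with the full set of hourly dummies and no other covariates, the OLS estimate of each $\beta_h$ reduces to the average response at hour $h$:

```python
from collections import defaultdict

# toy (hour, y) observations; with a full set of hourly dummies and no intercept,
# the least-squares coefficient beta_h is simply the mean response at hour h
data = [(9, 10.0), (9, 12.0), (17, 30.0), (17, 34.0)]

sums, counts = defaultdict(float), defaultdict(int)
for h, y in data:
    sums[h] += y
    counts[h] += 1

beta = {h: sums[h] / counts[h] for h in sums}
print(beta)  # {9: 11.0, 17: 32.0}
```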
Probability of winning a competition K games best of series of N games
You can use the negative binomial distribution for this problem. If X is distributed as NegBin(n, w), then X is the number of games the player loses before winning n of them, if the probability of winning any given game is w. So, dnbinom(x = 2, size = 2, prob = w) is the probability that the player loses a total of 2 games before winning 2. Then, pnbinom(q = 2, size = 3, prob = w) is the probability that the player loses 2 or fewer before they win 3 games. This is equal to the probability of winning a 3 out of 5 series. In general, the probability of winning a best n-out-of-(2n-1) series can be calculated with pnbinom(q = n-1, size = n, prob = w).
## w is the probability of winning any individual game
## k is the number of wins needed to win the series (3 in a best 3 of 5 series)
win <- function(w, k){
  return (pnbinom(q = k - 1, size = k, prob = w))
}
win(0.9, 3)
## 0.99144
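The same result can be cross-checked without R's pnbinom by summing the negative binomial pmf directly (an illustrative Python sketch; the function name is mine):

```python
from math import comb

def win_series(w, k):
    """P(at most k-1 losses before the k-th win) = P(winning a best-of-(2k-1) series).
    NegBin pmf: P(j losses before the k-th win) = C(k+j-1, j) * w^k * (1-w)^j."""
    return sum(comb(k + j - 1, j) * w**k * (1 - w)**j for j in range(k))

print(round(win_series(0.9, 3), 5))  # 0.99144
```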
Probability of winning a competition K games best of series of N games
This is interesting. Let's demonstrate using n = 3 where it takes 2 wins to be the winner. We can first determine which combinations are available:
n = 3L
lst = replicate(n, 0:1, simplify = FALSE)
combos = do.call(expand.grid, lst)
combos
  Var1 Var2 Var3
1    0    0    0
2    1    0    0
3    0    1    0
4    1    1    0
5    0    0    1
6    1    0    1
7    0    1    1
8    1    1    1
As you noted, some of these combinations are not possible. We are specifically interested in the rows where rowSums() equals 2. Using this we can figure out which combinations are actually possible.
possible_combos = combos[rowSums(combos) == ceiling(n / 2), ]
possible_combos
  Var1 Var2 Var3
4    1    1    0
6    1    0    1
7    0    1    1
Our last step is to calculate the contribution from each possible combination. We know that w^2 will be in each calculation. The l part is more complicated. In our first possible combination, the l contributes nothing. We can use max.col(..., ties.method = "last") to figure out how many ls occurred:
losses = max.col(possible_combos, ties.method = "last") - ceiling(n / 2)
losses
# [1] 0 1 1
w = 0.9
l = 1 - w
wins_p = w ^ ceiling(n / 2)
losses_p = ifelse(losses == 0L, 1, l ^ losses)
p = sum(wins_p * losses_p)
p
# [1] 0.972
To generalize this, we can wrap it in a function of n and w:
right = function(n, w) {
  lst = replicate(n, 0:1, simplify = FALSE)
  combos = do.call(expand.grid, lst)
  possible_combos = combos[rowSums(combos) == ceiling(n / 2), ]
  losses = max.col(possible_combos, ties.method = "last") - ceiling(n / 2)
  l = 1 - w
  wins_p = w ^ ceiling(n / 2)
  losses_p = ifelse(losses == 0L, 1, l ^ losses)
  sum(wins_p * losses_p)
}
right(5, 0.9)
# [1] 0.99144
right(5, 0.9) + right(5, 0.1)
# [1] 1
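The same enumeration idea can be brute-forced in a few lines (an illustrative Python version, not the R code above): enumerate every full-length sequence of the maximal 2k-1 games and sum the probabilities of those with at least k wins — the early-stopping cases are absorbed, because the games played after the series is decided cannot change the winner:

```python
from itertools import product

def win_prob(n_games, k_wins, w):
    """Enumerate all 2^n full-length win/loss sequences; the series is won
    whenever at least k_wins of the games are wins."""
    total = 0.0
    for seq in product((0, 1), repeat=n_games):
        wins = sum(seq)
        if wins >= k_wins:
            total += w**wins * (1 - w)**(n_games - wins)
    return total

print(round(win_prob(5, 3, 0.9), 5))  # 0.99144
print(round(win_prob(3, 2, 0.9), 5))  # 0.972
```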
Probability of winning a competition K games best of series of N games
Actually you are almost there, but you should be aware of the cases where four or five games are needed to decide the series.
win <- function(w) dbinom(3,3,w) + w*dbinom(2,3,w) + w*dbinom(2,4,w)
or a compact solution
win <- function(w) w*sum(mapply(dbinom,2,2:4,w))
such that
> win(0.9)
[1] 0.99144
Explanation: The number of games needed to win the series ends with the last win:
If 3 games are needed: the first two should both be wins, such that the prob. is w**3 (equivalently dbinom(3,3,w) or dbinom(2,2,w)*w)
If 4 games are needed: among the previous three games there should be 2 wins and 1 loss, such that the prob. is choose(3,2)*w**2*(1-w)*w (equivalently dbinom(2,3,w)*w)
If 5 games are needed: among the previous four games there should be 2 wins and 2 losses, such that the prob. is choose(4,2)*w**2*(1-w)**2*w (equivalently dbinom(2,4,w)*w)
Update: Generalization of win
With respect to any N (either even or odd), a generalized function win can be defined like below
win <- function(w, N) w*sum(mapply(dbinom,ceiling((N-1)/2),ceiling((N-1)/2):(N-1),w))
However, the above is not as efficient for large N. Instead, the negative binomial distribution method mentioned by @RyanFrost is preferred for cases with large N, i.e.,
win <- function(w, N) pnbinom(floor((N-1)/2), ceiling((N+1)/2), w)
Example
> win(0.9,5)  # needs 3 wins out of 5 games
[1] 0.99144
> win(0.9,6)  # needs 4 wins out of 6 games
[1] 0.98415
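The case-by-case decomposition above ("the series always ends on a win") translates directly into a short closed form; here is an illustrative Python version of the generalized win (variable names are mine):

```python
from math import comb

def win(w, N):
    """Sum over the possible series length m: the clinching k-th win comes in game m,
    so the first m-1 games contain exactly k-1 wins."""
    k = N // 2 + 1                       # wins needed, e.g. N=5 -> 3, N=6 -> 4
    return sum(comb(m - 1, k - 1) * w**k * (1 - w)**(m - k) for m in range(k, N + 1))

print(round(win(0.9, 5), 5))  # 0.99144
print(round(win(0.9, 6), 5))  # 0.98415
```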
Probability of winning a competition K games best of series of N games
It is a binomial distribution:
dbinom(3,5,0.9) + dbinom(4,5,0.9) + dbinom(5,5,0.9) = 0.99144
The reason goes like this: you just need to think about the total sample space. Even if there are already 3 wins after 3 or 4 games, let's imagine the series goes on, and we can write the possible events to include as:
w.w.w : w.w.w.w.l, w.w.w.l.l, w.w.w.l.w, w.w.w.w.w
w.w.l.w : w.w.l.w.l, w.w.l.w.w
l.w.w.w : l.w.w.w.l, l.w.w.w.w
And so on. Those sequences containing exactly 3 wins, for example (w.w.l.w.l), will be part of 5 choose 3 in a binomial. Out of the total space of W/L sequences over 5 games, what we need are those with 3 wins, 4 wins and 5 wins to include all the events that can result in the team winning (even though in real life, the 4th or 5th match may not occur).
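The tail sum dbinom(3,5,w) + dbinom(4,5,w) + dbinom(5,5,w) can be written generically; a small cross-check (a Python sketch of the same "at least 3 of 5" binomial tail):

```python
from math import comb

def at_least(k, n, w):
    """P(at least k wins in n independent games) - the binomial tail used above."""
    return sum(comb(n, i) * w**i * (1 - w)**(n - i) for i in range(k, n + 1))

print(round(at_least(3, 5, 0.9), 5))  # 0.99144
```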
Probability of winning a competition K games best of series of N games
Here is a function win that computes the probability of winning k out of n games when the probability of winning each game is w. It implements the idea of this Math.SE post.
win <- function(w, n = 5, k = 3){
  loose <- function(w, n, k){
    l <- 1 - w
    m <- n - k
    s <- seq_len(n - 1)[-seq_len(m - 1)]
    ch <- sapply(s, function(x) choose(x, m))
    w <- w^(seq_along(s) - 1)
    l^m * sum(w*ch)
  }
  if(k >= ceiling(n / 2)){
    1 - loose(w, n, k)*(1 - w)
  }else{
    stop("a minority cannot win.")
  }
}
win(0.9)
#[1] 0.99144
win(0.1)
#[1] 0.00856
win(0.9) + win(0.1)
#[1] 1
Probability of winning a competition K games best of series of N games
What you're asking is a simple math problem. I'll try to explain the math step by step, then we'll write our code with respect to that.
Math
Combination
In mathematics, a combination is a selection of items from a collection, defined as C(n, r) = n! / (r! (n - r)!), where ! is the factorial operator. For example, let's assume we have four rounds and you want to select three wins (or one loss). We have C(4, 3) = 4 combinations, which are: w-w-w-l, w-w-l-w, w-l-w-w, l-w-w-w. But as you mentioned, w-w-w-l is not acceptable because the game is done when 3 wins are accomplished! So take a good look at this point: if our series takes more than 3 rounds (best of 5), the last one must be a win if the winning probability calculation is desired! So to correct my calculations I should first fix the last round as a win and select 2 other winning spots from the remaining rounds, i.e. C(1, 1) * C(3, 2) = 3 combinations. And if we denote the winning probability by w and the losing probability by l, the probability of each such sequence is w^3 * l.
Javascript
Now that you know the concept, let's begin coding! First I need a function to calculate the factorial of n (n!).
let f = (n)=> {
  let o = 1;
  for(let i=1; i<=n; i++) {
    o *= i;
  }
  return o;
}
To understand the procedure better, I'll write my code step by step. So the next step is to define the combination function.
let c = (n,r)=> {
  return f(n)/(f(r) * f(n-r));
}
And now it's time to calculate the probability of r wins in a p round game with win probability of w.
let _w = (p,r,w)=> {
  let o = 1;
  // Selection of win positions (last round fixed as a win)
  o *= c(1,1) * c(p-1,r-1);
  // Calculation of the probability of each sequence
  o *= Math.pow(w, r) * Math.pow(1-w, p-r);
  return o;
}
Now we are ready to make the BO (Best Of) function with N rounds and K wins.
let BO = (N, K, w)=> {
  // P is what we wish to find!
  let P = 0;
  for (let j=K; j<=N; j++) {
    P += _w(j, K, w);
  }
  return P;
}
And some examples:
console.log(BO(5,3,0.9)); // 0.9914400000000001
console.log(BO(7,4,0.9)); // 0.997272
console.log(BO(9,5,0.9)); // 0.99910908
Strategies for time series forecasting for 2000 different products?
A follow up to @StephanKolassa 's answer: I concur with Stephan that ets() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might also want to use Facebook's Prophet package (auto.arima() is easy to use, but two years of weekly data is borderline too little for an ARIMA model in my experience). Personally I have found Prophet to be easier to use when you have promotions and holiday event data available; otherwise ets() might work better. Your real challenge is more of a coding challenge: how to efficiently iterate your forecasting algorithm over a large number of time series. You can check this response for more details on how to automate forecast generation.
In demand forecasting, some form of hierarchical forecasting is frequently performed: you have 2000 products and you need a separate forecast for each product, but there are similarities between products that might help with the forecasting. You want to find some way of grouping the products together along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, look at trying the top-down hierarchical approach.
Something a little bit more farfetched, but I would like to call it out: Amazon and Uber use neural networks for this type of problem, where instead of having a separate forecast for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city as opposed to products), they are just using a large model (an LSTM deep learning model) to do it all at once. The idea is similar in spirit to hierarchical forecasting in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition Github repositories), however it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source and you have to use their paid Amazon Forecast service to do the forecasts.
With regards to your second comment: you need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained: if suddenly an item is popular and your customers want 200 units, it doesn't matter that you have only 50 units on hand, your demand is still going to be 200 units. In practice it is very difficult to observe demand directly, so we use sales as a proxy for demand. This has a problem because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address this, along with the historical sales data, information about inventory levels and stock-outs is either directly included in a model or used to preprocess the time series prior to generating a model for forecasting. Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system which adds the constraints you mention (i.e. demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products, etc.) - however this falls under the general rubric of planning and inventory management, not forecasting per se.
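On the "coding challenge" of iterating over 2000 series: the pattern is just a loop (or map) over a keyed collection of series. Here is a hypothetical Python sketch where a deliberately simple seasonal-naive rule stands in for a real engine such as ets() or Prophet — the function names and data layout are my own:

```python
def seasonal_naive(history, horizon, season=52):
    """Forecast each future week with the value observed one season earlier -
    a simple stand-in for a proper model like ETS."""
    return [history[-season + (h % season)] for h in range(horizon)]

def forecast_all(products, horizon=13):
    """Iterate the forecaster over a dict {product_id: weekly sales list}."""
    return {pid: seasonal_naive(series, horizon) for pid, series in products.items()}

# toy example: two products with two years (104 weeks) of history
products = {"A": list(range(104)), "B": [5.0] * 104}
forecasts = forecast_all(products)
print(forecasts["A"][:3])  # [52, 53, 54]
```

In practice you would swap `seasonal_naive` for a call to your fitted model and parallelize the loop if runtime becomes an issue.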
Strategies for time series forecasting for 2000 different products?
A follow up to @StephanKolassa 's answer: I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might want a
Strategies for time series forecasting for 2000 different products? A follow up to @StephanKolassa 's answer: I concur with Stephan that ETS() from the forecast package in R is probably your best and fastest choice. If ETS doesn't give good results, you might want also want to use Facebook's Prophet package (Auto.arima is easy to use, but two years of weekly data is bordering not enough data for an ARIMA model in my experience). Personally I have found Prophet to be easier to use when you have promotions and holiday event data available, otherwise ETS() might work better. Your real challenge is more of a coding challenge of how to efficiently iterate your forecasting algorithm over a large number of time series. You can check this response for more details on how to automate forecast generation. In demand forecasting, some form of hierarchical forecasting is frequently performed, i.e you have 2000 products and you need a separate forecast for each separate product, but there are similarities between products that might help with the forecasting. You want to find some way of grouping the product together along a product hierarchy and then use hierarchical forecasting to improve accuracy. Since you are looking for forecasts at the individual product level, look at trying the top-down hierarchical approach. Something a little bit more farfetched, but I would like call it out: Amazon and Uber use neural networks for this type of problem, where instead of having a separate forecast for each product/time series, they use one gigantic recurrent neural network to forecast all the time series in bulk. Note that they still end up with individual forecasts for each product (in Uber's case it is traffic/demand per city as opposed to products), they are just using a large model (an LSTM deep learning model) to do it all at once. 
The idea is similar in spirit to hierarchical forecasting in the sense that the neural network learns from the similarities between the histories of different products to come up with better forecasts. The Uber team has made some of their code available (through the M4 competition Github repositories), however it is C++ code (not exactly the favorite language of the stats crowd). Amazon's approach is not open source and you have to use their paid Amazon Forecast service to do the forecasts. With regards to your second comment: You need to differentiate between forecasting sales and forecasting demand. Demand is unconstrained, if suddenly an item is popular and your customers want 200 units, it doesn't matter that you have only 50 units on hand, your demand is still going to be 200 units. In practice it is very difficult to observe demand directly, so we use sales as proxy for demand. This has a problem because it doesn't account for situations where a customer wanted to purchase a product but it was unavailable. To address it, along with the historical sales data, information about inventory levels and stock outs is either directly included in a model or used to preprocess the time series prior to generating a model for forecasting. Typically an unconstrained forecast is generated first by a forecast engine and then passed on to a planning system which then adds the constrains you mention (i.e demand is 500 units but only 300 units are available) along with other constraints (safety stock, presentation stock, budgetary constraints, plans for promotions or introductions of new products etc...) - however this falls under the general rubric of planning and inventory management, not forecasting per se.
16,281
Strategies for time series forecasting for 2000 different products?
We will only be able to give you very general advice. Are there any strong drivers, like promotions or calendar events, or seasonality, trends or lifecycles? If so, include them in your models. For instance, you could regress sales on promotions, then potentially model the residuals (using exponential smoothing or ARIMA). There are software packages that do a reasonably good job at fitting multiple time series models to a series. You can then simply iterate over your 2000 series, which should not take much more runtime than a cup of coffee. I particularly recommend the ets() function in the forecast package in R. (Less so the auto.arima() function for weekly data.) At least skim a forecasting textbook, e.g., this one. It uses the forecast package I recommend above. What is your final objective? Do you want an unbiased forecast? Then assess point forecasts using the MSE. Will your bonus depend on the MAPE? Then this list of the problems of the MAPE may be helpful. Do you need forecasts to set safety amounts? Then you need quantile forecasts, not mean predictions. (The functions in the forecast package can give you those.) If you have more specific questions, do post them at CV.
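One concrete problem of the MAPE mentioned above can be shown numerically (a Python sketch; the thread's tooling is R's forecast package): because the error is divided by the actual, an over-forecast can exceed 100% error without bound, while an under-forecast is capped at 100%.

```python
import numpy as np

def mape(actual, forecast):
    actual = np.asarray(actual, dtype=float)
    forecast = np.asarray(forecast, dtype=float)
    # Mean absolute percentage error, in percent
    return float(np.mean(np.abs(actual - forecast) / np.abs(actual)) * 100)

# Over-forecasting can produce arbitrarily large MAPE...
print(mape([10.0], [25.0]))   # 150.0
# ...while the worst possible under-forecast (zero) gives at most 100%
print(mape([10.0], [0.0]))    # 100.0
```

This asymmetry means that optimizing the MAPE systematically rewards under-forecasting, which is one reason to prefer the MSE for unbiased point forecasts or quantile forecasts for safety amounts, as suggested above.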
16,282
Strategies for time series forecasting for 2000 different products?
Segmenting based on the variance of the original series makes no sense to me, as the best model should be invariant to scale. Consider a series: model it, and then multiply each value in the time series by 1000. In terms of mass-producing equations that may have deterministic structure (pulses, level shifts, local time trends) and/or stochastic structure (auto-regressive seasonality and ARIMA components), you have to run a computer-based script. Beware of simple auto-ARIMA solutions that assume no deterministic structure, or that make fixed assumptions about it.
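The scale-invariance point is easy to verify numerically. A Python sketch with a made-up weekly series: rescaling multiplies the variance by the square of the scale factor, while scale-free structure such as the lag-1 autocorrelation (the kind of thing a model identification step actually uses) is unchanged.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.normal(50, 5, size=104)   # a hypothetical weekly series
y_scaled = y * 1000               # the same series in different units

# The variance changes by the square of the scale factor...
ratio = np.var(y_scaled) / np.var(y)
print(ratio)                      # ~1e6

# ...but the lag-1 autocorrelation is unchanged:
def lag1_acf(x):
    x = x - x.mean()
    return float(np.dot(x[:-1], x[1:]) / np.dot(x, x))

print(lag1_acf(y), lag1_acf(y_scaled))
```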
16,283
Understanding QR Decomposition
The idea of the QR decomposition as a procedure to get OLS estimates is already explained in the post linked by @MatthewDrury. The source code of the function qr is written in Fortran and may be hard to follow. Here I show a minimal implementation that reproduces the main results for a model fitted by OLS. Hopefully the steps are easier to follow. Recap: the QR procedure decomposes the matrix of regressor variables $X$ into an orthonormal matrix $Q$ and a non-singular upper-triangular matrix $R$. Substituting $X = QR$ in the normal equations $X'X\hat\beta = X'y$ yields: $$ R'Q'QR\hat\beta = R'Q'y \,. $$ Premultiplying by $(R')^{-1}$ and using the fact that $Q'Q$ is the identity matrix gives: $$ R\hat\beta = Q'y \,. \tag 1 $$ The point of this result is that, since $R$ is an upper-triangular matrix, this equation is easy to solve for $\hat\beta$ by backward substitution. Now, how do we get the matrices $Q$ and $R$? We can use Householder transformations, Givens rotations or the Gram-Schmidt procedure. Below I use Householder transformations. See details for example here. The code below is based on the Pascal code described in the book Pollock (1999), Chapters 7 and 8. The matrix of regressors is used to store the matrix $R$ of the QR decomposition. The dependent variable $y$ is overwritten with the result of $Q'y$ (the right-hand side of equation (1) above). Notice also that in the last step the residual sum of squares can be obtained from this vector.
QR.regression <- function(y, X) {
  nr <- length(y)
  nc <- NCOL(X)
  # Householder transformations
  for (j in seq_len(nc)) {
    id <- seq.int(j, nr)
    sigma <- sum(X[id,j]^2)
    s <- sqrt(sigma)
    diag_ej <- X[j,j]
    gamma <- 1.0 / (sigma + abs(s * diag_ej))
    kappa <- if (diag_ej < 0) s else -s
    X[j,j] <- X[j,j] - kappa
    if (j < nc)
      for (k in seq.int(j+1, nc)) {
        yPrime <- sum(X[id,j] * X[id,k]) * gamma
        X[id,k] <- X[id,k] - X[id,j] * yPrime
      }
    yPrime <- sum(X[id,j] * y[id]) * gamma
    y[id] <- y[id] - X[id,j] * yPrime
    X[j,j] <- kappa
  } # end Householder
  # residual sum of squares
  rss <- sum(y[seq.int(nc+1, nr)]^2)
  # Backsolve
  beta <- rep(NA, nc)
  for (j in seq.int(nc, 1)) {
    beta[j] <- y[j]
    if (j < nc)
      for (i in seq.int(j+1, nc))
        beta[j] <- beta[j] - X[j,i] * beta[i]
    beta[j] <- beta[j] / X[j,j]
  }
  # set zeros in the lower triangular side of X (which stores the Householder vectors);
  # not really necessary, this is just to return R for illustration
  for (i in seq_len(ncol(X)))
    X[seq.int(i+1, nr),i] <- 0
  list(R=X[1:nc,1:nc], y=y, beta=beta, rss=rss)
}
We can check that the same estimates as those from lm are obtained.
# benchmark results
fit <- lm(expression_data ~ 0+design)
# OLS by QR decomposition
y <- expression_data
X <- design
res <- QR.regression(y, X)
res$beta
# [1] 1.43235881 0.56139421 0.07744044 -0.15611038 -0.15021796
all.equal(res$beta, coef(fit), check.attributes=FALSE)
# [1] TRUE
all.equal(res$rss, sum(residuals(fit)^2))
# [1] TRUE
We can also get the matrix $Q$ and check that it is orthogonal:
Q <- X %*% solve(res$R)
round(crossprod(Q), 3)
#   1 2 3 4 5
# 1 1 0 0 0 0
# 2 0 1 0 0 0
# 3 0 0 1 0 0
# 4 0 0 0 1 0
# 5 0 0 0 0 1
The residuals can be obtained as y - X %*% res$beta. References D.S.G. Pollock (1999) A handbook of time series analysis, signal processing and dynamics, Academic Press.
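For readers working outside R, the same computation can be cross-checked with NumPy's built-in QR routine. A sketch with simulated data (the original expression_data and design matrices are not shown in the thread, so the data and true coefficients below are made up):

```python
import numpy as np

rng = np.random.default_rng(42)
n, p = 100, 5
X = rng.normal(size=(n, p))                         # stand-in for `design`
beta_true = np.array([1.4, 0.6, 0.1, -0.2, -0.15])  # hypothetical coefficients
y = X @ beta_true + rng.normal(scale=0.1, size=n)   # stand-in for `expression_data`

# Thin QR: Q is n x p with orthonormal columns, R is p x p upper triangular
Q, R = np.linalg.qr(X)

# Solve R beta = Q'y (equation (1) above); R is triangular so this is a backsolve
beta_qr = np.linalg.solve(R, Q.T @ y)

# Cross-check against the least-squares solver
beta_ls, *_ = np.linalg.lstsq(X, y, rcond=None)
print(np.allclose(beta_qr, beta_ls))       # True
print(np.allclose(Q.T @ Q, np.eye(p)))     # True: Q'Q is the identity
```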
16,284
Biased bootstrap: is it okay to center the CI around the observed statistic?
In the setup given by the OP the parameter of interest is the Shannon entropy $$\theta(\mathbf{p}) = - \sum_{i = 1}^{50} p_i \log p_i,$$ which is a function of the probability vector $\mathbf{p} \in \mathbb{R}^{50}$. The estimator based on $n$ samples ($n = 100$ in the simulation) is the plug-in estimator $$\hat{\theta}_n = \theta(\hat{\mathbf{p}}_n) = - \sum_{i=1}^{50} \hat{p}_{n,i} \log \hat{p}_{n,i}.$$ The samples were generated using the uniform distribution for which the Shannon entropy is $\log(50) = 3.912.$ Since the Shannon entropy is maximized in the uniform distribution, the plug-in estimator must be downward biased. A simulation shows that $\mathrm{bias}(\hat{\theta}_{100}) \simeq -0.28$ whereas $\mathrm{bias}(\hat{\theta}_{500}) \simeq -0.05$. The plug-in estimator is consistent, but the $\Delta$-method does not apply for $\mathbf{p}$ being the uniform distribution, because the derivative of the Shannon entropy is 0. Thus for this particular choice of $\mathbf{p}$, confidence intervals based on asymptotic arguments are not obvious. The percentile interval is based on the distribution of $\theta(\mathbf{p}_n^*)$ where $\mathbf{p}_n^*$ is the estimator obtained from sampling $n$ observations from $\hat{\mathbf{p}}_n$. Specifically, it is the interval from the 2.5% quantile to the 97.5% quantile for the distribution of $\theta(\mathbf{p}_n^*)$. As the OP's bootstrap simulation shows, $\theta(\mathbf{p}_n^*)$ is clearly also downward biased as an estimator of $\theta(\hat{\mathbf{p}}_n)$, which results in the percentile interval being completely wrong. For the basic (and normal) interval, the roles of the quantiles are interchanged. This implies that the interval does seem to be reasonable (it covers 3.912), though intervals extending beyond 3.912 are not logically meaningful. Moreover, I don't know if the basic interval will have the correct coverage. 
Its justification is based on the following approximate distributional identity: $$\theta(\mathbf{p}_n^*) - \theta(\hat{\mathbf{p}}_n) \overset{\mathcal{D}}{\simeq} \theta(\hat{\mathbf{p}}_n) - \theta(\mathbf{p}),$$ which might be questionable for (relatively) small $n$ like $n = 100$. The OP's last suggestion of a standard-error-based interval $\theta(\hat{\mathbf{p}}_n) \pm 1.96\hat{\mathrm{se}}_n$ will not work either, because of the large bias. It might work for a bias-corrected estimator, but then you first of all need correct standard errors for the bias-corrected estimator. I would consider a likelihood interval based on the profile log-likelihood for $\theta(\mathbf{p})$. I'm afraid that I don't know any simple way to compute the profile log-likelihood for this example, except that you need to maximize the log-likelihood over $\mathbf{p}$ for different fixed values of $\theta(\mathbf{p})$.
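The downward bias of the plug-in estimator is easy to reproduce. A Python sketch redoing the simulation described above (uniform distribution over $M = 50$ categories, $n = 100$ samples):

```python
import numpy as np

rng = np.random.default_rng(0)
M, n, n_sim = 50, 100, 2000
true_entropy = np.log(M)          # uniform distribution: log(50) ~ 3.912

def plugin_entropy(counts):
    # Plug-in Shannon entropy from category counts (0*log 0 treated as 0)
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

estimates = []
for _ in range(n_sim):
    counts = np.bincount(rng.integers(0, M, size=n), minlength=M)
    estimates.append(plugin_entropy(counts))

bias = np.mean(estimates) - true_entropy
print(round(bias, 3))             # negative: the plug-in estimate is biased down
```

With these settings the Monte Carlo bias comes out close to the $-0.28$ reported above; note that every single estimate is below $\log 50$, since the uniform distribution maximizes the entropy.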
16,285
Biased bootstrap: is it okay to center the CI around the observed statistic?
As the answer by @NRH points out, the problem is not that the bootstrapping gave a biased result. It's that the simple "plug in" estimate of the Shannon entropy, based on data from a sample, is biased downward from the true population value. This problem was recognized in the 1950s, within a few years of the definition of this index. This paper discusses the underlying issues, with references to associated literature. The problem arises from the nonlinear relation of the individual probabilities to this entropy measure. In this case, the observed genotype fraction for gene i in a sample of size n, $\hat{p}_{n,i}$, is an unbiased estimator of the true probability, $p_{i}$. But when that observed value is applied to the "plug in" formula for entropy over M genes: $$\hat{\theta}_n = \theta(\hat{\mathbf{p}}_n) = - \sum_{i=1}^{M} \hat{p}_{n,i} \log \hat{p}_{n,i}.$$ the non-linear relation means that the resulting value is a biased under-estimate of the true genetic diversity. The bias depends on the number of genes, $M$, and the number of observations, $N$. To first order, the plug-in estimate will be lower than the true entropy by an amount $(M-1)/(2N)$. Higher-order corrections are evaluated in the paper linked above. There are packages in R that deal with this issue. The simboot package in particular has a function estShannonf that makes these bias corrections, and a function sbdiv for calculating confidence intervals. It will be better to use such established open-source tools for your analysis rather than try to start over from scratch.
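The first-order correction can be applied by hand. A Python sketch (with simulated genotype counts, since the thread's data are not shown) that adds $(M-1)/(2N)$ back to the plug-in estimate, which is essentially the classical Miller-Madow correction:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 50, 100   # number of categories (genes) and observations

def plugin_entropy(counts):
    # Plug-in Shannon entropy from category counts
    p = counts[counts > 0] / counts.sum()
    return float(-np.sum(p * np.log(p)))

# Simulated sample from the uniform distribution over M categories
counts = np.bincount(rng.integers(0, M, size=N), minlength=M)

h_plugin = plugin_entropy(counts)
# First-order bias correction: add back (M - 1) / (2N)
h_corrected = h_plugin + (M - 1) / (2 * N)

print(h_plugin, h_corrected, np.log(M))
```

For production work the established implementations (e.g. estShannonf mentioned above, which also handles higher-order terms and confidence intervals) are preferable to this bare first-order version.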
16,286
Question about bias-variance tradeoff
Well, sort of. As stated, you ascribe intent to the scientist to minimize either bias or variance. In practice, you cannot explicitly observe the bias or the variance of your model (if you could, then you would know the true signal, in which case you wouldn't need a model). In general, you can only observe the error rate of your model on a specific data set, and you seek to estimate the out-of-sample error rate using various creative techniques. Now you do know that, theoretically at least, this error rate can be decomposed into bias and variance terms, but you cannot directly observe this balance in any specific concrete situation. So I'd restate your observations slightly as: A model is underfit to the data when the bias term contributes the majority of out-of-sample error. A model is overfit to the data when the variance term contributes the majority of out-of-sample error. In general, there is no real way to know for sure, as you can never truly observe the model bias. Nonetheless, there are various patterns of behavior that are indicative of being in one situation or the other: Overfit models tend to have much worse goodness-of-fit performance on a testing data set vs. a training data set. Underfit models tend to have similar goodness-of-fit performance on a testing vs. training data set. These are the patterns that are manifest in the famous plots of error rates by model complexity; this one is from The Elements of Statistical Learning: Oftentimes these plots are overlaid with a bias and variance curve. I took this one from this nice exposition: But, it is very important to realize that you never actually get to see these additional curves in any realistic situation.
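These train/test patterns can be reproduced on synthetic data. A Python sketch with a made-up noisy sine signal: polynomials of increasing degree are fit to a training set, and the train vs. test MSE is compared (the overfit degree-15 model fits the training data best but generalizes worst).

```python
import numpy as np

rng = np.random.default_rng(0)
# Noisy sine data, with separate train and test draws
x_tr = np.sort(rng.uniform(0, 1, 30))
y_tr = np.sin(2 * np.pi * x_tr) + rng.normal(0, 0.3, 30)
x_te = np.sort(rng.uniform(0, 1, 200))
y_te = np.sin(2 * np.pi * x_te) + rng.normal(0, 0.3, 200)

def mse(y, yhat):
    return float(np.mean((y - yhat) ** 2))

results = {}
for degree in (1, 3, 15):   # underfit, about right, overfit
    coef = np.polyfit(x_tr, y_tr, degree)
    results[degree] = (mse(y_tr, np.polyval(coef, x_tr)),   # train MSE
                       mse(y_te, np.polyval(coef, x_te)))   # test MSE
    print(degree, results[degree])
```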
16,287
Question about bias-variance tradeoff
Illustrating the Bias - Variance Tradeoff using a toy example As @Matthew Drury points out, in realistic situations you don't get to see the last graph, but the following toy example may provide visual interpretation and intuition to those who find it helpful. Dataset and assumptions Consider the dataset which consists of i.i.d. samples of a random variable $Y$ defined as $Y = \sin(\pi x - 0.5) + \epsilon$ where $\epsilon \sim \mathrm{Uniform}(-0.5,0.5)$, or in other words $Y = f(x) + \epsilon$. Note that $x$ is not a random variable, hence the variance of $Y$ is $Var(Y) = Var(\epsilon) = \frac{1}{12}$. We will be fitting a polynomial regression model (linear in its coefficients) to this dataset, of the form $ \hat f(x) = \beta_0 + \beta_1 x + \beta_2 x^2 + ... + \beta_p x^p$. Fitting various polynomial models Intuitively, you would expect a straight-line fit to perform badly, as the dataset is clearly non-linear. Similarly, fitting a very high-order polynomial might be excessive. This intuition is reflected in the graph below, which shows the various models and their corresponding Mean Square Error for train and test data. The above graph works for a single train / test split, but how do we know whether it generalizes? Estimating the expected train and test MSE Here we have many options, but one approach is to randomly split the data between train / test, fit the model on the given split, and repeat this experiment many times. The resulting MSE can be plotted, and the average is an estimate of the expected error. It is interesting to see that the test MSE fluctuates wildly for different train / test splits of the data. But taking the average over a sufficiently large number of experiments gives us better confidence. Note the gray dotted line that shows the variance of $Y$ computed at the beginning.
It appears that on average the test MSE is never below this value  Bias - Variance Decomposition As explained here the MSE can be broken down into 3 main components: $$E[ (Y - \hat f)^2 ] = \sigma^2_\epsilon + Bias^2[\hat f] + Var[\hat f]$$ $$E[ (Y - \hat f)^2 ] = \sigma^2_\epsilon + \left[ f - E[\hat f] \right]^2 + E\left[ \hat f - E[ \hat f] \right]^2$$ Where in our toy case: $f$ is known from the initial dataset $\sigma^2_\epsilon $ is known from the uniform distribution of $\epsilon$ $E[\hat f]$ can be computed as above $\hat f$ corresponds to a lightly colored line $E\left[ \hat f - E[ \hat f] \right]^2$ can be estimated by taking the average Giving the following relation Note: the graph above uses the training data to fit the model and then calculates the MSE on train + test.
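The decomposition can be checked numerically for this toy model. The Python sketch below repeats the simulation described above (degree-1 fit to $\sin(\pi x - 0.5)$ with Uniform$(-0.5, 0.5)$ noise, so $\sigma^2_\epsilon = 1/12$) and compares the Monte Carlo MSE at a single point $x_0$ with $\sigma^2_\epsilon + \mathrm{Bias}^2 + \mathrm{Var}$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.sin(np.pi * x - 0.5)
sigma2 = 1 / 12                  # Var of Uniform(-0.5, 0.5) noise
x = np.linspace(0, 1, 50)
x0 = 0.5                         # point at which we decompose the error

n_sim = 4000
preds = np.empty(n_sim)
sq_err = np.empty(n_sim)
for i in range(n_sim):
    y = f(x) + rng.uniform(-0.5, 0.5, x.size)
    coef = np.polyfit(x, y, 1)           # the (underfitting) straight-line model
    fhat = np.polyval(coef, x0)
    preds[i] = fhat
    y0 = f(x0) + rng.uniform(-0.5, 0.5)  # a fresh noisy observation at x0
    sq_err[i] = (y0 - fhat) ** 2

bias2 = (f(x0) - preds.mean()) ** 2
var = preds.var()
mse = sq_err.mean()
print(mse, sigma2 + bias2 + var)         # the two sides nearly agree
```

For this underfitting model the bias term dominates the variance term, as the discussion above would predict.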
16,288
Why is n-gram used in text language identification instead of words?
I think the most detailed answers can be found in Mehryar Mohri's extensive work on the topic. Here's a link to one of his lecture slides on the topic: https://web.archive.org/web/20151125061427/http://www.cims.nyu.edu/~mohri/amls/lecture_3.pdf

The problem with language detection is that human language (words) has structure. For example, in English, it's very common for the letter 'u' to follow the letter 'q,' while this is not the case in transliterated Arabic. n-grams work by capturing this structure. Thus, certain combinations of letters are more likely in some languages than others. This is the basis of n-gram classification.

Bag-of-words, on the other hand, depends on searching through a large dictionary and essentially doing template matching. There are two main drawbacks here: 1) each language would have to have an extensive dictionary of words on file, which would take a relatively long time to search through, and 2) bag-of-words will fail if none of the words in the training set are included in the testing set.

Assuming that you are using bigrams (n=2) and there are 26 letters in your alphabet, then there are only 26^2 = 676 possible bigrams for that alphabet, many of which will never occur. Therefore, the "profile" (to use the language detector's term) for each language needs a very small database. A bag-of-words classifier, on the other hand, would need a full dictionary for EACH language in order to guarantee that a language could be detected based on whichever sentence it was given.

So in short: each language profile can be quickly generated with a relatively small feature space. Interestingly, n-grams only work because letters are not drawn i.i.d. in a language; this is explicitly leveraged. Note: the general equation for the number of possible n-grams is l^n, where l is the number of letters in the alphabet.
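As a toy illustration of the profile idea, the sketch below builds per-language bigram frequency profiles and classifies a new string by total profile mass. This is a minimal sketch, not how any production detector works; the tiny corpora and the scoring rule are made up for the example.

```python
from collections import Counter

def ngrams(text, n=2):
    """Character n-grams of a lowercased string (spaces kept as boundaries)."""
    t = text.lower()
    return [t[i:i + n] for i in range(len(t) - n + 1)]

def profile(texts, n=2):
    """Relative n-gram frequencies for a training corpus."""
    counts = Counter()
    for t in texts:
        counts.update(ngrams(t, n))
    total = sum(counts.values())
    return {g: c / total for g, c in counts.items()}

def classify(text, profiles, n=2):
    """Pick the language whose profile gives the text the highest score."""
    grams = ngrams(text, n)
    def score(p):
        return sum(p.get(g, 0.0) for g in grams)
    return max(profiles, key=lambda lang: score(profiles[lang]))

# Tiny illustrative corpora (made up for this sketch)
profiles = {
    "en": profile(["the quick brown fox jumps over the lazy dog",
                   "this is a short english sentence"]),
    "de": profile(["der schnelle braune fuchs springt ueber den faulen hund",
                   "dies ist ein kurzer deutscher satz"]),
}

print(classify("the dog is quick", profiles))   # likely "en"
```

Even with two sentences per language, bigrams like 'qu' and 'th' versus 'sc' and 'ch' carry enough signal to separate the two profiles.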
Why is n-gram used in text language identification instead of words?
Letter N-grams are used instead of words for several reasons:

1) The list of words needed for a given language is quite large, perhaps 100,000 if you consider fast, faster, fastest, fasted, fasts, fasting, ... as all different words. For 80 languages, you need about 80x as many words, taking up a lot of space: 50+ megabytes.

2) The number of letter trigrams for a 26-letter alphabet is 26**3 or about 17,000, and for quadgrams (N=4) about 450,000, covering ALL languages using that alphabet. Similar but somewhat larger numbers apply for N-grams in larger alphabets of 30-100 characters. For the CJK languages with 4000+ letters in the Han script, unigrams (N=1) are sufficient. For some Unicode scripts, there is just one language per script (Greek, Armenian), so no letter combinations are needed (so-called nil-grams, N=0).

3) With words, you have no information at all when given a word not in the dictionary, while with letter N-grams you often have at least a few useful letter combinations within that word.

CLD2 uses quadgrams for most Unicode scripts (alphabets) including Latin, Cyrillic, and Arabic, unigrams for the CJK scripts, nilgrams for other scripts, and also includes a limited number of quite-distinctive and fairly common complete words and pairs of words for distinguishing within difficult groups of statistically-similar languages, such as Indonesian and Malay. Letter bigrams and trigrams are perhaps useful for distinguishing among a tiny number of languages (about eight, see https://docs.google.com/document/d/1NtErs467Ub4yklEfK0C9AYef06G_1_9NHL5dPuKIH7k/edit), but are useless for distinguishing dozens of languages. Thus, CLD2 uses quadgrams, associating with each letter combination the top three most likely languages using that combination. This allows covering 80 languages with about 1.5 MB of tables and 160 languages in more detail with about 5 MB of tables.
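The counts in point 2 are simple arithmetic: an l-letter alphabet admits l**n possible n-grams, which a couple of lines verify.

```python
# Possible character n-grams for an alphabet of size l is l**n,
# matching the approximate figures quoted in the answer.
for l, n in [(26, 2), (26, 3), (26, 4), (4000, 1)]:
    print(f"alphabet size {l}, n={n}: {l**n:,} possible n-grams")
```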
When to Log/Exp your Variables when using Random Forest Models?
The way Random Forests are built is invariant to monotonic transformations of the independent variables. Splits will be completely analogous. If you are just aiming for accuracy, you will not see any improvement in it. In fact, since Random Forests are able to find complex non-linear relations and variable interactions on the fly (why are you calling this linear regression?), if you transform your independent variables you may smooth out the information that allows this algorithm to do this properly.

Sometimes Random Forests are not treated as a black box and are used for inference. For example, you can interpret the variable importance measures they provide, or calculate some sort of marginal effects of your independent variables on your dependent variable. This is usually visualized as partial dependence plots. I'm pretty sure this last thing is highly influenced by the scale of the variables, which is a problem when trying to obtain information of a more descriptive nature from Random Forests. In this case it might help you to transform your variables (standardize), which could make partial dependence plots comparable. Not completely sure on this, will have to think on it.

Not long ago I tried to predict count data using a Random Forest; regressing on the square root and the natural log of the dependent variable helped a bit, but not much, and not enough to let me keep the model.

Some packages with which you may use random forests for inference: https://uc-r.github.io/lime https://cran.r-project.org/web/packages/randomForestExplainer/index.html https://pbiecek.github.io/DALEX_docs/2-2-useCaseApartmetns.html
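The invariance claim in the first paragraph follows from the fact that axis-aligned tree splits only use the ordering of a predictor, and a monotone transform preserves that ordering. A small sketch with toy values (not a real forest) makes the point: every split on x has an equivalent split on log(x), obtained by mapping the threshold too.

```python
import math

def partition(values, threshold):
    """Indices sent left (<= threshold) by an axis-aligned split."""
    return {i for i, v in enumerate(values) if v <= threshold}

x = [0.5, 1.0, 3.0, 7.5, 20.0]
log_x = [math.log(v) for v in x]

# Any split on x has an equivalent split on log(x): map the threshold too.
for t in [0.7, 2.0, 10.0]:
    assert partition(x, t) == partition(log_x, math.log(t))

# Hence the set of achievable partitions, and so the grown trees, agree.
print("splits on x and log(x) induce identical partitions")
```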
When to Log/Exp your Variables when using Random Forest Models?
Echoing @JEquihua, Random Forest prediction accuracy won't improve. Also note, if you keep both the original predictor and the transformed predictor (as is often done in linear regression), you may cause problems. That's because RF randomly chooses a subset of the variables to grow each tree, and you've essentially put the transformed variable in twice. If it's a strong predictor, it will get used often, and your trees won't be as uncorrelated as they might have been, leading to higher variance.
What is cross section in "cross section of stock return"?
Cochrane (p. 435, 2005) gives a simple explanation of the difference between looking at expected returns in the time series and in the cross section:

Time series: how average returns change over time.
Cross section: how average returns change across different stocks or portfolios.

So intuitively, if you study the cross section of stock returns, you want to answer the question why stock A earns higher/lower returns than stock B. That's why you call it a cross section: at one point in time, you check the cross section of many stocks. Note that you do not need a time series for that; you really need only one point in time (and in some corporate finance studies this is also done because they only want to explain the cross section for one shock, let's say the default of Lehman; however, in most studies, you check the cross section during an interval, probably to increase the sample size).

So for instance, if you look at the CAPM, that's a model that explains the cross section of stock returns with only one factor, the systematic risk of a stock. Since the CAPM is empirically not successful in explaining stock returns completely, there are other models, such as the Fama-French 3-factor model. Note that those models do not help in explaining the time series. The CAPM does not tell you whether the market risk premium should be high or low today, only how much higher the return of stock A should be compared to stock B, given a certain risk premium and risk-free rate.

Reference: Cochrane, John (2005): Asset Pricing, Revised Edition, Princeton University Press
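To make the CAPM point concrete, here is a tiny numerical sketch. The risk-free rate, market risk premium, and betas below are hypothetical numbers, not from Cochrane: given those quantities, the model only ranks stocks in the cross section by their systematic risk, and says nothing about the level of the premium over time.

```python
# Toy CAPM cross-section: E[R_i] = rf + beta_i * mrp.
rf, mrp = 0.02, 0.05            # hypothetical risk-free rate and market premium
betas = {"A": 1.5, "B": 0.8}    # hypothetical systematic risk of two stocks
expected = {s: rf + b * mrp for s, b in betas.items()}
print(expected)  # stock A carries more systematic risk, so a higher expected return
```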
Software package to solve L-infinity norm linear regression
Short answer: Your problem can be formulated as a linear program (LP), leaving you to choose your favorite LP solver for the task. To see how to write the problem as an LP, read on. This minimization problem is often referred to as Chebyshev approximation. Let $\newcommand{\y}{\mathbf{y}}\newcommand{\X}{\mathbf{X}}\newcommand{\x}{\mathbf{x}}\newcommand{\b}{\mathbf{\beta}}\newcommand{\reals}{\mathbb{R}}\newcommand{\ones}{\mathbf{1}_n} \y = (y_i) \in \reals^n$, $\X \in \reals^{n \times p}$ with row $i$ denoted by $\x_i$ and $\b \in \reals^p$. Then we seek to minimize the function $f(\b) = \|\y - \X \b\|_\infty$ with respect to $\b$. Denote the optimal value by $$ f^\star = f(\b^\star) = \inf \{f(\b): \b \in \reals^p \} \>. $$ The key to recasting this as an LP is to rewrite the problem in epigraph form. It is not difficult to convince oneself that, in fact, $$ f^\star = \inf\{t: f(\b) \leq t, \;t \in \reals, \;\b \in \reals^p \} \> . $$ Now, using the definition of the function $f$, we can rewrite the right-hand side above as $$ f^\star = \inf\{t: -t \leq y_i - \x_i \b \leq t, \;t \in \reals, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>, $$ and so we see that minimizing the $\ell_\infty$ norm in a regression setting is equivalent to the LP $$ \begin{array}{ll} \text{minimize} & t \\ \text{subject to} & \y-\X \b \leq t\ones \\ & \y - \X \b \geq - t \ones \>, \\ \end{array} $$ where the optimization is done over $(\b, t)$, and $\ones$ denotes a vector of ones of length $n$. I leave it as an (easy) exercise for the reader to recast the above LP in standard form. Relationship to the $\ell_1$ (total variation) version of linear regression It is interesting to note that something very similar can be done with the $\ell_1$ norm. Let $g(\b) = \|\y - \X \b \|_1$. 
Then, similar arguments lead one to conclude that $$\newcommand{\t}{\mathbf{t}} g^\star = \inf\{\t^T \ones : -t_i \leq y_i - \x_i \b \leq t_i, \;\t = (t_i) \in \reals^n, \;\b \in \reals^p,\; 1 \leq i \leq n \} \>, $$ so that the corresponding LP is $$ \begin{array}{ll} \text{minimize} & \t^T \ones \\ \text{subject to} & \y-\X \b \leq \t \\ & \y - \X \b \geq - \t \>. \\ \end{array} $$ Note here that $\t$ is now a vector of length $n$ instead of a scalar, as it was in the $\ell_\infty$ case. The similarity in these two problems and the fact that they can both be cast as LPs is, of course, no accident. The two norms are related in that they are the dual norms of each other.
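The $\ell_\infty$ LP above is easy to hand to a generic solver. As a sketch (assuming SciPy's `linprog` is available; the helper name and the synthetic data are mine), stack the decision vector as $(\b, t)$ and encode both inequality blocks:

```python
import numpy as np
from scipy.optimize import linprog

def chebyshev_fit(X, y):
    """Minimize ||y - X b||_inf via the LP above. Decision vector is (b, t)."""
    n, p = X.shape
    c = np.r_[np.zeros(p), 1.0]                  # objective: minimize t
    # y - X b <= t1  ->  -X b - t <= -y ;  y - X b >= -t1  ->  X b - t <= y
    A_ub = np.block([[-X, -np.ones((n, 1))],
                     [ X, -np.ones((n, 1))]])
    b_ub = np.r_[-y, y]
    bounds = [(None, None)] * p + [(0, None)]    # b free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:p], res.fun                    # (b*, f*)

rng = np.random.default_rng(1)
X = np.c_[np.ones(40), rng.uniform(-1, 1, 40)]
y = X @ np.array([2.0, -1.0]) + rng.uniform(-0.3, 0.3, 40)
beta, fstar = chebyshev_fit(X, y)
# At the optimum, the objective equals the largest absolute residual.
assert np.isclose(np.abs(y - X @ beta).max(), fstar)
```

Since the noise here lies in $(-0.3, 0.3)$, the true coefficients already achieve a max residual below 0.3, so the optimal $f^\star$ must be smaller still.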
Software package to solve L-infinity norm linear regression
Matlab can do it, using cvx. To get cvx (free): http://cvxr.com/cvx/download/ In cvx, you would write it this way:

cvx_begin
    variable x(n);
    minimize( norm(A*x-b,Inf) );
cvx_end

(check the example on page 12 of the manual) There is a Python implementation of CVX (here), but the commands are slightly different...
Software package to solve L-infinity norm linear regression
@cardinal's answer is well-stated and has been accepted, but, for the sake of closing this thread completely, I'll offer the following: the IMSL Numerical Libraries contain a routine for performing L-infinity norm regression. The routine is available in Fortran, C, Java, C# and Python. I have used the C and Python versions, for which the method is called lnorm_regression; it also supports general $L_p$-norm regression, $p \geq 1$. Note that these are commercial libraries, but the Python versions are free (as in beer) for non-commercial use.
Pros and cons of bootstrapping
The bootstrap is a method of doing inference in a way that does not require assuming a parametric form for the population distribution. It does not treat the original sample as if it is the population, even though it involves sampling with replacement from the original sample. It assumes that sampling with replacement from the original sample of size n mimics taking a sample of size n from a larger population. It also has many variants, such as the m out of n bootstrap, which draws resamples of size m from a sample of size n, where m < n. The nice properties of the bootstrap depend on asymptotic theory.

As others have mentioned, the bootstrap does not contain more information about the population than what is given in the original sample. For that reason it sometimes doesn't work well in small samples. In my book "Bootstrap Methods: A Practitioner's Guide", second edition, published by Wiley in 2007, I point out situations where the bootstrap can fail. These include distributions that do not have finite moments, small sample sizes, estimating extreme values of the distribution, and estimating variance in survey sampling where the population size is N and a large sample n is taken.

In some cases variants of the bootstrap can work better than the original approach. This happens with the m out of n bootstrap in some applications. In the case of estimating error rates in discriminant analysis, the 632 bootstrap is an improvement over other methods, including other bootstrap methods.

A reason for using the bootstrap is that sometimes you can't rely on parametric assumptions, and in some situations the bootstrap works better than other non-parametric methods. It can be applied to a wide variety of problems including nonlinear regression, classification, confidence interval estimation, bias estimation, adjustment of p-values, and time series analysis, to name a few.
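As a minimal sketch of the basic idea, here is a percentile bootstrap confidence interval for the mean. The function name, the 2,000 replications, and the exponential toy sample are arbitrary choices of mine, and this is the plain bootstrap, not the m out of n or 632 variants discussed above.

```python
import numpy as np

def percentile_ci(data, stat=np.mean, n_boot=2000, alpha=0.05, seed=0):
    """Nonparametric percentile bootstrap CI: resample with replacement
    from the sample of size n, mimicking draws from the population."""
    rng = np.random.default_rng(seed)
    data = np.asarray(data)
    reps = np.array([stat(rng.choice(data, size=data.size, replace=True))
                     for _ in range(n_boot)])
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])

# A skewed toy sample where a parametric normal-theory interval is dubious
sample = np.random.default_rng(42).exponential(scale=2.0, size=100)
lo, hi = percentile_ci(sample)
print(f"95% bootstrap CI for the mean: ({lo:.2f}, {hi:.2f})")
```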
Pros and cons of bootstrapping
A bootstrap sample can only tell you things about the original sample, and won't give you any new information about the real population. It is simply a nonparametric method for constructing confidence intervals and similar. If you want to gain more information about the population, you have to gather more data from the population.
Why linear regression has assumption on residual but generalized linear model has assumptions on response?
That simple linear regression has Gaussian errors is a very nice attribute that does not generalize to generalized linear models. In generalized linear models, the response follows some given distribution given the mean. Linear regression follows this pattern: if we have $y_i = \beta_0 + \beta_1 x_i + \epsilon_i$ with $\epsilon_i \sim N(0, \sigma)$, then we also have $y_i \sim N(\beta_0 + \beta_1 x_i, \sigma)$. Okay, so the response follows the given distribution for generalized linear models, but for linear regression we also have that the residuals follow a Gaussian distribution. Why is it emphasized that the residuals are normal when that's not the generalized rule? Well, because it's the much more useful rule. The nice thing about thinking about normality of the residuals is that they are much easier to examine. If we subtract out the estimated means, all the residuals should have roughly the same variance and roughly the same mean (0) and will be roughly normally distributed (note: I say "roughly" because we don't have perfect estimates of the regression parameters, so the estimates of $\epsilon_i$ will have slightly different variances depending on the values of $x$. But hopefully there's enough precision in the estimates that this is ignorable!). On the other hand, looking at the unadjusted $y_i$'s, we can't really tell if they are normal if they all have different means. For example, consider the following model: $y_i = 0 + 2 \times x_i + \epsilon_i$ with $\epsilon_i \sim N(0, 0.2)$ and $x_i \sim \text{Bernoulli}(p = 0.5)$. Then the $y_i$ will be highly bimodal, yet this does not violate the assumptions of linear regression! The residuals, on the other hand, will follow a roughly normal distribution. Here's some R code to illustrate.
x <- rbinom(1000, size = 1, prob = 0.5)
y <- 2 * x + rnorm(1000, sd = 0.2)
fit <- lm(y ~ x)
resids <- residuals(fit)

par(mfrow = c(1, 2))
hist(y, main = 'Distribution of Responses')
hist(resids, main = 'Distribution of Residuals')
16,299
Why linear regression has assumption on residual but generalized linear model has assumptions on response?
The assumptions are not inconsistent. If, for $i = 1, \ldots, n$, you assume $$ Y_i = \beta_0 + \beta_1 X_{i1} + \ldots + \beta_k X_{ik} + \epsilon_i, $$ with the errors $\epsilon_i$ being normally distributed with mean 0 and variance $\sigma^2$, that's the same as assuming that conditional on $X_{i1}, \ldots, X_{ik}$, the response $Y_i$ is normally distributed with mean $\beta_0 + \beta_1 X_{i1} + \ldots + \beta_k X_{ik}$ and variance $\sigma^2$. This is because having conditioned on $X_{i1}, \ldots, X_{ik}$, we treat $\beta_0 + \beta_1 X_{i1} + \ldots + \beta_k X_{ik}$ as being constant. The usual multiple linear regression model with normal errors is a generalised linear model with normal response and identity link.
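The closing statement can be verified numerically. The sketch below (my own construction, assuming NumPy) fits the same data by ordinary least squares and by the IRLS algorithm used for GLMs; with a Gaussian response and identity link, each IRLS step reduces to exactly the OLS normal equations:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # intercept + one covariate
y = X @ np.array([1.0, 2.0]) + rng.normal(scale=0.5, size=n)

# OLS via the normal equations
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# IRLS for a Gaussian GLM with identity link: mu = eta, the working
# weights are all 1 and the working response z equals y, so every
# iteration solves the OLS normal equations and converges immediately.
beta = np.zeros(2)
for _ in range(5):
    eta = X @ beta
    mu = eta                       # identity link
    w = np.ones(n)                 # Gaussian variance function V(mu) = 1
    z = eta + (y - mu)             # working response (= y here)
    beta = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * z))

print(np.allclose(beta, beta_ols))  # -> True
```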
16,300
Should we standardize the data while doing Gaussian process regression?
Yes, it is desirable to standardize the data when fitting Gaussian process regression. There are a number of reasons: In the common Gaussian process regression model we assume that the output $y$ has zero mean, so we should standardize $y$ to match that assumption. Many covariance functions have scale parameters, so we should standardize the inputs to get better estimates of the covariance-function parameters. Gaussian process regression is prone to numerical problems because we have to invert an ill-conditioned covariance matrix; standardizing your data makes this problem less severe. Some packages do this job for you; for example, GPR in sklearn has an option normalize that normalizes the inputs, but not the outputs; see this.
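A minimal sketch of the preprocessing step itself, using plain NumPy rather than any package option (the standardize helper and the data here are my own illustration):

```python
import numpy as np

def standardize(a, axis=0):
    """Center to zero mean and scale to unit variance; return the
    statistics so predictions can be mapped back to the original scale."""
    mean = a.mean(axis=axis)
    std = a.std(axis=axis)
    return (a - mean) / std, mean, std

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 1000.0, size=(50, 3))                # wildly scaled inputs
y = X @ np.array([0.1, -0.2, 0.05]) + rng.normal(size=50)

X_std, X_mean, X_scale = standardize(X)
y_std, y_mean, y_scale = standardize(y)
# fit the GP on (X_std, y_std); afterwards undo the scaling with
# y_pred = y_pred_std * y_scale + y_mean
```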