34,801
How to evaluate results of linear regression
If you really are fine with your linear training model and want to know how well it would predict your test data, then all you have to do is use the linear model formula you already have, plugging in the estimated coefficients a (the intercept) and b (the regression coefficient, also called the slope) from the first model. It should look like y = a + b*X; with some imaginary numbers: y = 2 + 0.5*X. Which software are you using? If you are using R, you can use the predict.lm() function and apply it to your 2nd dataset.
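The suggestion above can be sketched in R as follows; the data and variable names here are made up purely for illustration:

```r
# Fit a linear model on training data, then predict on a new data set.
set.seed(1)
train <- data.frame(X = 1:20)
train$y <- 2 + 0.5 * train$X + rnorm(20)   # true model: y = 2 + 0.5*X + noise

fit  <- lm(y ~ X, data = train)            # estimates a (intercept) and b (slope)
test <- data.frame(X = 21:30)              # the "2nd dataset"
pred <- predict(fit, newdata = test)       # dispatches to predict.lm()

# Equivalent by hand: a + b*X with the estimated coefficients
pred_manual <- coef(fit)[1] + coef(fit)[2] * test$X
```

The call to predict() with a fitted lm object is exactly the y = a + b*X formula applied to the new X values.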
34,802
How to evaluate results of linear regression
While this largely depends on exactly what your goals are, a simple and standard way to do this would be to measure the mean squared error (MSE). So if you have your test dataset $\mathcal{D}$, which consists of input/output pairs, $\mathcal{D} = \{(x_1, y_1), (x_2, y_2), \ldots, (x_n, y_n)\}$, and your parameters $a$ and $b$, then the MSE can be calculated as $$ \text{MSE}_{a,b} = \frac{1}{n}\sum_{i=1}^n (y_i - (ax_i + b))^2. $$ This is probably a sensible way to measure your error, since this is likely the criterion you used for finding the parameters $a$ and $b$ in the first place. If you want a better idea of how well your estimated parameters generalize, you should look into something like cross-validation.
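The formula above transcribes directly into R; the numbers here are invented for illustration:

```r
# Mean squared error of the predictions a*x + b on a test set.
mse <- function(x, y, a, b) mean((y - (a * x + b))^2)

x <- c(1, 2, 3, 4)
y <- c(2.1, 3.9, 6.2, 7.8)
mse(x, y, a = 2, b = 0)   # -> 0.025; the line y = 2x fits these points well
```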
34,803
How to use the SD of a normal sampling distribution to specify the gamma prior for the corresponding precision?
I eventually worked out the answer myself (with the help of a mathematician friend). In JAGS/BUGS we can define the prior distribution on the precision of a normal distribution using a gamma distribution, which also happens to be a conjugate prior for the normal distribution parameterized by precision. We want to be able to specify this gamma prior using our guess of the mean of the SD of the normal distribution and the SD of the SD of the normal distribution. In order to do this we need to find the prior distribution that corresponds to the gamma distribution but is the conjugate prior for a normal distribution parameterized by SD. I found three mentions of this distribution, where it is called either the inverted half gamma (Fink, 1997) or the inverted gamma-1 (Adjemian, 2010; LaValle, 1970). The inverted gamma-1 distribution has two parameters, $\nu$ and $s$, which correspond to $2 \cdot \text{shape}$ and $2 \cdot \text{rate}$ of the gamma distribution respectively. The mean and variance of the inverted gamma-1 are: $$ \mu = \sqrt{\frac{s}{2}}\frac{\Gamma(\frac{\nu-1}{2})}{\Gamma(\frac{\nu}{2})} \space \text{ and } \space \sigma^2 = \frac{s}{\nu - 2} - \mu^2.$$ There does not seem to be a closed-form solution that allows us to get $\nu$ and $s$ if we specify $\mu$ and $\sigma$. Adjemian (2010) recommends a numerical approach, and fortunately a Matlab script that does this is available from the open source platform Dynare. The following is an R translation of that script:

# Copyright (C) 2003-2008 Dynare Team, modified 2012 by Rasmus Bååth
#
# This file is a modified R version of an original Matlab file that is part of Dynare.
#
# Dynare is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
# Dynare is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.

inverse_gamma_specification <- function(mu, sigma) {
  if(sigma^2 < Inf) {
    # Bracket nu by doubling, then bisect until the defining equation is solved.
    nu  <- sqrt(2 * (2 + mu^2/sigma^2))
    nu2 <- 2 * nu
    nu1 <- 2
    err <- 2*mu^2*gamma(nu/2)^2 - (sigma^2 + mu^2)*(nu - 2)*gamma((nu - 1)/2)^2
    while(abs(nu2 - nu1) > 1e-12) {
      if(err > 0) {
        nu1 <- nu
        if(nu < nu2) {
          nu <- nu2
        } else {
          nu  <- 2 * nu
          nu2 <- nu
        }
      } else {
        nu2 <- nu
      }
      nu  <- (nu1 + nu2)/2
      err <- 2*mu^2*gamma(nu/2)^2 - (sigma^2 + mu^2)*(nu - 2)*gamma((nu - 1)/2)^2
    }
    s <- (sigma^2 + mu^2)*(nu - 2)
  } else {
    nu <- 2
    s  <- 2*mu^2/pi
  }
  c(nu = nu, s = s)
}

The R/JAGS script below shows how we can now specify our gamma prior on the precision of a normal distribution.

library(rjags)

model_string <- "model{
  y ~ dnorm(0, tau)
  sigma <- 1/sqrt(tau)
  tau ~ dgamma(shape, rate)
}"

# Here we specify the mean and sd of sigma and get the corresponding
# parameters for the gamma distribution.
mu_sigma <- 100
sd_sigma <- 50
params <- inverse_gamma_specification(mu_sigma, sd_sigma)
shape <- params["nu"] / 2
rate  <- params["s"] / 2

data.list <- list(y = NA, shape = shape, rate = rate)
model <- jags.model(textConnection(model_string), data = data.list,
                    n.chains = 4, n.adapt = 1000)
update(model, 10000)
samples <- as.matrix(coda.samples(model,
    variable.names = c("y", "tau", "sigma"), n.iter = 10000))

And now we can check whether the sampled posteriors (which should mimic the priors, as we gave no data to the model, that is, y = NA) are as we specified.

mean(samples[, "sigma"])
## 99.87198
sd(samples[, "sigma"])
## 49.37357

par(mfcol = c(3, 1), mar = c(2, 2, 2, 2))
plot(density(samples[, "tau"]), main = "tau")
plot(density(samples[, "sigma"]), main = "sigma")
plot(density(samples[, "y"]), main = "y")

This seems to be correct. Any objections or comments on this method of specifying a prior would be much appreciated!
Edit: Calculating the shape and rate of the gamma prior as we have done above is not the same as directly putting a gamma prior on the SD of the normal distribution. This is illustrated by the R script below.

# Generating random precision values and converting to SD,
# using the shape and rate values calculated above
rand_precision <- rgamma(999999, shape = shape, rate = rate)
rand_sd <- 1/sqrt(rand_precision)

# Specifying the mean and sd of the gamma distribution directly, using the
# mu_sigma and sd_sigma specified before, and generating random SD values
shape2 <- mu_sigma^2/sd_sigma^2
rate2  <- mu_sigma/sd_sigma^2
rand_sd2 <- rgamma(999999, shape2, rate2)

The two distributions now have the same mean and SD:

mean(rand_sd)
## 99.96195
mean(rand_sd2)
## 99.95316
sd(rand_sd)
## 50.21289
sd(rand_sd2)
## 50.01591

But they are not the same distribution:

plot(density(rand_sd[rand_sd < 400]), col = "blue", lwd = 4, xlim = c(0, 400))
lines(density(rand_sd2[rand_sd2 < 400]), col = "red", lwd = 4, xlim = c(0, 400))

From what I've read, it seems more usual to put a gamma prior on the precision than a gamma prior on the SD, but I don't know what the argument would be for preferring the former over the latter.

References

Fink, D. (1997). A compendium of conjugate priors. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.157.5540&rep=rep1&type=pdf

Adjemian, S. (2010). Prior Distributions in Dynare. http://www.dynare.org/stepan/dynare/text/DynareDistributions.pdf

LaValle, I.H. (1970). An Introduction to Probability, Decision, and Inference. Holt, Rinehart and Winston, New York.
34,804
How to use the SD of a normal sampling distribution to specify the gamma prior for the corresponding precision?
My impression (perhaps mistaken?) is that your goal is to put a prior on sigma instead of on tau (which equals 1/sigma^2), because it is more intuitive to deal with sigma. As Erik replied earlier, this is straightforward in JAGS/BUGS:

tau <- pow(sigma, -2)
sigma ~ thePriorOfYourChoice

There are various examples of this in Doing Bayesian Data Analysis, such as Figure 18.1, p. 494. For an example of putting a gamma prior on sigma, see this blog post: http://doingbayesiandataanalysis.blogspot.com/2012/04/improved-programs-for-hierarchical.html
34,805
Calculating precision and recall in R
I wrote a function for this purpose, based on the exercise in the book Data Mining with R:

# Function: evaluation metrics
## True positives (TP)  - correctly identified as success
## True negatives (TN)  - correctly identified as failure
## False positives (FP) - failure incorrectly identified as success
## False negatives (FN) - success incorrectly identified as failure
## Precision - P = TP/(TP+FP): how many of those identified as success really are
## Recall    - R = TP/(TP+FN): how many of the actual successes were identified
## F-score   - F = (2 * P * R)/(P + R): harmonic mean of precision and recall
prf <- function(predAct){
  ## predAct is a two-column dataframe of pred, act
  preds <- predAct[, 1]
  trues <- predAct[, 2]
  xTab  <- table(preds, trues)   # rows: predicted class, columns: actual class
  clss  <- as.character(sort(unique(preds)))
  r <- matrix(NA, ncol = 7, nrow = 1,
              dimnames = list(c(), c('Acc',
                                     paste("P", clss[1], sep = '_'),
                                     paste("R", clss[1], sep = '_'),
                                     paste("F", clss[1], sep = '_'),
                                     paste("P", clss[2], sep = '_'),
                                     paste("R", clss[2], sep = '_'),
                                     paste("F", clss[2], sep = '_'))))
  r[1,1] <- sum(xTab[1,1], xTab[2,2])/sum(xTab)    # Accuracy
  r[1,2] <- xTab[1,1]/sum(xTab[1,])                # Miss Precision (of predicted 0s)
  r[1,3] <- xTab[1,1]/sum(xTab[,1])                # Miss Recall (of actual 0s)
  r[1,4] <- (2*r[1,2]*r[1,3])/sum(r[1,2], r[1,3])  # Miss F
  r[1,5] <- xTab[2,2]/sum(xTab[2,])                # Hit Precision (of predicted 1s)
  r[1,6] <- xTab[2,2]/sum(xTab[,2])                # Hit Recall (of actual 1s)
  r[1,7] <- (2*r[1,5]*r[1,6])/sum(r[1,5], r[1,6])  # Hit F
  r
}

(Note: in my original version the precision and recall formulas used the wrong margin of the table; sum(xTab[1,]) sums over a row of predictions, which is the denominator precision needs, while sum(xTab[,1]) sums over a column of actual values, which recall needs.) For any binary classification task, this returns the precision, recall, and F-statistic for each class, plus the overall accuracy, like so:

> pred <- rbinom(100, 1, .7)
> act  <- rbinom(100, 1, .7)
> predAct <- data.frame(pred, act)
> prf(predAct)
     Acc       P_0     R_0       F_0       P_1       R_1       F_1
[1,] 0.63 0.4074074 0.34375 0.3728814 0.7123288 0.7647059 0.7375887

Calculating P, R, and F for each class like this lets you see whether one or the other is giving you more difficulty, and it is easy to then calculate the overall P, R, and F statistics.

I haven't used the ROCR package, but you could easily derive the same ROC curves by training the classifier over the range of some parameter and calling the function for classifiers at points along the range.
34,806
Calculating precision and recall in R
As Robert correctly put it, accuracy is the way to go. I just want to add that it is possible to calculate it with ROCR. Take a look at help(performance) to select different measures. Note that in ROCR only one decision threshold is used, which is called the cutoff. The following code plots accuracy vs. cutoff and extracts the cutoff for maximum accuracy.

require(ROCR)

# Prepare data for plotting
data(ROCR.simple)
pred <- with(ROCR.simple, prediction(predictions, labels))
perf <- performance(pred, measure = "acc", x.measure = "cutoff")

# Get the cutoff for the best accuracy
bestAccInd <- which.max(perf@"y.values"[[1]])
bestMsg <- paste("best accuracy=", perf@"y.values"[[1]][bestAccInd],
                 " at cutoff=", round(perf@"x.values"[[1]][bestAccInd], 4))

plot(perf, sub = bestMsg)

which results in a plot of accuracy against the cutoff, annotated with the best value. To operate with two thresholds in order to create a middle region of uncertainty (which is a valid way to go if the circumstances / target application allow it), one can create two performance objects with ROCR:

cutoff vs. True Positive Rate (tpr), i.e. the recall of the positive class
cutoff vs. True Negative Rate (tnr), i.e. the recall of the negative class

Select a suitable cutoff from the performance vectors (using the R function which) and combine them to achieve the desired balance. This should be straightforward, hence I leave it as an exercise to the reader.

One last note: what is the difference between accuracy and calculating precision for both classes separately and, e.g., combining them in a (weighted) average? Accuracy is a weighted average in which the weight for class c is the number of instances with class c. This means that under heavy class skew (98% negatives, for example) one can simply "optimize" the accuracy by predicting the label negative for all instances. In such a case a non-weighted, plain average of both class precisions prevents this gaming of the metric.

In the case of balanced classes, both calculation methods of course lead to the same result.
34,807
Understanding Fisher's combined test
The Fisher combination test is intended to combine information from separate tests done on independent data sets in order to obtain power when the individual tests may not have sufficient power. The idea is that if the $k$ null hypotheses are all correct, the $p$-values will be uniformly distributed on $[0,1]$ and independent of each other. This means that $-2 \sum_{i=1}^{k} \log(p_i)$ will be $\chi^2$-distributed with $2k$ degrees of freedom. Rejecting this combined null hypothesis leads to the conclusion that at least one of the null hypotheses is false. That is what you are doing when you apply this procedure.
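The procedure above takes only a few lines of R; the p-values here are invented for illustration:

```r
# Fisher's method: combine k independent p-values into one test.
p <- c(0.10, 0.06, 0.15, 0.08)      # k = 4 p-values, none below 0.05 on its own
stat <- -2 * sum(log(p))            # chi-squared with 2k df under the joint null
p_combined <- pchisq(stat, df = 2 * length(p), lower.tail = FALSE)
p_combined                          # about 0.014: jointly significant at the 5% level
```

This illustrates the power gain: no single test reaches significance, yet the combined test rejects the joint null.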
34,808
Understanding Fisher's combined test
There are several ways of combining $p$-values, and some of them have this cancellation property while some do not. This is partly because the problem is not well specified. There has been an extensive simulation study of many of the most well-known methods. The bottom line is that if you want the property of cancellation you can have it, but you do not have to.
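As a small illustration of the cancellation property, here is a sketch in R comparing two standard combination rules (assuming "cancellation" refers to strong and weak evidence offsetting each other): Stouffer's method sums normal quantiles, so a very small and a very large p-value cancel, while Fisher's method lets the small p-value dominate.

```r
# Two p-values carrying evidence in "opposite" directions: 0.01 and 0.99.
p <- c(0.01, 0.99)

# Stouffer: sum the normal quantiles; qnorm(0.01) and qnorm(0.99) cancel exactly,
# so the combined p-value is 0.5 -- full cancellation.
z <- sum(qnorm(p)) / sqrt(length(p))
p_stouffer <- pnorm(z)

# Fisher: -2 * sum(log(p)); log(0.99) is nearly 0, so the small p-value
# dominates and there is no cancellation.
stat <- -2 * sum(log(p))
p_fisher <- pchisq(stat, df = 2 * length(p), lower.tail = FALSE)
```

Here p_stouffer is exactly 0.5, while p_fisher is roughly 0.056, still driven almost entirely by the single small p-value.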
34,809
Estimating the parameters of a sum of a Gaussian and an $\alpha$-stable random variable
I am going to assume that you have observations from $Y$, because you are specifying distributional assumptions on $X$ and $Z$. Given that $Y=Z-X$, the density of $Y$ can be written as the convolution: $$f_Y(y)=\int_{-\infty}^{\infty}\dfrac{1}{c}f_Z\left(\dfrac{y+x}{c}\right)\dfrac{1}{\sigma}f_X\left(\dfrac{x}{\sigma}\right)dx,$$ where $f_Z$ is the standard $\alpha$-stable density and $f_X$ represents the standard normal density. Note that, using a change of variable, we can rewrite this density as follows: $$f_Y(y)=\int_{-\infty}^{\infty}\dfrac{1}{c}f_Z\left(\dfrac{y+u\sigma}{c}\right)f_X\left(u\right)du = \int_{-\infty}^{\infty}\dfrac{1}{\sigma}f_X\left(\dfrac{cu-y}{\sigma}\right)f_Z\left(u\right)du.$$ It might be difficult (if feasible at all) to obtain this density in closed form, but it can be approximated by simulating $(x_1,...,x_N)$ from a standard normal distribution and calculating the average $$f_Y(y;c,\sigma)\approx \dfrac{1}{N}\sum_{j=1}^N \dfrac{1}{c}f_Z\left(\dfrac{y+x_j \sigma}{c}\right),$$ or by simulating $(z_1,...,z_N)$ from a standard $\alpha$-stable distribution and calculating the average $$f_Y(y;c,\sigma)\approx \dfrac{1}{N}\sum_{j=1}^N \dfrac{1}{\sigma}f_X\left(\dfrac{c z_j -y}{\sigma}\right).$$ Note that the first approximation implies an easy simulation but a difficult evaluation of the density $f_Z$, while the second implies a difficult simulation but an easy evaluation of $f_X$. I also guess that the number of simulations needed for a good approximation is smaller for the first one. Either way, this may require intensive computation. Now, if you have a sample $(y_1,...,y_n)$, you can write the likelihood of $(c,\sigma)$ as $${\mathcal L}(c,\sigma)\propto \prod_{j=1}^n f_Y(y_j;c,\sigma).$$ Using this, you can approximate the Maximum Likelihood Estimators (MLE) of $(c,\sigma)$ by maximising the corresponding likelihood using the approximations described above.
Toy example in R

In this example, consider $(c,\sigma)=(0.5,1)$, $n=100$ and $N=1000$.

rm(list = ls())
library(stabledist)

# Values of the theoretical parameters
alpha0 <- 1.75
sigma0 <- 1
c0 <- 0.5

set.seed(2)

# Simulated sample
y0 <- rnorm(100) - rstable(n = 100, alpha = alpha0, beta = 0,
                           gamma = c0, delta = 0, pm = 0)

# A histogram of the sample
hist(y0)

# Second approximation of the density of Y
fy <- function(y, c, sigma, ns){
  z <- rstable(n = ns, alpha = alpha0, beta = 0, gamma = 1, delta = 0, pm = 0)
  mean(dnorm(c*z - y, mean = 0, sd = sigma))
}

# minus log-likelihood
ll <- function(par){
  temp <- rep(0, length(y0))
  if(par[1] > 0 & par[2] > 0){
    for(j in 1:length(y0)) temp[j] <- fy(y0[j], par[1], par[2], 1000)
    return(-sum(log(temp)))
  } else return(Inf)
}

# optimisation
optim(c(0.5, 1), ll, control = list(maxit = 500))

(Note that in fy() the mean and sd arguments must go inside the dnorm() call; in my first version they were accidentally passed to mean() instead.) The estimators I got are $(\hat c,\hat\sigma)=(0.657,0.942)$, reasonably close to the theoretical values. You can easily play with this code to obtain the corresponding estimators for your sample. I hope this helps.
Estimating the parameters of a sum of a Gaussian and an $\alpha$-stable random variable
34,810
Estimating the parameters of a sum of a Gaussian and an $\alpha$-stable random variable
A much slower (at least in my implementation) alternative to Procrastinator's answer is the brute-force method: form the likelihood as the convolution of a Gaussian and $\alpha$-stable distribution, calculated by numerical integration, then maximize it.

library(stabledist)
library(MASS)
library(stats4)

# True parameter values
alpha0 <- 1.75
sigma0 <- 1
c0 <- 0.5

set.seed(2)

# Simulated sample
Z <- rnorm(100) + rstable(n=100, alpha=alpha0, beta=0, gamma=c0)

# -log likelihood
# (uses log of scale parameters as input, so range is (-Inf, Inf))
ll = function(lsigma, lsc) {
  fconv <- function(x, zi, sigma, sc) {
    dnorm(x, 0, sigma) * dstable(zi - x, alpha=alpha0, beta=0, gamma=sc)
  }
  sigma <- exp(lsigma)
  sc <- exp(lsc)
  f <- 0
  for (zi in Z) {
    f <- f + log(integrate(fconv, lower=-5*sigma, upper=5*sigma,
                           zi=zi, sigma=sigma, sc=sc)$value)
  }
  -f
}

# optimisation
# Note: reltol should probably be set larger than the accuracy of integrate
# or you may have convergence problems
foo <- mle(ll, start=list(lsigma=0, lsc=log(0.5)),
           control=list(reltol=4*(.Machine$double.eps^0.25)))
summary(foo)

... blah blah blah ...

Coefficients:
         Estimate Std. Error
lsigma  0.1703237  0.9844097
lsc    -0.6680191  2.0703820

-2 log L: 361.0518

> exp(foo@coef)
   lsigma       lsc
1.1856886 0.5127232

Runtime, however, is an issue; on my reasonably fast computer, this took about 25 minutes to run. Larger samples, or starting well away from the MLE, would no doubt take longer. You clearly wouldn't want to form confidence intervals by bootstrapping the procedure...
34,811
Mixture Models and Dirichlet Process Mixtures (beginner lectures or papers)
This is a gentle tutorial: http://www.cs.cmu.edu/~kbe/dp_tutorial.pdf I like the Wikipedia entry as well, and the links there are also very good: http://en.wikipedia.org/wiki/Dirichlet_process Here is a summer school lecture by one of the most active researchers in the area: http://videolectures.net/mlss07_teh_dp/
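The tutorials linked above all cover the stick-breaking construction of the Dirichlet process. As a minimal sketch of just that construction (hypothetical helper, concentration parameter `alpha`), the first k mixture weights take only a few lines:

```python
import random

random.seed(0)

def stick_breaking(alpha, k):
    """First k DP mixture weights via stick-breaking:
    beta_i ~ Beta(1, alpha), w_i = beta_i * prod_{j<i} (1 - beta_j)."""
    weights, remaining = [], 1.0
    for _ in range(k):
        b = random.betavariate(1, alpha)
        weights.append(b * remaining)  # break off a fraction of the stick
        remaining *= 1 - b             # what is left of the stick
    return weights

w = stick_breaking(alpha=2.0, k=50)
print(sum(w) < 1, sum(w) > 0.99)  # True True: weights sum to (almost) 1
```

Smaller `alpha` concentrates the mass on the first few sticks; larger `alpha` spreads it over many small weights.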
34,812
Mixture Models and Dirichlet Process Mixtures (beginner lectures or papers)
Another possibility is Introduction to the Dirichlet Distribution and Related Processes, but I'm afraid I haven't read it yet; however, it is down for our next reading group!
34,813
Mixture Models and Dirichlet Process Mixtures (beginner lectures or papers)
Here's a rather good introductory video lecture by Tom Griffiths, also given at the Machine Learning Summer School.
34,814
Outliers spotting in time series analysis, should I pre-process data or not?
The smooth trend should cope with economic effects without any trouble. Using robust=TRUE in stl makes sense here (and I've changed my original function to do the same). Unless you have more than ten years of data, I would stick with periodic seasonality. It is unlikely to change fast enough to detect with shorter time series. Pre-processing the data for working days makes sense as it removes known causes of variability. I suggest you try the stl approach and look at where it gives very different results from your existing method. Then look at those cases and see which method is giving the most sensible results. I would not go the ARIMA route as it is nowhere near as robust as stl.
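stl itself is an R routine; to make the decompose-then-flag idea concrete, here is a rough pure-Python sketch of the same logic on synthetic monthly data with one injected outlier. Monthly medians and a moving-average trend stand in for stl's loess fits, so this is a simplification of the idea, not what stl actually computes:

```python
import math
import random
import statistics

random.seed(0)

# Synthetic monthly series: linear trend + seasonality + noise, plus one spike.
n = 60
series = [10 + 0.1 * t + 3 * math.sin(2 * math.pi * t / 12)
          + random.gauss(0, 0.5) for t in range(n)]
series[30] += 15  # injected outlier

# Trend: centred 13-point moving average (the window shrinks at the edges).
def trend_at(t):
    lo, hi = max(0, t - 6), min(n, t + 7)
    return statistics.mean(series[lo:hi])

detrended = [x - trend_at(t) for t, x in enumerate(series)]

# Seasonality: median of each calendar month's detrended values.
seasonal = [statistics.median(detrended[m::12]) for m in range(12)]
resid = [d - seasonal[t % 12] for t, d in enumerate(detrended)]

# Robust rule: flag interior residuals far beyond the MAD of all residuals.
med = statistics.median(resid)
mad = statistics.median(abs(r - med) for r in resid)
outliers = [t for t in range(6, n - 6) if abs(resid[t] - med) > 7 * mad]
print(outliers)  # includes the injected spike at t = 30
```

Using medians and the MAD keeps the seasonal estimate and the flagging threshold from being dragged around by the outlier itself, which is the same motivation as robust=TRUE in stl.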
34,815
Outliers spotting in time series analysis, should I pre-process data or not?
Ok now, let's try this for comparison. What if I remove calendar effects by dividing the original time series by the number of actual working days in each month and then multiply the results by 21? The original time series is in black, and the calendar-adjusted one is in red: The first thing that popped into my mind is: hey, are these data really seasonal? August might be, but what about November/December? It seems to me that working-day adjustment cancels out most, if not all, of the seasonality for the winter months. How do you guys see it? On top of that: I still notice the pulse in Nov'05 and Jan'09, I'm not really sure about May'06, and it seems to me like Jan'08 might have been more a matter of working days than an actual pulse. Also, I can totally see the level shift in Feb'09, but what about the one in Dec'06? Isn't it more like a side-effect of the Nov'06 pulse (is Nov'06 even a pulse considering calendar-adjusted data)? The series went up so high that, when it came back down, it seemed like a shift in level. Does the pulse-adjusted data still generate a level-shift warning in Dec'06? Again, the idea here is to try and see if pre-processing of data might actually improve correct outlier identification. I think a side-by-side test like this might help. IrishStats (or anyone else) care to accept the challenge? :-)
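The adjustment described above is just a rescaling to a standard 21-working-day month. A minimal sketch, assuming hypothetical monthly values and working-day counts:

```python
# Hypothetical monthly totals and the number of working days in each month.
values = [1050, 980, 1120, 900]
working_days = [21, 20, 23, 18]

# Normalise every month to a standard 21-working-day month:
# value per working day, times 21.
adjusted = [v / wd * 21 for v, wd in zip(values, working_days)]
print([round(a, 1) for a in adjusted])  # [1050.0, 1029.0, 1022.6, 1050.0]
```

Months with fewer working days are scaled up and months with more are scaled down, which is exactly what flattens the apparent winter seasonality in the red series.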
34,816
Outliers spotting in time series analysis, should I pre-process data or not?
The problem/opportunity is to identify the underlying ARIMA or seasonal-dummy model and augment it as needed. This particular series evidences strong/dominant deterministic seasonal dummies as compared to a seasonal ARIMA structure. We then identify both unusual values (be they pulses, seasonal pulses, level shifts and/or local time trends) AND the autoregressive structure needed to generate "noise". Two level shifts were identified, on or around time period 50 (2009/February) and period 24 (2006/December). The data suggested a model whose equation, the list of unusual values (i.e. the pulses), a very illuminating graphic of the cleansed vs the actual series, the fit/actual/forecast graph (a good but busy summary), the forecast graph, and the final model statistics were all shown in the accompanying images. The residuals from the model are reasonably random with no remaining autocorrelative structure. Hope this little example helps all! I am one of the developers of the software I used here. There are other commercially available products that will deliver something similar.
34,817
25th and 75th percentile according to wolfram alpha
There are at least 9 different definitions of empirical quantiles; see Wikipedia or the R manual (i.e., ?quantile). R computes the 25th percentile of your data as 37 if you specify type=6 (like Minitab and SPSS) and 39.5 if you specify type=5 (piecewise linear function):

> quantile(x=c(32,42,46,46,54), probs=0.25, type=5)
 25%
39.5
> quantile(x=c(32,42,46,46,54), probs=0.25, type=6)
25%
 37
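The type-6 value can be cross-checked with Python's standard library, whose statistics.quantiles with method="exclusive" implements the same definition as R's type 6:

```python
from statistics import quantiles

data = [32, 42, 46, 46, 54]

# method="exclusive" corresponds to R's type 6 (the Minitab/SPSS definition):
# the h-th order statistic at h = (n + 1) * p, interpolated linearly.
q1, q2, q3 = quantiles(data, n=4, method="exclusive")
print(q1, q3)  # 37.0 50.0
```

So the 25th and 75th percentiles under that definition are 37 and 50, matching R's type=6 output above.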
34,818
Given my networks of friends - can I detect my most "Central" friends?
Simplest solution would be to count the number of friends each of your friends has in common with you, invert that, and use it as a measure of centrality. Anything further than this is bound to require additional assumptions: you can always sum the distances (along the shortest friend path, excluding you) between all your friends (and the one with the smallest "total distance" would then be "most central"). But then you have to decide what the "weight" of the longer distances is: say you are considering the distances from A and B to persons C, D and E, and these are respectively 3,1,2 and 4,1,1: do you consider the total distances the same? Also, if you want to avoid totally disconnected people (which make the sum awkward, because you have to specify a hard number for "disconnected" people's distances), you will probably have to allow for connections outside your own friends circle (e.g.: you are friends with 100 people who know me, but you're not befriended with me, and the 100 people aren't necessarily friends with each other either). But even then you may have disconnected nodes in your "friend graph". Finally, you may also have to weigh the connections themselves: perhaps the date the friendship was established, perhaps the number of messages that have been posted on either's wall (which could even make the "distance" nonsymmetric), the person who initiated the friendship (sent the "request"), or details specified about the relationship (family relations etc.) could matter for your "distance of interest". All in all: you will have to specify what your goals are, and adapt your distance measure to it. There's bound to be quite a bit of literature about distances in graphs, but all of it will require figuring out which distance you are interested in.
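The simplest suggestion above (scoring each friend by mutual-friend count) is a one-liner once the ego network is in hand. A hypothetical sketch, assuming the network is stored as an adjacency dict of your friends only:

```python
# Hypothetical ego network: keys are your friends, values are the sets of
# *your other friends* that each one is also connected to.
network = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "cat", "dan"},
    "cat": {"ann", "bob"},
    "dan": {"bob"},
}

# Mutual-friend count: how many of your friends each friend knows.
mutual = {name: len(friends) for name, friends in network.items()}

# Rank friends by that simple centrality score, highest first.
ranking = sorted(mutual, key=mutual.get, reverse=True)
print(ranking)  # ['bob', 'ann', 'cat', 'dan']
```

Taking the reciprocal of these counts gives the "inverted" version usable as a distance, as suggested above.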
34,819
Given my networks of friends - can I detect my most "Central" friends?
There are many, many, many ways to define your "most central" friends. These are called centrality measures. Probably the three most common are these, with somewhat plain-English explanations. Degree: how many friends does Friend A have? Closeness: how many steps does one have to go through to get from Friend A to any other friend on your network? Betweenness: how many paths on your network between friends in your network pass through Friend A? Sometimes these are very similar, and highlight the same "important" people. Sometimes, they give interesting results where even someone without a huge number of connections is a "friend of a friend" of nearly everyone, or connects two disparate groups. And these are just a few - there are, as I said, a ton of different ways to look at centrality, with lots of twists. Lots of software will let you look at these measures. My personal favorite, if you know Python, is NetworkX. NodeXL works for Excel, sna is one of the R packages that handles it, etc. In terms of the information you need... obviously, you need the network itself. One thing you are assuming is that the network you collect (in this case, Facebook) adequately represents the actual friendship network you're asking about. So, for example, that people don't have hangers-on in their friends list they haven't bothered to delete, that there's no one important to your network who has opted out of social networking, or, in the case of Facebook, that there are no spoof accounts.
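The first two measures are easy to compute by hand on a hypothetical toy graph; this pure-Python sketch does degree directly and closeness via breadth-first-search distances (betweenness is more involved, and NetworkX provides all three out of the box):

```python
from collections import deque

# Hypothetical undirected friendship graph as an adjacency dict.
graph = {
    "a": {"b", "c"},
    "b": {"a", "c", "d"},
    "c": {"a", "b"},
    "d": {"b", "e"},
    "e": {"d"},
}

def bfs_distances(graph, source):
    """Shortest-path length from source to every reachable node."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nbr in graph[node]:
            if nbr not in dist:
                dist[nbr] = dist[node] + 1
                queue.append(nbr)
    return dist

# Degree: number of direct friends.
degree = {n: len(nbrs) for n, nbrs in graph.items()}

# Closeness: (number of other nodes) / (sum of distances to them).
closeness = {
    n: (len(graph) - 1)
       / sum(d for m, d in bfs_distances(graph, n).items() if m != n)
    for n in graph
}

print(max(degree, key=degree.get))     # 'b' has the most direct friends
print(max(closeness, key=closeness.get))  # 'b' is also closest to everyone
```

On this graph the two measures agree, but it is easy to build graphs where they highlight different people, which is exactly the point made above.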
34,820
Given my networks of friends - can I detect my most "Central" friends?
Take a look at NodeXL (a simple, but powerful, Excel extension for network analysis) and the book Analyzing Social Media Networks with NodeXL: Insights from a Connected World. Even if you will use some other software, the book definitely discusses the various measures of centrality and their uses very well. I don't have it in front of me, but I seem to remember it addresses the prioritization question in the context of marketing.
34,821
What is the relationship between a p-value and a confidence interval?
The p-value relates to a test against the null hypothesis, usually that the parameter value is zero (no relationship). The wider the confidence interval on a parameter estimate is, the closer one of its extreme points will be to zero, and a p-value of 0.05 means that the 95% confidence interval just touches zero. In fact for a p-value $p$ of a parameter estimate, the $(1-p)$ level confidence interval just touches zero.
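For a Wald-type z-test, this relationship can be checked numerically. A minimal sketch, assuming a hypothetical estimate and standard error and using only Python's standard library:

```python
from statistics import NormalDist

norm = NormalDist()

# Hypothetical parameter estimate and standard error.
beta, se = 0.42, 0.20

# Two-sided p-value of a z-test against H0: parameter = 0.
p = 2 * (1 - norm.cdf(abs(beta) / se))

# The (1 - p)-level confidence interval: beta +/- z_{1-p/2} * se.
z = norm.inv_cdf(1 - p / 2)
lower, upper = beta - z * se, beta + z * se

print(abs(lower) < 1e-6)  # True: the interval's endpoint just touches zero
```

The endpoint lands on zero because 1 - p/2 equals the normal CDF evaluated at beta/se, so the critical value z works out to exactly beta/se.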
34,822
What is the relationship between a p-value and a confidence interval?
A $(1−\alpha)$ level confidence interval is exactly the range of values that would not be rejected using an $\alpha$ level test, assuming the same general theory generated the confidence interval and the hypothesis test decision. Therefore, they are exactly equivalent in terms of the decision about whether or not to reject the null hypothesis.
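This duality is easy to verify numerically for a z-based interval: values just inside the 95% CI survive the 5% test, and values just outside it do not. A hypothetical sketch:

```python
from statistics import NormalDist

norm = NormalDist()
z95 = norm.inv_cdf(0.975)  # two-sided 5% critical value, about 1.96

# Hypothetical estimate and standard error, with the 95% confidence interval.
beta, se = 0.42, 0.20
ci = (beta - z95 * se, beta + z95 * se)

def rejected(theta0, alpha=0.05):
    """Two-sided z-test of H0: parameter = theta0."""
    p = 2 * (1 - norm.cdf(abs(beta - theta0) / se))
    return p < alpha

inside = ci[0] + 1e-6   # a value just inside the interval
outside = ci[0] - 1e-6  # a value just outside it
print(rejected(inside), rejected(outside))  # False True
```

Sweeping theta0 across the whole real line, the set of non-rejected values is exactly the interval ci, which is the equivalence stated above.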
34,823
Proxy variables versus instrumental variables
An instrumental variable is used to help estimate a causal effect (or to alleviate measurement error). The instrumental variable must affect the independent variable of interest, and only affect the dependent variable through the independent variable of interest. The second part (only affecting the dependent variable through the independent variable) is called an exclusion restriction. A proxy variable is a variable you use because you think it is correlated with the variable you are really interested in, but have no (or poor) measurement of.
34,824
Proxy variables versus instrumental variables
One way to think about what an instrumental variable is doing is to say you are first regressing X on the instrument Z. What you then have are the predicted values for X - say, X*. So intuitively this is sort of the part of X that you get from Z. Then you take Y and regress it on those X* (and correct the standard errors). This is different from deciding to use Z as a proxy directly and regressing Y on Z. Intuitively, you then have all of Z in the regression instead of Z's relationship to X.
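The two-stage recipe is easy to see in a simulation. Here is a hedged numpy sketch (Python, though the answers on this site usually use R) with entirely made-up data: Z is a valid instrument, U is an unobserved confounder, and the true causal effect of X on Y is 2. Real 2SLS software would also correct the second-stage standard errors, which this sketch skips:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

Z = rng.normal(size=n)                       # instrument
U = rng.normal(size=n)                       # unobserved confounder
X = 0.5 * Z + U + rng.normal(size=n)
Y = 2.0 * X + 3.0 * U + rng.normal(size=n)   # true effect of X is 2

def ols(y, x):
    """Fitted values and slope from regressing y on x (with intercept)."""
    design = np.column_stack([np.ones_like(x), x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return design @ beta, beta[1]

_, naive = ols(Y, X)       # biased: U drives both X and Y
x_star, _ = ols(X, Z)      # first stage: the part of X you get from Z
_, iv = ols(Y, x_star)     # second stage: regress Y on X*
_, proxy = ols(Y, Z)       # using Z directly as a proxy: a different quantity

print(naive, iv, proxy)
```

The IV slope recovers the causal effect (about 2), while the naive slope is biased by U and the proxy regression estimates yet another quantity, Cov(Y,Z)/Var(Z).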
34,825
Cox model with LASSO
Here are two suggestions. First, you can take a look at the glmnet package, from Friedman, Hastie and Tibshirani, but see their JSS 2010 (33) paper, Regularization Paths for Generalized Linear Models via Coordinate Descent. Second, although I've never used this kind of penalized model, I know that the penalized package implements L1/L2 penalties on GLM and the Cox model. What I found interesting in this package (this was with ordinary regression) was that you can include a set of unpenalized variables in the model. The associated publication is now: Goeman J.J. (2010). L-1 Penalized Estimation in the Cox Proportional Hazards Model. Biometrical Journal 52 (1) 70-84.
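For intuition about what these packages optimize: the lasso-penalized Cox objective is the negative partial log-likelihood plus an L1 term on the coefficients. A minimal numpy sketch (assuming no tied event times, with invented data; the packages above supply the actual optimizer and cross-validation machinery, which this omits):

```python
import numpy as np

def penalized_cox_objective(beta, X, time, event, lam):
    """Negative Cox partial log-likelihood (no ties) + lam * ||beta||_1."""
    eta = X @ beta
    nll = 0.0
    for i in np.flatnonzero(event):
        at_risk = time >= time[i]              # risk set at this event time
        nll += np.log(np.exp(eta[at_risk]).sum()) - eta[i]
    return nll + lam * np.abs(beta).sum()

# Tiny invented data set: 4 subjects, 2 covariates, events at t = 2, 5, 7
X = np.array([[1.0, 0.0], [0.5, 1.0], [0.0, 0.5], [1.0, 1.0]])
time = np.array([2.0, 3.0, 5.0, 7.0])
event = np.array([1, 0, 1, 1])

obj0 = penalized_cox_objective(np.zeros(2), X, time, event, lam=0.5)
print(obj0)  # at beta = 0: sum of log risk-set sizes, log 4 + log 2 + log 1
```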
34,826
Cox model with LASSO
Many years after the question was posed, of course, but it seems that there is a Coxnet R package (since 2015) https://cran.r-project.org/web/packages/Coxnet/Coxnet.pdf, which I plan to try out for a penalized Cox model on proteomics data.
34,827
Trend or no trend?
The answer to your first question is no. If the null hypothesis of a unit root is rejected, the alternative in its most general form is a stationary series with a time trend. Here is an example:

> rr <- 1+0.01*(1:100)+rnorm(100)
> plot(rr)
> adf.test(rr)

        Augmented Dickey-Fuller Test

data:  rr
Dickey-Fuller = -4.1521, Lag order = 4, p-value = 0.01
alternative hypothesis: stationary

Warning message:
In adf.test(rr) : p-value smaller than printed p-value

So your findings are consistent with the ADF test: there is no unit root, but there is a time trend.
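To see the same distinction without an ADF test, compare what linear detrending does to a trend-stationary series versus a true unit-root process. A hedged Python/numpy sketch with simulated series (the 0.8/0.3 thresholds in the checks are illustrative, not formal tests):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 500
t = np.arange(1, n + 1)

trend_stationary = 1 + 0.01 * t + rng.normal(size=n)  # same form as rr
random_walk = np.cumsum(rng.normal(size=n))           # a true unit root

def detrend(y):
    """Residuals after removing an OLS-fitted linear time trend."""
    coef = np.polyfit(t, y, 1)
    return y - np.polyval(coef, t)

def acf1(r):
    """Lag-1 sample autocorrelation."""
    return np.corrcoef(r[:-1], r[1:])[0, 1]

# Detrending leaves the first series white-noise-like, while the random
# walk stays highly persistent no matter what trend you subtract.
print(acf1(detrend(trend_stationary)), acf1(detrend(random_walk)))
```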
34,828
Trend or no trend?
Larry Bretthorst's extended PhD thesis will greatly help you, I think. You should take the discrete Fourier transform of the data. This will give you a look at your series in the frequency domain; trend is represented by low frequency. It's the ultimate modeling book: about 200 pages, but well worth it, and it includes computer code to implement the methods.
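A small numpy illustration of "trend lives at low frequency" (invented series: a linear ramp plus a 20-cycle oscillation; after the DFT, the trend's power piles up in the lowest bins while the oscillation shows up at its own frequency bin):

```python
import numpy as np

n = 256
t = np.arange(n)
series = 0.05 * t + np.sin(2 * np.pi * 20 * t / n)  # trend + oscillation

spectrum = np.fft.rfft(series - series.mean())
power = np.abs(spectrum) ** 2

low = power[1]     # lowest non-zero frequency: dominated by the trend
spike = power[20]  # the oscillation's frequency bin
print(int(np.argmax(power)), low > spike)
```

The trend dominates the spectrum at bin 1, while the oscillation still stands out locally at bin 20 against the trend's leakage.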
34,829
Trend or no trend?
The ADF test has low power and, as Dmitrij Celov mentioned, you should probably also check the results of the PP and KPSS tests. If you find that your results are on the margin of detecting a unit root, it's possible your series is fractionally integrated. I would also check ACF and PACF plots of the series, looking for slow decay patterns. Generally, if you find that the ADF and Phillips-Perron tests reject the null of a unit root, but the KPSS test and ACF/PACF plots demonstrate statistically significant persistence through several lags, this may be strong evidence for fractional integration.
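As a rough illustration of the "slow decay" pattern to look for (numpy sketch with simulated data: a persistent AR(1) versus white noise; actual fractional-integration diagnostics would go further, e.g. with a GPH-type estimator):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000

def sample_acf(x, lag):
    """Sample autocorrelation at a given lag."""
    x = x - x.mean()
    return float(x[:-lag] @ x[lag:] / (x @ x))

persistent = np.zeros(n)            # AR(1) with phi = 0.95
for i in range(1, n):
    persistent[i] = 0.95 * persistent[i - 1] + rng.normal()
noise = rng.normal(size=n)

# The persistent series decays slowly across many lags; white noise has
# autocorrelations near zero at every lag.
print([round(sample_acf(persistent, k), 2) for k in (1, 5, 10, 20)])
print([round(sample_acf(noise, k), 2) for k in (1, 5, 10, 20)])
```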
34,830
Dealing with "trouble maker" samples
I think this will require domain expertise. If I were you, I would spend time examining these samples and their provenance, to figure out what (if anything) is wrong with them. If the samples were collected by a colleague working in some application domain, they may be able to help you with this.

Sometimes, samples can indeed be 'bad'. For example, they could be mislabelled, collected under different circumstances from the rest, collected using off-calibration equipment, or there may be many other reasons why they're outliers. However, you shouldn't just say "these are probably bad" and delete them; it's much better to identify what's wrong with them so that you can verify that they're bad and justify their deletion.

One reason for caution is that they might not actually be bad, just drawn from a part of your sample space that's not well represented in the data. In that case, you shouldn't toss them out; you should (if possible) collect more like them. Another reason is that the samples at the extremities of a concept are the ones that may be hardest to classify correctly, but if they are not actually bad and you remove them, you just end up with new samples at the extremities. To take an artificial example, suppose you are classifying samples as Hot/NotHot, and everything above 50 degrees should be hot. Samples at 49.9 degrees and 50.1 degrees are quite similar even though they're on different sides of your decision boundary, so they're just hard to classify, and they're not outliers that should be tossed. Also, if you remove them, you may find that two new samples (49.8 and 50.2 degrees) that were previously being classified correctly are now getting misclassified.

One final point: when you say that samples in the training set are generally being misclassified, do you mean under a cross-validation scheme, or literally that they are misclassified when you test on the training data? If the latter, it could be that the classification methods you are using are not able to capture the data variance sufficiently well.

Hope this helps a little ...
Dealing with "trouble maker" samples
I think this will require domain expertise. If I were you, I would spend time examining these samples and their provenance, to figure out what (if anything) is wrong with them. If the samples were col
Dealing with "trouble maker" samples I think this will require domain expertise. If I were you, I would spend time examining these samples and their provenance, to figure out what (if anything) is wrong with them. If the samples were collected by a colleague working in some application domain, they may be able to help you with this. Sometimes, samples can indeed be 'bad'. For example, they could be mislabelled, collected under different circumstances from the rest, collected using off-calibration equipment, or there may be many other reasons why they're outliers. However, you shouldn't just say "these are probably bad" and delete them; much better to identify what's wrong with them so that you can verify that they're bad and justify their deletion. One reason for caution is that they might not actually be bad, just drawn from part of your sample space that's not well represented in the data. In that case, you shouldn't toss them out, you should (if possible) collect more like them. Another reason is that the samples at the extremities of a concept are the ones that may be hardest to classify correctly, but if they are not actually bad and you remove them, you just end up with new samples at the extremities. To take an artificial example, suppose you are classifying samples as Hot/NotHot, and everything above 50 degrees should be hot. Samples at 49.9 degrees and 50.1 degrees are quite similar even though they're different sides of your decision boundary, so they're just hard to classify and they're not outliers that should be tossed. Also, if you remove them, you may find that two new samples (49.8 and 50.2 degrees) that were previously being classified correctly are now getting misclassified. One final point: when you say that samples in the training set are generally being misclassified, do you mean under a cross-validation scheme or literally that when you test on the training data that they are misclassified? 
If the latter, it could be that the classification methods you are using are not able to capture the data variance sufficiently well. Hope this helps a little ...
Dealing with "trouble maker" samples I think this will require domain expertise. If I were you, I would spend time examining these samples and their provenance, to figure out what (if anything) is wrong with them. If the samples were col
34,831
Dealing with "trouble maker" samples
I think you are suffering from the presence of outliers in your design matrix. The remedy is to detect them using a multivariate robust estimator of location/scale (just as you can use the median to detect outliers in a univariate setting, but you can't use the mean, because the mean itself is sensitive to the presence of outliers). High-quality estimators are already present in the R base tool set (through MASS). I advise you to read the following (non-technical) summary introduction to multivariate robust methods:

P. J. Rousseeuw and K. van Driessen (1999). A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212-223.

There are many good implementations in R; one I recommend particularly is covMcd() in package robustbase (better than the MASS implementation because it includes the small-sample correction factor). A typical use would be:

x <- mydata   # your 300 by 40 matrix of design variables
out <- covMcd(x)
ind.out <- which(out$mcd.wt == 0)

Now, ind.out contains the indexes of the observations flagged as outliers. You should exclude them from your sample and re-run your classification procedure on the 'decontaminated' sample. I think it will stabilize your results and solve your problem. Let us know :)

EDIT: As pointed out by Chl (in the comments, below), it could be advisable, in your case, to supplement the hard rejection rule used in the code above with a graphical method (an implementation of which can be found in the R package mvoutlier). This is wholly consistent with the approach proposed in my answer; in fact it is well explained (and illustrated) in the paper I cite above. Therefore, I will just point out two arguments in its favor that may be particularly relevant to your case (assuming that you indeed have an outlier problem and that these can be found by the MCD):

- It provides a visually strong illustration of the problem with outliers, as each observation is associated with a measure of its influence on the resulting estimates (observations with outsized influence then stand out).
- The approach I proposed applies a strong rejection rule: in a nutshell, any observation whose influence over the final estimates is larger than some threshold is considered an outlier. The graphical approach might help you save some observations, by trying to recover those whose influence over the estimator is beyond the threshold, but only by a small amount. This is important in the context of your model, because 300 observations in a 40-dimensional space is rather sparse already.
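If you want to see the idea behind covMcd() outside of R, here is a deliberately crude Python/numpy sketch of the concentration step used by FAST-MCD: repeatedly re-estimate location and scatter from the h least-outlying points, then flag observations with large robust distances. The data are invented, and the real algorithm adds many random restarts and a consistency correction, so treat this as illustration only:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented data: 60 clean 3-D points plus 5 gross outliers (rows 60-64)
clean = rng.normal(size=(60, 3))
outliers = rng.normal(loc=8.0, size=(5, 3))
x = np.vstack([clean, outliers])

def mahalanobis_sq(x, center, cov):
    """Squared Mahalanobis distance of each row of x."""
    d = x - center
    return np.einsum('ij,jk,ik->i', d, np.linalg.inv(cov), d)

h = int(0.75 * len(x))            # size of the "clean" subset to keep
subset = np.arange(len(x))        # start from the full sample
for _ in range(10):               # concentration steps
    center = x[subset].mean(axis=0)
    cov = np.cov(x[subset], rowvar=False)
    subset = np.argsort(mahalanobis_sq(x, center, cov))[:h]

d2 = mahalanobis_sq(x, center, cov)
flagged = np.flatnonzero(d2 > 16.27)   # ~ chi-square(3) 0.999 cutoff
print(flagged)
```

The planted outliers end up with enormous robust distances because the final location/scatter estimates are computed from the clean core only.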
Dealing with "trouble maker" samples
I think you are suffering from the presence of outliers in your design matrix. The remedy is to detect them using a multivariate robust estimator of location/scale (just as you can use the median to
Dealing with "trouble maker" samples I think you are suffering from the presence of outliers in your design matrix. The remedy is to detect them using a multivariate robust estimator of location/scale (just as you can use the median to detect outliers in an univariate setting but you can't use the mean because the mean itself is sensitive to the presence of outliers). High quality estimators are already present in the R-base tool (through MASS). I advise you to read the following (non technical) summary introduction to multivariate robust method: P. J. Rousseeuw and K. van Driessen (1999) A fast algorithm for the minimum covariance determinant estimator. Technometrics 41, 212-223. There are many good implementation in R, one i recommend particularly is covMcd() in package robustbase (better than the MASS implementation because it includes the small sample correction factor). A typical use would be: x<-mydata #your 300 by 40 matrix of **design variables** out<-covMcd(x) ind.out<-which(out$mcd.wt==0) Now, ind.out contains the indexes of the observations flagged as outliers. You should exclude them from your sample and re-run your classification procedure on the 'decontaminated' sample. I think it will stabilize your results, solve your problem. Let us know :) EDIT: As pointed out by Chl (in the comments, below). It could be advisable, in your case, to supplement the hard rejection rule used in the code above by a graphical method (an implementation of which can be found in the R package mvoutlier). This is wholly consistent with the approach proposed in my answer, in fact it is well explained (and illustrated) in the paper i cite above. 
Therefore, i will just point out two arguments in its favor that may be particularly relevant to your case (assuming that you indeed have an outlier problem and that these can be found by the mcd): Provides a visually strong illustration of the problem with outliers as each observations is associated with a measure of its influence on the resulting estimates (observations with outsized influence then stand out). The approach i proposed applies a strong rejection rule: in a nutshell, any observation whose influence over the final estimates is larger than some threshold is considered an outlier. The graphical approach might help you save some observation, by trying to recover those observations whose influence over the estimator is beyond the threshold but only by a small amount. It is important in the context of your model because 300 observations in a 40 dimensional space is rather sparse already.
Dealing with "trouble maker" samples I think you are suffering from the presence of outliers in your design matrix. The remedy is to detect them using a multivariate robust estimator of location/scale (just as you can use the median to
34,832
Dealing with "trouble maker" samples
Addressing the issue mentioned under Update 2. You are dealing with outliers. Those outliers have a significant impact on your Logistic Regression coefficients. By removing them, you found that your models performed better on the validation set. Does it mean that the outliers are "bad"? No. It means that they are influential. There are several measures of statistical distances to confirm how far away and influential such outliers are. Those include Cook's D and DFFITS. Having identified the trouble makers, you are struggling with whether to keep them in or not. Ultimately, this may be a qualitative judgment rather than a statistical question. Here are a couple of investigative questions that may be helpful in making this qualitative decision: 1) First, are the outliers truly bad due to poor measurements? 2) Is it more important for your models to be correct in the tails where outliers reside or be more accurate in the vast majority of the cases?
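Cook's distance is easy to compute by hand for an ordinary least-squares fit (the logistic-regression version uses the same idea with the weighted analogues of these quantities). A numpy sketch with invented data and one planted high-influence point:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p = 50, 2                      # n observations, p = intercept + slope

x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)
x[-1], y[-1] = 4.0, -10.0         # plant an influential point: high
                                  # leverage and far off the line

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
h = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)   # leverages (hat values)
s2 = resid @ resid / (n - p)                    # residual variance

# Cook's distance: each observation's influence on the fitted coefficients
cooks_d = resid**2 * h / (p * s2 * (1 - h) ** 2)
print(int(np.argmax(cooks_d)), round(float(cooks_d.max()), 1))
```

The planted point dominates, far above the conventional "worry" threshold of 1.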
Dealing with "trouble maker" samples
Addressing the issue mentioned under Update 2. You are dealing with outliers. Those outliers have a significant impact on your Logistic Regression coefficients. By removing them, you found that you
Dealing with "trouble maker" samples Addressing the issue mentioned under Update 2. You are dealing with outliers. Those outliers have a significant impact on your Logistic Regression coefficients. By removing them, you found that your models performed better on the validation set. Does it mean that the outliers are "bad"? No. It means that they are influential. There are several measures of statistical distances to confirm how far away and influential such outliers are. Those include Cook's D and DFFITS. Having identified the trouble makers, you are struggling with whether to keep them in or not. Ultimately, this may be a qualitative judgment rather than a statistical question. Here are a couple of investigative questions that may be helpful in making this qualitative decision: 1) First, are the outliers truly bad due to poor measurements? 2) Is it more important for your models to be correct in the tails where outliers reside or be more accurate in the vast majority of the cases?
Dealing with "trouble maker" samples Addressing the issue mentioned under Update 2. You are dealing with outliers. Those outliers have a significant impact on your Logistic Regression coefficients. By removing them, you found that you
34,833
Calculation of Relative Risk Confidence Interval
The three options that are proposed in riskratio() refer to an asymptotic or large-sample approach, an approximation for small samples, and a resampling approach (asymptotic bootstrap, i.e. not based on percentile or bias-corrected intervals). The first is described in Rothman's book (as referenced in the online help), chap. 14, pp. 241-244. The bootstrap approach is relatively trivial so I will skip it. The small-sample approach is just an adjustment to the calculation of the estimated relative risk.

If we consider the following table of counts for subjects cross-classified according to their exposure and disease status,

           Exposed   Non-exposed   Total
Cases         a1          a0         m1
Non-cases     b1          b0         m0
Total         n1          n0          N

the MLE of the risk ratio (RR), $\text{RR}=R_1/R_0$, is $\text{RR}=\frac{a_1/n_1}{a_0/n_0}$. In the large-sample approach, a score statistic (for testing $R_1=R_0$, or equivalently, $\text{RR}=1$) is used, $\chi_S=\frac{a_1-\tilde a_1}{V^{1/2}}$, where the numerator reflects the difference between the observed and expected counts for exposed cases and $V=(m_1n_1m_0n_0)/(N^2(N-1))$ is the variance of $a_1$. Now, that's all for computing the $p$-value, because we know that $\chi_S$ follows a chi-square distribution. In fact, the three $p$-values (mid-$p$, Fisher exact test, and $\chi^2$-test) that are returned by riskratio() are computed in the tab2by2.test() function. For more information on mid-$p$, you can refer to Berry and Armitage (1995). Mid-P confidence intervals: a brief review. The Statistician, 44(4), 417-423.

Now, for computing the $100(1-\alpha)$ CIs, this asymptotic approach yields an approximate SD estimate for $\ln(\text{RR})$ of $(\frac{1}{a_1}-\frac{1}{n_1}+\frac{1}{a_0}-\frac{1}{n_0})^{1/2}$, and the Wald limits are found to be $\exp\big(\ln(\text{RR})\pm Z_c \text{SD}(\ln(\text{RR}))\big)$, where $Z_c$ is the corresponding quantile of the standard normal distribution.
The small-sample approach makes use of an adjusted RR estimator: we just replace the denominator $a_0/n_0$ by $(a_0+1)/(n_0+1)$. As to how to decide whether we should rely on the large- or small-sample approach, it is mainly by checking expected cell frequencies; for $\chi_S$ to be valid, $\tilde a_1$, $m_1-\tilde a_1$, $n_1-\tilde a_1$ and $m_0-n_1+\tilde a_1$ should be $> 5$.

Working through the example of Rothman (p. 243),

sel <- matrix(c(2,9,12,7), 2, 2)
riskratio(sel, rev="row")

which yields

$data
          Outcome
Predictor  Disease1 Disease2 Total
  Exposed2        9        7    16
  Exposed1        2       12    14
  Total          11       19    30

$measure
          risk ratio with 95% C.I.
Predictor  estimate    lower    upper
  Exposed2 1.000000       NA       NA
  Exposed1 1.959184 1.080254 3.553240

$p.value
          two-sided
Predictor  midp.exact fisher.exact chi.square
  Exposed2         NA           NA         NA
  Exposed1 0.02332167   0.02588706 0.01733469

$correction
[1] FALSE

attr(,"method")
[1] "Unconditional MLE & normal approximation (Wald) CI"

By hand, we would get

$\text{RR} = (12/14)/(7/16)=1.96$
$\tilde a_1 = 19\times 14 / 30= 8.87$
$V = (8.87\times 11\times 16)/ \big(30\times (30-1)\big)= 1.79$
$\chi_S = (12-8.87)/\sqrt{1.79}= 2.34$
$\text{SD}(\ln(\text{RR})) = \left( 1/12-1/14+1/7-1/16 \right)^{1/2}=0.304$
$90\% \text{ CIs} = \exp\big(\ln(1.96)\pm 1.645\times0.304\big)=[1.2;3.2]\quad \text{(rounded)}$

(Note that 1.645 is the 90% normal quantile; Rothman conventionally reports 90% intervals. With $Z_c=1.96$ you recover the 95% limits [1.08; 3.55] shown in the riskratio() output above.)

The following papers also address the construction of the test statistic for the RR or the OR:

Miettinen and Nurminen (1985). Comparative analysis of two rates. Statistics in Medicine, 4: 213-226.
Becker (1989). A comparison of maximum likelihood and Jewell's estimators of the odds ratio and relative risk in single 2 × 2 tables. Statistics in Medicine, 8(8): 987-996.
Tian, Tang, Ng, and Chan (2008). Confidence intervals for the risk ratio under inverse sampling. Statistics in Medicine, 27(17): 3301-3324.
Walter and Cook (1991). A comparison of several point estimators of the odds ratio in a single 2 x 2 contingency table. Biometrics, 47(3): 795-811.
Notes: As far as I know, there's no reference to relative risk in Selvin's book (also referenced in the online help). Alan Agresti also has some code for relative risk.
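The worked example above is easy to verify in a few lines of code (Python standard library here; the answer's own code is R). This reproduces the point estimate, the score statistic, and the 95% Wald limits from the riskratio() output:

```python
import math

# Rothman's example: exposed 12/14 cases, non-exposed 7/16
a1, n1 = 12, 14
a0, n0 = 7, 16
m1 = a1 + a0                       # total cases
m0 = (n1 - a1) + (n0 - a0)         # total non-cases
N = n1 + n0

rr = (a1 / n1) / (a0 / n0)
a1_tilde = m1 * n1 / N             # expected exposed cases under H0
V = m1 * n1 * m0 * n0 / (N**2 * (N - 1))
chi_s = (a1 - a1_tilde) / math.sqrt(V)
sd_log_rr = math.sqrt(1/a1 - 1/n1 + 1/a0 - 1/n0)

z = 1.959964                       # standard normal 0.975 quantile
lower = math.exp(math.log(rr) - z * sd_log_rr)
upper = math.exp(math.log(rr) + z * sd_log_rr)
print(round(rr, 6), round(chi_s, 2), round(lower, 2), round(upper, 2))
```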
Calculation of Relative Risk Confidence Interval
The three options that are proposed in riskratio() refer to an asymptotic or large sample approach, an approximation for small sample, a resampling approach (asymptotic bootstrap, i.e. not based on pe
Calculation of Relative Risk Confidence Interval The three options that are proposed in riskratio() refer to an asymptotic or large sample approach, an approximation for small sample, a resampling approach (asymptotic bootstrap, i.e. not based on percentile or bias-corrected). The former is described in Rothman's book (as referenced in the online help), chap. 14, pp. 241-244. The latter is relatively trivial so I will skip it. The small sample approach is just an adjustment on the calculation of the estimated relative risk. If we consider the following table of counts for subjects cross-classififed according to their exposure and disease status, Exposed Non-exposed Total Cases a1 a0 m1 Non-case b1 b0 m0 Total n1 n0 N the MLE of the risk ratio (RR), $\text{RR}=R_1/R_0$, is $\text{RR}=\frac{a_1/n_1}{a_0/n_0}$. In the large sample approach, a score statistic (for testing $R_1=R_0$, or equivalently, $\text{RR}=1$) is used, $\chi_S=\frac{a_1-\tilde a_1}{V^{1/2}}$, where the numerator reflects the difference between the oberved and expected counts for exposed cases and $V=(m_1n_1m_0n_0)/(n^2(n-1))$ is the variance of $a_1$. Now, that's all for computing the $p$-value because we know that $\chi_S$ follow a chi-square distribution. In fact, the three $p$-values (mid-$p$, Fisher exact test, and $\chi^2$-test) that are returned by riskratio() are computed in the tab2by2.test() function. For more information on mid-$p$, you can refer to Berry and Armitage (1995). Mid-P confidence intervals: a brief review. The Statistician, 44(4), 417-423. Now, for computing the $100(1-\alpha)$ CIs, this asymptotic approach yields an approximate SD estimate for $\ln(\text{RR})$ of $(\frac{1}{a_1}-\frac{1}{n_1}+\frac{1}{a_0}-\frac{1}{n_0})^{1/2}$, and the Wald limits are found to be $\exp(\ln(\text{RR}))\pm Z_c \text{SD}(\ln(\text{RR}))$, where $Z_c$ is the corresponding quantile for the standard normal distribution. 
The small sample approach makes use of an adjusted RR estimator: we just replace the denominator $a_0/n_0$ by $(a_0+1)/(n_0+1)$. As to how to decide whether we should rely on the large or small sample approach, it is mainly by checking expected cell frequencies; for the $\chi_S$ to be valid, $\tilde a_1$, $m_1-\tilde a_1$, $n_1-\tilde a_1$ and $m_0-n_1+\tilde a_1$ should be $> 5$. Working through the example of Rothman (p. 243), sel <- matrix(c(2,9,12,7), 2, 2) riskratio(sel, rev="row") which yields $data Outcome Predictor Disease1 Disease2 Total Exposed2 9 7 16 Exposed1 2 12 14 Total 11 19 30 $measure risk ratio with 95% C.I. Predictor estimate lower upper Exposed2 1.000000 NA NA Exposed1 1.959184 1.080254 3.553240 $p.value two-sided Predictor midp.exact fisher.exact chi.square Exposed2 NA NA NA Exposed1 0.02332167 0.02588706 0.01733469 $correction [1] FALSE attr(,"method") [1] "Unconditional MLE & normal approximation (Wald) CI" By hand, we would get $\text{RR} = (12/14)/(7/16)=1.96$, $\tilde a_1 = 19\times 14 / 30= 8.87$, $V = (8.87\times 11\times 16)/ \big(30\times (30-1)\big)= 1.79$, $\chi_S = (12-8.87)/\sqrt{1.79}= 2.34$, $\text{SD}(\ln(\text{RR})) = \left( 1/12-1/14+1/7-1/16 \right)^{1/2}=0.304$, $95\% \text{CIs} = \exp\big(\ln(1.96)\pm 1.645\times0.304\big)=[1.2;3.2]\quad \text{(rounded)}$. The following papers also addresses the construction of the test statistic for the RR or the OR: Miettinen and Nurminen (1985). Comparative analysis of two rates. *Statistics in Medicine, 4: 213-226. Becker (1989). A comparison of maximum likelihood and Jewell's estimators of the odds ratio and relative risk in single 2 × 2 tables. Statistics in Medicine, 8(8): 987-996. Tian, Tang, Ng, and Chan (2008). Confidence intervals for the risk ratio under inverse sampling. Statistics in Medicine, 27(17), 3301-3324. Walter and Cook (1991). A comparison of several point estimators of the odds ratio in a single 2 x 2 contingency table. Biometrics, 47(3): 795-811. 
Notes

As far as I know, there is no reference to relative risk in Selvin's book (also referenced in the online help). Alan Agresti also has some code for relative risk.
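The large-sample Wald limits are simple enough to re-derive outside R. Here is a quick Python re-computation of the hand-worked example above (the counts a1 = 12, n1 = 14, a0 = 7, n0 = 16 are taken from Rothman's table; this is only a numerical check, not the riskratio() code):

```python
from math import exp, log, sqrt

# Counts from Rothman's example: exposed cases/total, unexposed cases/total
a1, n1 = 12, 14
a0, n0 = 7, 16

rr = (a1 / n1) / (a0 / n0)                  # MLE of the risk ratio
se = sqrt(1/a1 - 1/n1 + 1/a0 - 1/n0)        # SD of log(RR)
ci95 = (exp(log(rr) - 1.96 * se), exp(log(rr) + 1.96 * se))

print(round(rr, 6))                          # 1.959184, as in the riskratio() output
print(round(se, 3))                          # 0.304
print(round(ci95[0], 2), round(ci95[1], 2))  # 1.08 3.55
```

Using $Z_c = 1.645$ instead of 1.96 reproduces the 90% limits [1.2; 3.2] computed by hand.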
Calculation of Relative Risk Confidence Interval
I bookmarked this thread from r-help a while back: "Summary, was Re: Confidence interval for relative risk", and you might find the referenced PDF by Michael Dewey helpful: "confidence intervals for risk ratios". If you can, though, get a copy of the following book. I know it covers the unconditional likelihood and bootstrap methods for sure, and I suspect the small-sample adjustment too (I don't have a copy handy to check the last): Biostatistical Methods: The Assessment of Relative Risks.
On univariate outlier tests (or: Dixon Q versus Grubbs)
Really, these approaches have not been actively developed for a very long time. For univariate outliers, the optimal (most efficient) filter is median $\pm\,\delta \times$ MAD, or better yet (if you have access to R) median $\pm\,\delta \times$ Qn (so you don't assume the underlying distribution to be symmetric). The Qn estimator is implemented in the package robustbase. See: Rousseeuw, P.J. and Croux, C. (1993). Alternatives to the Median Absolute Deviation. Journal of the American Statistical Association, 88, 1273-1283.

Response to comment: Two levels.

A) Philosophical. Both the Dixon and Grubbs tests are only able to detect a particular type of (isolated, single) outlier. Over the last 20-30 years the concept of an outlier has evolved into "any observation that departs from the main body of the data", without further specification of what the particular departure is. This characterization-free approach renders the idea of building tests to detect outliers void. The emphasis shifted to the concept of estimators (a classical example of which is the median) that retain their values (i.e. are insensitive) even under a large rate of contamination by outliers (such an estimator is then said to be robust), and the question of detecting outliers becomes void.

B) Weakness. You can see that the Grubbs and Dixon tests easily break down: one can easily generate contaminated data that would pass either test with ease (i.e. without breaking the null). This is particularly obvious in the Grubbs test, because outliers break down the mean and s.d. used in the construction of the test statistic. It is less obvious in the Dixon test, until one learns that order statistics are not robust to outliers either. I think you will find more explanation of these facts in papers oriented towards a general non-statistician audience, such as the one cited above (I can also think of the Fast-MCD paper by Rousseeuw).
If you consult any recent book on or introduction to robust analysis, you will notice that neither Grubbs nor Dixon is mentioned.
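As a concrete sketch of the recommended filter, here is a minimal Python version of the median $\pm\,\delta \times$ MAD rule (the function name and the cutoff $\delta = 3$ are my own choices, not from robustbase; 1.4826 is the usual consistency factor that makes the MAD estimate the s.d. at the normal):

```python
import statistics

def mad_outliers(x, delta=3.0):
    """Flag points outside median +/- delta * MAD (MAD rescaled for the normal)."""
    med = statistics.median(x)
    mad = 1.4826 * statistics.median([abs(v - med) for v in x])
    return [v for v in x if abs(v - med) > delta * mad]

# A gross outlier is flagged, even though it would inflate a mean/s.d.-based rule:
print(mad_outliers([1, 2, 3, 4, 5, 100]))   # [100]
```

The same call with a Qn-based scale instead of the MAD would drop the implicit symmetry assumption, as discussed above.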
Minimizing Kullback–Leibler divergence using the Hessian
To add some clarity to the original paper you cited in the previous post and to make notation consistent, I'll write the true density as $p(x)$ and the approximating density as $q_\theta(x)$, parameterized by $\theta$. Since you haven't specified the true density (the one being approximated), I'll try to (re-)explain what the proof means in each step. The goal is to show that as long as the approximating density $q_\theta(x)$ belongs to an exponential family, minimizing the Kullback-Leibler (KL) divergence $\mathrm{KL}(p\| q_\theta)$ only requires matching the sufficient statistics.

First, look at the definition of the KL divergence: \begin{align} \mathrm{KL}(p\| q_\theta) &= \int\log \frac{p(x)}{q_\theta(x)}\, p(x)\, dx \\ &= \mathrm{E}_{p(x)}\left(\log \frac{p(x)}{q_\theta(x)} \right) \\ &= \mathrm{E}_{p(x)}(\log p(x)) - \mathrm{E}_{p(x)}(\log q_\theta(x)). \end{align}

Since we need to minimize this (as a function of the parameters $\theta$), we will make this point clear by rewriting it as $f(\theta) = \mathrm{KL}(p\| q_\theta)$, and use the first-order condition $\nabla_\theta f(\theta) = 0$. We see that $\mathrm{E}_{p(x)}(\log p(x))$ in the KL divergence disappears upon differentiation because it's not a function of $\theta$. Therefore, $$ \nabla_\theta f(\theta) = -\nabla_\theta \mathrm{E}_{p(x)}(\log q_\theta(x)). $$

Now, let's go back to the definition of the exponential family: $q_\theta(x) = h(x) \exp\{\theta^\top \phi(x) - A(\theta) \}$, where $A(\theta)$ is the log-normalizing constant. Taking the logarithm of this density yields $$ \begin{align} \log q_\theta(x) &= \log h(x) + \theta^\top \phi(x) - A(\theta)\\ \mathrm{E}_{p(x)}(\log q_\theta(x)) &= \mathrm{E}_{p(x)}(\log h(x)) + \theta^\top \mathrm{E}_{p(x)}(\phi(x)) - A(\theta)\\ \nabla_\theta \mathrm{E}_{p(x)}(\log q_\theta(x)) &= \mathrm{E}_{p(x)}(\phi(x)) - \nabla_\theta A(\theta). \end{align} $$ But as I've derived in this answer, $\nabla_\theta A(\theta) = \mathrm{E}_{q_\theta(x)}(\phi(x))$.
Therefore, the first-order condition $\nabla_\theta f(\theta) = 0$ gives us $\mathrm{E}_{p(x)}(\phi(x)) = \mathrm{E}_{q_\theta(x)}(\phi(x))$.

To verify that solving the first-order condition for $\theta$ gives us the minimizer, we must compute the Hessian matrix and check that it is positive-definite. From $\nabla_\theta \mathrm{E}_{p(x)}(\log q_\theta(x)) = \mathrm{E}_{p(x)}(\phi(x)) - \nabla_\theta A(\theta)$, it's easily observed that $\nabla_\theta^2 f(\theta) = \nabla_\theta^2 A(\theta)$, which is the covariance matrix of the sufficient statistics. I will not prove this because it is a standard result in mathematical statistics, but refer to lecture notes like this if you're interested. Again, $[\nabla_\theta^2 A(\theta)]_{ij} = \mathrm{Cov}(\phi_i(x),\phi_j(x))$. Covariance matrices are positive semi-definite by construction, and positive-definite whenever the sufficient statistics are not linearly dependent (a minimal representation), so the first-order condition, when solved for $\theta$, indeed produces the minimizer.

Now, since you've asked how this plays out for the normal-gamma, we've already established that the sufficient statistics are $\phi_1(x) = \log T$, $\phi_2(x) = T$, $\phi_3(x)=TX$, and $\phi_4(x)=TX^2$. To obtain the covariance matrix in full, you should compute the 10 distinct second-order derivatives $\frac{d^2}{d\theta_i d\theta_j} A(\theta)$ for $1\le i\le j\le 4$ of $$ A(\theta)= -\left(\theta_1+\dfrac{1}{2} \right)\log\left(\frac{\theta_3^2}{4\theta_4} - \theta_2 \right) -\frac{1}{2}\log(-2\theta_4) +\log\Gamma\left(\theta_1+\frac{1}{2}\right) + \frac{1}{2}\log(2\pi), $$ for which I seriously recommend that you consider using tools like Wolfram Alpha.
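The two identities used above, $\nabla_\theta A(\theta) = \mathrm{E}_{q_\theta}(\phi(x))$ and $\nabla_\theta^2 A(\theta) = \mathrm{Cov}(\phi(x))$, are easy to check numerically for a simple one-parameter family. Here is a sketch (my own toy check, not from the paper) using the Bernoulli family, where $A(\theta) = \log(1+e^\theta)$, $\phi(x) = x$, the mean is the sigmoid of $\theta$, and the variance is $p(1-p)$:

```python
from math import exp, log

A = lambda t: log(1 + exp(t))        # log-partition of the Bernoulli family

theta, h = 0.7, 1e-4
p = exp(theta) / (1 + exp(theta))    # mean parameter E[phi(x)] = P(x=1)

# finite-difference gradient and Hessian of A
grad = (A(theta + h) - A(theta - h)) / (2 * h)
hess = (A(theta + h) - 2 * A(theta) + A(theta - h)) / h**2

assert abs(grad - p) < 1e-8              # dA/dtheta matches the mean of phi
assert abs(hess - p * (1 - p)) < 1e-4    # d2A/dtheta2 matches Var(phi) >= 0
```

The positivity of the second derivative is what guarantees, in this scalar case, that moment matching gives the minimizer and not some other stationary point.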
Estimating $\theta$ based on censored data when $X_i\sim \text{Uniform}(0,\theta)$ with $\theta\ge 1$
Some more details than the other answers. The probability distribution of $Y$ will be a mixture distribution with two components, one continuous and one discrete. To find the distribution of $Y$, write $$ \DeclareMathOperator{\P}{\mathbb{P}} \P(Y\in A) = \P(Y\in A \mid Y<1)\P(Y<1)+\P(Y\in A\mid Y=1)\P(Y=1)\\ =\frac1\theta\cdot\int_{A\cap [0,1)} \; dy + \frac{\theta-1}{\theta}\cdot \mathbb{1}(1\in A) $$ leading directly to the likelihood $$ L_Y(\theta) = \left(\frac{\theta-1}{\theta}\right)^{n-r}\cdot \left( \frac1\theta\right)^r $$ showing that $R$ alone is a sufficient statistic. Now the usual procedure leads to the maximum likelihood estimator $$ \hat{\theta}_{ML}= 1+ \frac{n-r}r $$ which also seems intuitively reasonable.

Some more details

Likelihood defined in this situation with a mixture distribution might be new to many, so some details might help. But first, this is discussed on-site earlier at Maximum likelihood function for mixed type distribution and Weighted normal errors regression with censoring. We will need some concepts from measure theory ... let $\mu^*$ be the measure given by $$ \mu^*(A)= \mu(A) + \mathbb{1}\{ 1\in A\} $$ where $\mu$ is Lebesgue measure and the second term is an atom at $1$. Now, the distribution of $Y$ can be written as a density (in the sense of the Radon-Nikodym theorem) with respect to $\mu^*$. This looks like $$ \P(Y\in A) =\int_A f(y) \; \mu^*(dy) =\int_A f(y) \; \mu(dy) + \int_A f(y)\; d\delta_1(y) $$ where $\delta_1$ is the atom at $1$. The Radon-Nikodym density $f$ can be written $$ f(y)= \frac1\theta \mathbb{1}\{0\le y < 1\} + \frac{\theta-1}\theta\cdot \mathbb{1}\{y=1\} $$ Note that the first term in the density only contributes to the integral with respect to $d\mu$, and the second term only to the integral with respect to $d\delta_1$.
So, defining the likelihood using the RN-derivative $f$, we get $$ L_Y(\theta)=\prod_i^n f(y_i)=\prod_i^n \left\{ \frac1\theta \mathbb{1}\{0\le y_i < 1\} + \frac{\theta-1}\theta\cdot \mathbb{1}\{y_i=1\} \right\} $$ and simplifying gives the likelihood above.
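A quick Monte Carlo sanity check of $\hat{\theta}_{ML} = 1 + (n-r)/r$ (a Python sketch; the true $\theta = 2$ and the sample size are arbitrary choices of mine):

```python
import random

random.seed(0)
theta, n = 2.0, 100_000

# draw from U(0, theta) and censor at 1
y = [min(random.uniform(0, theta), 1.0) for _ in range(n)]
r = sum(v < 1.0 for v in y)      # number of uncensored observations

theta_hat = 1 + (n - r) / r      # equivalently n / r
assert abs(theta_hat - theta) < 0.05
```

Since $P(Y < 1) = 1/\theta$, roughly half of the draws are uncensored here, and the estimator recovers $\theta \approx 2$.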
Estimating $\theta$ based on censored data when $X_i\sim \text{Uniform}(0,\theta)$ with $\theta\ge 1$
It's redundant to have the $\textbf{1}_{0 < y_i < \theta}$ factor in the likelihood, because the product is already over samples where that condition holds. So you don't need that factor there, and the likelihood is then nicely differentiable and easy to optimize for $\theta$. And yes, your likelihood is correct without that factor. When you do so, you should get a result that seems intuitive. For example, if $\theta=2$, we should expect half the samples to be less than one. Think, in general, what proportion of samples (what $\frac{r}{n}$ in your notation) you should expect to see $<1$ for a given value of $\theta$, and see if that's consistent with your ML estimate $\hat\theta$.
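To spell out the intuition (a trivial check of my own): for $\theta \geqslant 1$, $P(Y < 1) = 1/\theta$, so $r/n$ estimates $1/\theta$, and inverting that proportion gives the ML estimate $n/r$ derived in the other answers.

```python
# For theta = 2, the uncensored proportion should be one half,
# and inverting it recovers theta.
theta = 2.0
p_uncensored = min(1.0, 1.0 / theta)   # P(Y < 1)
assert p_uncensored == 0.5             # half the samples fall below 1
assert 1.0 / p_uncensored == theta     # n/r with r/n at its expectation
```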
Estimating $\theta$ based on censored data when $X_i\sim \text{Uniform}(0,\theta)$ with $\theta\ge 1$
For censoring problems like this you are dealing with an observable random variable that is a mixture of a continuous and a discrete part, with the discrete part occurring at the censoring value. For this type of data, it is sometimes easier to derive the likelihood function by starting with the CDF of the censored values and then using this to get their PDF. I'm going to do the derivation without your assumption that $\theta \geqslant 1$ initially, and then I'll add that assumption at the end, so that you can see how it is done in general.

Deriving the PDF for the censored values: The simplest way to do this is to first obtain the CDF. Since $X \sim \text{U}(0, \theta)$ you have: $$\mathbb{P}(X \leqslant x) = \frac{\min(x, \theta)}{\theta} \quad \quad \quad \text{for all } x \geqslant 0.$$ Thus, for all $y \geqslant 0$ we have the CDF: $$\begin{align} F_Y(y) &\equiv \mathbb{P}(Y \leqslant y) \\[8pt] &= \mathbb{P}(\min(X,1) \leqslant y) \\[8pt] &= \mathbb{P}(X \leqslant y) \cdot \mathbb{I}(y < 1) + \mathbb{I}(y \geqslant 1) \\[6pt] &= \frac{\min(y, \theta)}{\theta} \cdot \mathbb{I}(y < 1) + \mathbb{I}(y \geqslant 1). \\[6pt] \end{align}$$ To get the PDF for this mixture variable we use the Dirac delta function $\delta$ for the discrete part. Differentiating the CDF and using the Dirac delta function gives the PDF:$^\dagger$ $$\begin{align} f_Y(y) &= \frac{dF_Y}{dy}(y) \\[6pt] &= \bigg[ \frac{d}{dy} \frac{\min(y, \theta)}{\theta} \bigg] \mathbb{I}(y < 1) + \bigg[ 1 - \frac{\min(\theta, 1)}{\theta} \bigg] \delta(1) \\[6pt] &= \bigg[ \frac{1}{\theta} \cdot \mathbb{I}(y \leqslant \theta) \bigg] \mathbb{I}(y < 1) + \frac{\theta - \min(\theta, 1)}{\theta} \cdot \delta(1) \\[6pt] &= \frac{1}{\theta} \cdot \mathbb{I}(y < \min(\theta, 1)) + \frac{\max(0, \theta-1)}{\theta} \cdot \delta(1). \\[6pt] \end{align}$$ Now, if you add in your assumption that $\theta \geqslant 1$ then you get the simplified PDF: $$f_Y(y) = \frac{1}{\theta} \cdot \mathbb{I}(y < 1) + \frac{\theta-1}{\theta} \cdot \delta(1).$$

The likelihood function and MLE: Now that we have the PDF we can write the likelihood function. Using your notation, suppose we let $R \equiv R(\mathbf{y}) \equiv \sum_{i=1}^n \mathbb{I}(y_i < 1)$ be the number of non-censored data points. We can then write the likelihood as: $$\begin{align} L_\mathbf{y}(\theta) &= \prod_{i=1}^n f_Y(y_i) \\[6pt] &= \bigg( \frac{1}{\theta} \bigg)^r \times \bigg( \frac{\theta-1}{\theta} \bigg)^{n-r} \\[6pt] &= \frac{(\theta-1)^{n-r}}{\theta^n}, \\[6pt] \end{align}$$ which gives the log-likelihood function: $$\ell_\mathbf{y}(\theta) = (n-r) \log(\theta-1) - n \log (\theta) \quad \quad \quad \quad \quad \text{for } \theta \geqslant 1.$$ The statistic $R$ is sufficient for this problem, and the MLE is: $$\hat{\theta}_\text{MLE} = \frac{n}{r}.$$ I will leave the rest of the analysis (completeness, etc.) to you. Further analysis should be reasonably simple due to the simple form of the log-likelihood function.

$^\dagger$ We use the convention that $0 \cdot \delta(x) = 0$, so that the last term disappears if $\theta \leqslant 1$.
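As a numerical cross-check of the closed form (a Python sketch with arbitrary example counts $n = 30$, $r = 12$ of my own choosing), a crude grid maximization of the log-likelihood above reproduces $\hat\theta = n/r$:

```python
from math import log

n, r = 30, 12                           # arbitrary example counts

def loglik(theta):
    """Log-likelihood (n - r) log(theta - 1) - n log(theta), theta > 1."""
    return (n - r) * log(theta - 1) - n * log(theta)

# crude grid search over theta in (1, 11]
grid = [1.0001 + 0.0001 * k for k in range(100_000)]
theta_num = max(grid, key=loglik)

assert abs(theta_num - n / r) < 1e-3    # analytic MLE n/r = 2.5
```

Setting $\ell'(\theta) = (n-r)/(\theta-1) - n/\theta = 0$ gives $\theta = n/r$ directly, which is what the grid search lands on.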
How to properly perform predictions in ordinal regression?
The reason why picking the most probable category is standard is that it doesn't involve any treatment of ordinal categories as if they were interval scaled. Computing the expectation treats the ordinal categories as numbers. The MAE does the same, so it is no surprise that the expectation gives better results using the MAE as a loss. The most probable category will normally give better results if as a loss you just count how often you get the category wrong (this is not a theorem and may occasionally not hold, at least not on test or validation data). So the way you pick the predicted category is related to what loss is relevant for you.

Arguably neither MAE nor misclassification probability is 100% appropriate: one implicitly treats the data as having interval level, the other as having nominal level. There are loss functions specifically for ordinal data in the literature, but chances are they are rarely used, and there's surely more than one way of defining them, potentially leading to ambiguous results. Ultimately it's your call what kind of loss is relevant for you.

There's a dogma that ordinal data should not be used to do interval-scaled stuff such as computing means, expectations, or MAE. However, in a number of applications treating the categories as interval-scaled numbers seems appropriate, particularly if there is an underlying quantitative scale that is equally split into ordinal categories (just assuming that there's an underlying scale without knowing what it is doesn't help, though, because the categories might represent differently large intervals of values, in which case using the category numbers as interval scaled would misrepresent the underlying scale), or where data come from questionnaires that implicitly communicate to the respondents that data will be evaluated as integer numbers (for example if such numbers are explicitly given).
If you actually decide that the MAE is most relevant for you, nothing should stop you from using the expectation or whatever is best to optimise your validation loss; however, it may then well be that an ordinal regression is not the best method to start with in the first place. If you ultimately ignore the ordinal character of your data, why bother running a regression that is based on it (although if the model is close to the truth it may work well)? Whether the misclassification loss is appropriate depends on whether, in your specific application, it matters how wrong a prediction actually is when it is wrong. Once more, if this is what you decide, methods for (categorical) classification can be competitive against ordinal regression.

A key issue is that any definition of a loss function will somehow quantify what, according to the basic idea of ordinal data, should not be quantified. You can somehow estimate the quantification from the data (looking at empirical frequencies of the categories, although it is problematic to assume that these are informative about "true" quantitative differences between categories), but ultimately, if you want to quantify loss, there is no way around having one. And if the loss is what you are most concerned about, pick the method that optimises yours.
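The trade-off can be made concrete with a toy predictive distribution over five ordered categories (the probabilities below are made up for illustration): the mode minimises expected 0-1 loss, while the rounded expectation (here also the median) minimises expected MAE, and the two disagree.

```python
# hypothetical predictive distribution over ordered categories 1..5
probs = {1: 0.30, 2: 0.00, 3: 0.25, 4: 0.25, 5: 0.20}

mode = max(probs, key=probs.get)                 # most probable category
exp_val = sum(k * p for k, p in probs.items())   # expectation: treats labels as numbers

exp_mae = lambda c: sum(p * abs(k - c) for k, p in probs.items())  # expected MAE
exp_01 = lambda c: 1 - probs[c]                                    # expected 0-1 loss

assert mode == 1 and round(exp_val) == 3
assert exp_01(mode) < exp_01(3)     # mode wins on misclassification: 0.70 < 0.75
assert exp_mae(3) < exp_mae(mode)   # rounded expectation wins on MAE: 1.25 < 2.05
```

So whichever prediction rule "looks better" is entirely determined by which loss you evaluate, which is the point made above.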
34,841
How to properly perform predictions in ordinal regression?
When Y is discrete we are really stuck:

- The most probable category may not be very probable, so using it may result in an arbitrary forced choice
- If Y is not numeric and is not approximately interval-scaled then you can't use the mean
- Quantiles work only for continuous distributions (they are too jumpy when the distribution is discrete; you may find that adding a single observation moves a quantile a whole Y level)

So we are left with:

- Estimating the whole probability distribution of Y given X, which is fine but is just noisy
- Estimating P(Y=y | X), which is fine, but will have low precision
- Estimating P(Y $\geq$ y | X) for pre-specified y, which is fine
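The last option is easy to obtain from any estimate of the conditional distribution; a minimal Python sketch (with made-up probabilities) for turning estimates of P(Y = y | X) into the exceedance probabilities P(Y ≥ y | X):

```python
import numpy as np

# Hypothetical estimate of P(Y = y | X) for one observation,
# over four ordered levels y = 1, ..., 4.
p = np.array([0.10, 0.25, 0.40, 0.25])

# P(Y >= y | X) for each level y: a reversed cumulative sum.
p_geq = p[::-1].cumsum()[::-1]
# p_geq[0] = P(Y >= 1) = 1 and p_geq[-1] = P(Y = 4) = 0.25.
```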
34,842
What is the distribution of $Z(X+Y)+XY$?
This is a special case of a real symmetric quadratic form $\mathbf{x^\prime \mathbb{Q} x},$ where $$z(x+y) + xy = xy+yz+xz= \frac{1}{2}\pmatrix{x,&y,&z}\pmatrix{0&1&1\\1&0&1\\1&1&0}\pmatrix{x\\y\\z}.$$ We find its eigenvalues $\lambda$ by solving its characteristic equation $$p_{\mathbb Q}(\lambda) = \det\left(\mathbb{Q} - \lambda\mathbb{I}_3\right) = -\lambda^3+3\lambda+2=-(\lambda-2)(\lambda+1)^2.$$ The eigenspace for $\lambda=2$ is (therefore) the kernel of $$\mathbb{Q}-2\mathbb{I}_3 = \pmatrix{-2&1&1\\1&-2&1\\1&1&-2},$$ which evidently is generated by $(1,1,1)^\prime.$ Similarly, by inspection, we can find a basis for the two-dimensional eigenspace for $\lambda=-1.$ To match the original form I selected the first basis element to be $(1,-1,0)$ (corresponding to the linear combination $x-y,$ whose square will contribute the $xy$ term) and then chose an orthogonal vector $(1,1,-2).$ Thus, it must be the case that $xy+yz+zx$ can be written as a linear combination of the squares of $x+y+z,$ $x-y,$ and $x+y-2z,$ respectively. It's now easy algebra to obtain $$xy+yz+zx= \frac{1}{12}\left(4(x+y+z)^2-3(x-y)^2-(x+y-2z)^2\right).$$ When $(X,Y,Z)$ is standard Normal, the three squared expressions are uncorrelated (by construction) and have variances $1^2+1^2+1^2=3,$ $1^2+(-1)^2=2,$ and $1^2+1^2+(-2)^2=6,$ respectively. 
So, letting $U=(X+Y+Z)/\sqrt{3},$ $V=(X-Y)/\sqrt{2},$ and $W=(X+Y-2Z)/\sqrt{6},$ we have shown $(U,V,W)$ is standard Normal--all have unit variances and are uncorrelated--and that $$XY + YZ + XZ = \frac{1}{12}\left(12 U^2 - 6 V^2 - 6 W^2\right) = U^2 - \frac{1}{2}\left(V^2 + W^2\right).$$ Finally, let the expectation of $(X,Y,Z)$ be $(\kappa,\lambda,\mu).$ By linearity, the expectation of $(U,V,W)$ is $$E[(U,V,W)] = \left(\frac{\kappa+\lambda+\mu}{\sqrt{3}}, \frac{\kappa-\lambda}{\sqrt{2}}, \frac{\kappa+\lambda-2\mu}{\sqrt{6}}\right).$$ By definition, $U^2$ has a $\chi^2_1((\kappa+\lambda+\mu)^2/3)$ distribution and, because $V$ and $W$ are independent, $V^2+W^2$ has a $\chi^2_2((\kappa-\lambda)^2/2 + (\kappa+\lambda-2\mu)^2/6)$ distribution. Moreover, $U^2$ is independent of $V^2+W^2$ because $(U,V,W)$ are independent. Thus,

The distribution of $XY+YZ+ZX$ is that of a $\chi^2_1((\kappa+\lambda+\mu)^2/3)$ distribution minus one-half of a $\chi^2_2((\kappa-\lambda)^2/2 + (\kappa+\lambda-2\mu)^2/6)$ distribution.

(I haven't simplified the expressions for the noncentrality parameters in order to display their derivation.) Arbitrary symmetric quadratic forms in Normal variables are analyzed in the same way, with comparable results: their distributions are those of linear combinations of (possibly) non-central chi-squared variables. When the form is positive-definite, all coefficients will be positive (and vice versa).

As a check, we can generate random values in both ways. Using R, we sample from the form in the most straightforward way (note that `mu` must be passed to `rnorm` so the simulated variables actually have the stated means):

n <- 1e4
mu <- c(1,-1,3) # Mean of (X,Y,Z) -- vary at will
set.seed(17)
xyz <- matrix(rnorm(3*n, mu), 3)
q <- xyz[1,]*xyz[2,] + xyz[2,]*xyz[3,] + xyz[3,]*xyz[1,]

The chi-squared equivalent is also simple:

ncp1 <- (mu[1] + mu[2] + mu[3])^2/3
ncp2 <- (mu[1] - mu[2])^2/2 + (mu[1] + mu[2] - 2*mu[3])^2/6
c1 <- rchisq(n, 1, ncp=ncp1)
c2 <- rchisq(n, 2, ncp=ncp2)
q. <- c1 - c2/2

A probability plot compares the distributions:

plot(sort(q), sort(q.), asp=1, main="Theory v. Simulation", pch=19,
     col="#00000020", xlab="XY + YZ + ZX", ylab="")
mtext(bquote({chi[1]^2}(.(signif(ncp1, 2))) - {chi[2]^2}(.(signif(ncp2, 2)))/2),
      side=2, line=2)
abline(c(0,1), lwd=2, col="Red")

Because the points come close to the reference line (of equal values), we see the two samples agree to within expected random deviation.
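The algebraic identity at the heart of the derivation holds pointwise, so it can be checked directly; here is a Python sketch of that check (a translation for illustration, not part of the original R code):

```python
import numpy as np

# Draw (X, Y, Z) with means (1, -1, 3), matching the R example.
rng = np.random.default_rng(17)
x, y, z = rng.normal(size=(3, 1000)) + np.array([[1.0], [-1.0], [3.0]])

q = x*y + y*z + z*x
u = (x + y + z) / np.sqrt(3)
v = (x - y) / np.sqrt(2)
w = (x + y - 2*z) / np.sqrt(6)

# XY + YZ + ZX = U^2 - (V^2 + W^2)/2 holds for every sample point,
# since it is an algebraic identity, not just a distributional one.
assert np.allclose(q, u**2 - (v**2 + w**2) / 2)
```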
34,843
Intuition behind Weibull distribution?
Since the Weibull distribution is often used in connection with reliability or survival, the hazard rate function is crucial, see Non-monotone hazard functions. Below is a plot of the Weibull hazard rate, for scale 1 and some assorted values of the shape $k$; note that $k=1$ is the exponential distribution. So this gives one intuition: the Weibull hazard rate is monotone, decreasing for $k<1$ and increasing for $k>1$. See Wikipedia links to many applications.

The paper by Waloddi Weibull, which gave the distribution its name, can be found here, and is actually quite accessible. He says:

> The objection has been stated that this distribution function has no theoretical basis. But insofar as the author understands, there are - with very few exceptions - the same against all other df, applied to real populations from natural biological fields, at least insofar as the theoretical has anything to do with the population in question. Furthermore, it is utterly hopeless to expect a theoretical basis for distribution functions of random variables such as strength of materials or of machine parts or particle sizes, the "particles" being fly ash, Cyrtoideae, or even adult males, born in the British Isles

Nevertheless, in the paper he does give a justification:

> Assume that we have a chain consisting of several links. If we have found, by testing, the probability of failure $P$ applied to a "single" link, and if we want to find the probability of failure $P_n$ of a chain consisting of $n$ links, we have to base our deductions upon the proposition that the chain as a whole has failed, if anyone of its parts has failed.

If you then start with an exponential distribution for a single link, you will arrive at the Weibull for $n$ links. What is more, if the distribution for a single link is Weibull, the distribution for the chain will also be Weibull.

As pointed out in comments by @Scortchi - Reinstate Monica, ultimately this thinking will lead you to the Fisher–Tippett–Gnedenko theorem. For the record, R code for the plot:

hweibull <- function(x, shape, scale=1) {
    dweibull(x, shape, scale) / pweibull(x, shape, scale, lower.tail=FALSE)
}
k <- seq(from=0.6, to=1.5, by=0.2)
mypalette <- RColorBrewer::brewer.pal(length(k), "Oranges")
for (t in seq_along(k)) {
    plot(function(x) hweibull(x, k[t]), from=0, to=10, col=mypalette[t],
         add=if(t==1) FALSE else TRUE, main="Weibull hazard",
         xlab="x", ylab="", lwd=2)
}
legend("topright", paste("k=", round(k, 2)), col=mypalette, text.col=mypalette)
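The monotonicity claim can also be checked from the closed form of the hazard, $h(x) = (k/\lambda)(x/\lambda)^{k-1}$ for shape $k$ and scale $\lambda$; a quick Python sketch (equivalent to, but independent of, the R code above):

```python
# Weibull hazard in closed form: f(x)/S(x) = (k/lam) * (x/lam)**(k-1).
def weibull_hazard(x, k, lam=1.0):
    return (k / lam) * (x / lam) ** (k - 1)

xs = [0.5, 1.0, 2.0, 4.0]
h_dec = [weibull_hazard(x, 0.6) for x in xs]    # k < 1: decreasing
h_const = [weibull_hazard(x, 1.0) for x in xs]  # k = 1: constant (exponential)
h_inc = [weibull_hazard(x, 1.5) for x in xs]    # k > 1: increasing
```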
34,844
Intuition behind Weibull distribution?
As the OP uses the Gumbel distribution (maximum extreme value distribution) as an example having an intuitive explanation, it's worth adding to Kjetil's answer (+1) by pointing out the association of that distribution with the Weibull. Say that $W$ represents the standard minimum extreme value form of the distribution (replacing $x$ with $-x$, and setting $a=1,b=1$ in the terminology of this question). If survival times $T$ have the following distribution: $$\log T = \alpha + \sigma W, $$ then $T$ follows a Weibull distribution with $\alpha = -\log \lambda$ and $k = 1/\sigma$. Then $W$ represents the random contribution of a standard minimum extreme value distribution to the distribution of $\log T$ values, and one could then interpret $k$ as the "tightness" of that distribution.
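A quick numerical check of this representation (a Python sketch; the scale is written as $e^\alpha$ here, which matches $\alpha = -\log\lambda$ when $\lambda$ is a rate parameter):

```python
import math

# If log T = alpha + sigma * W with W standard minimum Gumbel,
# i.e. P(W <= w) = 1 - exp(-e^w), then
# P(T <= t) = 1 - exp(-(t / e^alpha)**(1/sigma)):
# a Weibull CDF with scale e^alpha and shape k = 1/sigma.
alpha, sigma = 0.7, 0.5

def cdf_via_gumbel(t):
    w = (math.log(t) - alpha) / sigma
    return 1 - math.exp(-math.exp(w))

def cdf_weibull(t, scale=math.exp(alpha), k=1/sigma):
    return 1 - math.exp(-(t / scale) ** k)

for t in (0.5, 1.0, 2.0, 5.0):
    assert abs(cdf_via_gumbel(t) - cdf_weibull(t)) < 1e-9
```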
34,845
Limit of Integration of continuous function
Clearly, the integral can be rewritten as $E[f(Y_n)]$, where $Y_n = \frac{1}{n}(X_1 + \cdots + X_n)$, and $X_1, \ldots, X_n \text{ i.i.d.} \sim U(0, 1)$. By the (weak) law of large numbers, we have $Y_n \to_d \frac{1}{2}$. This implies, by the portmanteau lemma, that for any bounded and continuous function $h$, we have $E[h(Y_n)] \to E[h(1/2)] = h(1/2)$ as $n \to \infty$. The $f$ in your problem satisfies the boundedness and continuity conditions (extend its domain to $\mathbb{R}$ if you want more rigor), so the answer is $f(1/2)$.

$\newcommand{\eps}{\varepsilon}$ As @whuber suggested, there is a non-probabilistic argument, which goes as follows. For arbitrary given $\eps > 0$, since $f$ is continuous at $1/2$, there exists $\delta > 0$ such that $|f(x) - f(1/2)| < \eps$ whenever $|x - 1/2| < \delta$. Also, since $f$ is continuous on $[0, 1]$, there exists $M > 0$ such that $|f| \leq M$ for all $x \in [0, 1]$. (Indeed, from the proof below we can see that the original continuity condition may be weakened to: $f$ is continuous at $1/2$ and bounded.) For notational conciseness, for $0 \leq x_1, \ldots, x_n \leq 1$, denote $x_1 + \cdots + x_n$ by $s_n$, the region $[0, 1] \times \cdots \times [0, 1]$ by $V_n$, and the region $\{(x_1, \ldots, x_n): |n^{-1}s_n - 1/2| \geq \delta\}$ by $V_{n,\delta}$. Also denote $dx_1\cdots dx_n$ by $dx$; then \begin{align*} &\left|\int_{V_n} f(n^{-1}s_n) dx - f(1/2)\right| \leq \int_{V_n} |f(n^{-1}s_n) - f(1/2)| dx \\ =& \int_{V_{n, \delta}} |f(n^{-1}s_n) - f(1/2)| dx + \int_{V_{n, \delta}^c} |f(n^{-1}s_n) - f(1/2)| dx \\ <& \int_{V_{n, \delta}} |f(n^{-1}s_n) - f(1/2)| dx + \eps \leq 2M\int_{V_{n, \delta}} dx + \eps \end{align*} by the setup. So it remains to show that the volume of $V_{n, \delta}$ can be made arbitrarily small when $n$ is sufficiently large. 
To this end, Chebyshev's inequality (or repeating its proof essence) implies that \begin{align*} & \int_{V_{n, \delta}} dx \leq \delta^{-2}\int_{V_n}(n^{-1}s_n - 1/2)^2 dx \\ =& \delta^{-2}\int_{V_n}\left(n^{-2}s_n^2 - n^{-1}s_n + \frac{1}{4}\right) dx \\ =& \delta^{-2}\left(n^{-2}\sum_{i = 1}^n \int_{V_n}x_i^2 dx + 2n^{-2}\sum_{1 \leq i < j \leq n}\int_{V_n}x_ix_j dx - n^{-1}\sum_{i = 1}^n\int_{V_n}x_i dx + \frac{1}{4}\right) \\ =& \delta^{-2}\left(\frac{1}{3n} + 2n^{-2} \times \frac{n(n - 1)}{2} \times \frac{1}{4} - n^{-1}\times \frac{n}{2} + \frac{1}{4}\right) \\ =& \frac{1}{12n\delta^2} \to 0 \end{align*} as $n \to \infty$, and this is what we want to show.
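A quick Monte Carlo check in Python, taking $f(x) = x^2$ so that the integral has the closed form $\frac{1}{4} + \frac{1}{12n}$ (the variance term $\frac{1}{12n}$ computed above plus the squared mean $\frac14$) and the limit is $f(1/2) = 1/4$:

```python
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: x**2  # f(1/2) = 0.25

for n in (1, 10, 100):
    # Monte Carlo estimate of the n-dimensional integral of f(mean(x))
    # over the unit cube, using 50,000 uniform draws per n.
    means = rng.uniform(size=(50_000, n)).mean(axis=1)
    estimate = f(means).mean()
    exact = 0.25 + 1 / (12 * n)  # closed form for f(x) = x**2
    assert abs(estimate - exact) < 0.01
```

The estimates approach $f(1/2) = 0.25$ at rate $1/n$, consistent with the Chebyshev bound above.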
34,846
Reference request: Storks bring babies
The original reference is, as far as I can tell, this one (following a citation in Kronmal 1993, a very under-read paper imho): Neyman, J. (1952) Lectures and Conferences on Mathematical Statistics and Probability, 2nd edn, pp. 143-154. Washington DC: US Department of Agriculture. The stork and baby data is described and analyzed starting on page 143. Despite (or perhaps because) Neyman introduces them with "Once upon a time an inquisitive friend of mine decided to study the question empirically", these data are clearly fictional. The storks and babies are followed by a railroads example, whose analytical results are apparently real, but whose raw data was reconstructed to show how the same fallacy might have been at work, "Miss Evelyn Fix was kind enough to prepare Table IV indicating what might have been the raw data [...]" On the other citations: the data from Matthews is from about 50 years after but does seem to have the same structure as Neyman's. It is (I presume) real, and seems to have been collected independently. I cannot find a searchable version of Yule, so despite a personal weakness for hunting through old statistics textbooks, I have not found the time to search it. Perhaps a bird will bring the reference to us.
34,847
Reference request: Storks bring babies
This (http://www.nieuwarchief.nl/serie5/pdf/naw5-2010-11-2-134.pdf) Dutch magazine article mentions G.E.P. Box, W.G. Hunter and J.S. Hunter (1978), Statistics for Experimenters: An Introduction to Design, Data Analysis, and Model Building, New York: John Wiley, p. 8 as the first example. Box et al. apparently use a data set from Oldenburg, Germany from the thirties (which is also analyzed in the magazine article).
34,848
The explosive AR(1) process with $\varphi>1$, where was this first represented as a stationary, but non-causal, time-series?
The question suggests some basic confusion between the equation and the solution.

The Equation

Let ${\varphi} > 1$. Consider the following (infinite) system of equations---one equation for each $t\in \mathbb{Z}$: $$ X_{t}=\varphi X_{t-1}+e_{t}, \mbox{ where } e_t \sim WN(0,\sigma), \;\; t \in \mathbb{Z}. \quad (*) $$

Definition. Given $e_t \sim WN(0,\sigma)$, a sequence of random variables $\{ X_t \}_{t\in \mathbb{Z}}$ is said to be a solution of $(*)$ if, for each $t$, $$ X_{t}=\varphi X_{t-1}+e_{t}, $$ with probability 1.

The Solution

Define $$ X_t= - \sum_{k=1}^\infty {\varphi}^{-k}e_{t+k}, $$ for each $t$.

1. $X_t$ is well-defined: the sequence of partial sums $$ X_{t,m} = - \sum_{k=1}^m {\varphi}^{-k}e_{t+k}, \;\; m \geq 1 $$ is a Cauchy sequence in the Hilbert space $L^2$, and therefore converges in $L^2$. $L^2$ convergence implies convergence in probability (although not necessarily almost surely). By definition, for each $t$, $X_t$ is the $L^2$/probability-limit of $(X_{t,m})$ as $m \rightarrow \infty$.
2. $\{ X_t \}$ is, trivially, weakly stationary. (Any MA$(\infty)$ series with absolutely summable coefficients is weakly stationary.)
3. $\{ X_t \}_{t\in \mathbb{Z}}$ is a solution of $(*)$, as can be verified directly by substitution into $(*)$.

This is a special case of how one would obtain a solution to an ARMA model: first guess/derive an MA$(\infty)$ expression, show that it is well-defined, then verify it's an actual solution.

...But the $\epsilon_t$ is not independent from $X_{t}$...

This impression perhaps results from confusing the equation and the solution. Consider the actual solution: $$ \varphi X_{t-1} + e_t = \varphi \cdot \left( - \sum_{k=1}^\infty {\varphi}^{-k}e_{t+k-1} \right) + e_t; $$ the right-hand side is exactly $- \sum_{k=1}^\infty {\varphi}^{-k}e_{t+k}$, which is $X_t$ (we just verified Point #3 above). Notice how $e_t$ cancels and actually doesn't show up in $X_t$.

...where this...derivation originally comes from...
I believe Mann and Wald (1943) already considered the non-causal AR(1) case, among other examples. Perhaps one can find references even earlier. Certainly by the time of Box and Jenkins this is well known.

Further Comment

The non-causal solution is typically excluded from the stationary AR(1) model because:

1. It is un-physical.

2. Assume that $(e_t)$ is, say, Gaussian white noise. Then, for every non-causal solution, there exists a causal solution that is observationally equivalent, i.e. the two solutions would be equal as probability measures on $\mathbb{R}^{\mathbb{Z}}$. In other words, a stationary AR(1) model that includes both causal and non-causal cases is un-identified. Even if the non-causal solution is physical, one cannot distinguish it from a causal counterpart from data. For example, if the innovation variance is $\sigma^2 = 1$, then the causal counterpart is the causal solution to the AR(1) equation with coefficient $\frac{1}{\varphi}$ and innovation variance $\sigma^2 = \frac{1}{\varphi^2}$.
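Point #3 (that the non-causal MA$(\infty)$ series really solves the AR(1) equation) can also be checked numerically. Below is a hedged Python sketch, truncating the infinite sum at a finite horizon; the values $\varphi = 1.5$ and the horizon K are my choices, not from the answer:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, T, K = 1.5, 200, 60            # K = truncation horizon for the infinite sum
e = rng.normal(size=T + K + 1)      # white noise e_0, ..., e_{T+K}

# Non-causal solution: X_t = -sum_{k>=1} phi**-k * e_{t+k}, truncated at K terms
weights = -phi ** -np.arange(1, K + 1)
X = np.array([weights @ e[t + 1 : t + K + 1] for t in range(T)])

# Verify the AR(1) equation X_t = phi * X_{t-1} + e_t for t = 1, ..., T-1
resid = X[1:] - (phi * X[:-1] + e[1:T])
print(np.max(np.abs(resid)))        # tiny: only the truncation error remains
```

The residual is exactly $-\varphi^{-K} e_{t+K}$, so it shrinks geometrically as the truncation horizon K grows, which is the numerical counterpart of the $L^2$ convergence argument above.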
34,849
The explosive AR(1) process with $\varphi>1$, where was this first represented as a stationary, but non-causal, time-series?
Re-arranging your first equation and increasing the index by one gives the "reverse" AR(1) form: $$X_{t} = \frac{1}{\varphi} X_{t+1} - \frac{e_{t+1}}{\varphi}.$$ Suppose you now define the observable values using the filter: $$X_t = - \sum_{k=1}^\infty \frac{e_{t+k}}{\varphi^k}.$$ You can confirm by substitution that both the original AR(1) form and the reversed form hold in this case. As pointed out in the excellent answer by Michael, this means that the model is not identified unless we exclude this solution by definition.
34,850
The explosive AR(1) process with $\varphi>1$, where was this first represented as a stationary, but non-causal, time-series?
    ... the AR(1) process (with $e_t$ white noise): $$X_{t}=\varphi X_{t-1}+e_{t} \qquad , e_t \sim WN(0,\sigma)$$ is a stationary process if $\varphi>1$ because ...

This seems impossible to me: as shown at https://en.wikipedia.org/wiki/Autoregressive_model#Example:_An_AR(1)_process, wide-sense stationarity requires $-1 < \varphi < 1$. Moreover, maybe I am missing something here, but it seems to me that not only can the process above not be stationary, it is entirely impossible and/or badly defined. This is because in an autoregressive process we are not in a situation like $Y=\theta Z+u$, where $Z$ and $u$ can be two unrestricted random variables and $\theta$ an unrestricted parameter. In a regression, the residuals and parameters are not free terms: given the dependent and independent variable(s), they are determined too. So, in the AR(1) case it is possible to show that $-1 \leq \varphi \leq 1$ must hold, just like an autocorrelation. Moreover, if we assume that the residuals $e_t$ are a white noise process, we place a restriction on the $X_t$ process too. If we estimate an AR(1) on data and the $e_t$ turn out to be autocorrelated, the assumption/restriction does not hold and AR(1) is not a good specification.
34,851
Consistent estimator - consistent with what exactly?
Neither. An estimator is consistent for some parameter, so in this case the answer is: Yes, $\hat\gamma_2$ is consistent for $\beta_2$. No, $\hat\gamma_2$ is not consistent for $\gamma_2$ (or for $\beta_0$ or lots of other things). In this case, the causal assumptions suggest you'd be more interested in whether it was consistent for $\gamma_2$, but you still need to say "consistent for $\gamma_2$", not just "consistent". The same is true for 'biased' and 'unbiased': an estimator is biased or unbiased for a parameter. Sometimes there is genuinely only one interesting limit, and it's a reasonable abuse of notation to leave it implied, but a claim of consistency does require specifying the limit.
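To make "consistent for" concrete with a toy example of my own (not from the question's regression setup): the sample mean of Exponential($\lambda$) draws is consistent for the mean $1/\lambda$, and therefore not consistent for $\lambda$ itself:

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0                                   # rate; the distribution's mean is 1/lam = 0.5

for n in [100, 10_000, 1_000_000]:
    xbar = rng.exponential(scale=1 / lam, size=n).mean()
    print(n, xbar)                          # approaches 0.5 (= 1/lam), not 2.0 (= lam)
```

The same estimator is "consistent" or "not consistent" depending entirely on which target you name, which is the point of the answer above.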
34,852
Does every statistic have a sampling distribution, not just the sample mean?
Yes, every statistic has a sampling distribution (though some may be degenerate). What would they be like? The sampling distribution of a statistic - just as with the mean - will in general depend on the population distribution you start with (and the sample size, naturally). As an example, in a random sample from a normal distribution, the sample variance is a multiple of a chi-squared random variable and so the sample s.d. is a multiple of a chi random variable. Below is a histogram of the sample standard deviations from 10000 samples of size 10 from a normal distribution, together with the true sampling distribution (a scaled chi, red curve): (histogram omitted) If you don't start with a normal population, the distribution of the sample s.d. is something else. E.g. here's the sample sd for 10000 samples of size 10 from a uniform distribution: (histogram omitted) As we see, this one is mildly left skew rather than mildly right skew (I didn't calculate its theoretical distribution). Note also that a sample proportion is a form of mean (label the in-category observations with a 1 and the out-of-category observations with a 0, and the sample mean is the sample proportion you started with). If the probability of being in the group is constant and the observations are independent, it will have a discrete sampling distribution: a scaled binomial. Many statistics are asymptotically normal under fairly mild conditions, but many are not (e.g. consider sample maxima, for one). Sampling distributions of various statistics come up in a number of situations. As an example, sampling distributions are important in hypothesis testing.
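The first experiment is easy to reproduce numerically. A Python sketch ($\sigma = 1$ and the seed are my choices): for samples of size 10 from a standard normal, the sample s.d. averages below $\sigma$ (the exact factor is $c_4 \approx 0.9727$ for $n = 10$), and its distribution is mildly right skew, matching the scaled-chi shape described above:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 10_000
samples = rng.normal(size=(reps, n))
s = samples.std(axis=1, ddof=1)     # 10,000 draws from the sampling distribution of s

print(s.mean())                     # near c4 ~ 0.9727, i.e. E[s] < sigma = 1

# Sample skewness of the s values: positive means mild right skew
skew = ((s - s.mean()) ** 3).mean() / s.std() ** 3
print(skew)
```

Swapping `rng.normal` for `rng.uniform` would reproduce the answer's second experiment, where the skew flips sign.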
34,853
Does every statistic have a sampling distribution, not just the sample mean?
Yes: since every statistic is a function of your sample (which consists of random variables), it will have a distribution. It might not be as easy to deduce as the distribution of the sample mean.
34,854
Conditional Probability of Having Covid-19 Given Some Symptoms
You are asking about conditional independence: $S_1 \; {\rm INDEP} \; S_2 \mid C19$. The way you write the joint probability, as a product over the probabilities of each feature - that model assumes conditional independence. You can check whether this assumption holds by comparing the joint distributions of pairs of input variables, for each of the possible outcomes of $C19$.
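The comparison suggested here can be sketched with simulated data. In this hedged Python example (all rates are made up), the two symptoms are generated conditionally independently given C19 status, so within each outcome group the joint probability factors into the product of the marginals, up to sampling error:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 200_000
c19 = rng.random(N) < 0.05                       # made-up prevalence of C19
# Draw the two symptoms independently *given* C19 status (made-up rates)
s1 = rng.random(N) < np.where(c19, 0.80, 0.05)   # e.g. fever
s2 = rng.random(N) < np.where(c19, 0.60, 0.10)   # e.g. cough

gaps = {}
for label, mask in [("C19", c19), ("no C19", ~c19)]:
    joint = (s1[mask] & s2[mask]).mean()
    product = s1[mask].mean() * s2[mask].mean()
    gaps[label] = joint - product
    print(label, round(joint, 4), round(product, 4))  # nearly equal: S1 indep S2 | C19
```

On real data, a large gap between `joint` and `product` in either group would signal that the factorized (naive-Bayes-style) model is misspecified.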
34,855
Conditional Probability of Having Covid-19 Given Some Symptoms
Many clinics and medical offices scan for temperature when you come in the door as a way to detect people who might have C-19, in order to take special precautions if necessary. I have no idea what the actual probabilities are, but here is how Bayes' Theorem would be used in order to find P(C-19 | Fever), denoted $P(C|F)$ below. $$P(C|F) = \frac{P(CF)}{P(F)} = \frac{P(C)P(F|C)}{P(CF)+P(C^cF)}\\ =\frac{P(C)P(F|C)}{P(C)P(F|C)+P(C^c)P(F|C^c)}.$$ So in order to find $P(C|F),$ you need to know all of the probabilities in the last expression. Right now, where I live, even knowing the prevalence $P(C)$ seems difficult. And if $P(F|C^c)$ gets too large (that is, lots of people have fever for reasons unrelated to Covid-19), then the temperature scans as people come in the door become useless as a quick screen for Covid-19. However, if you had information to be sure that $P(C|F) > P(C),$ then you'd know temperature scans are useful, and maybe you can get an even larger probability of $C$ given a longer list of symptoms.
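Plugging made-up numbers into the final expression (every probability below is an assumption for illustration, not real data):

```python
# Made-up inputs for illustration: P(C), P(F|C), P(F|C^c)
p_c = 0.01       # assumed prevalence of Covid-19
p_f_c = 0.85     # assumed P(fever | Covid-19)
p_f_nc = 0.05    # assumed P(fever | no Covid-19): fevers from other causes

# Bayes' theorem, exactly as in the final expression above
p_c_f = p_c * p_f_c / (p_c * p_f_c + (1 - p_c) * p_f_nc)
print(round(p_c_f, 4))  # ~0.1466: well above P(C) = 0.01, so the scan is informative
```

With these assumed rates, $P(C|F) > P(C)$, so under the answer's criterion the temperature scan would be a useful screen; pushing `p_f_nc` toward `p_f_c` drives the posterior back down toward the prior.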
34,856
Conditional Probability of Having Covid-19 Given Some Symptoms
Here's the way I would do it (check it's correct!). If you have more than one symptom you apply Bayes' theorem recursively, using the previous result as the new prior probability. Here's an example (works in R) - I made up some starting values without too much thinking about them:

    p_fever_C19   = 0.83   # P of fever given you have C19
    p_fever_noC19 = 0.01   # P of fever given you DO NOT have C19
    p_C19         = 0.0001 # P of C19 without any information about symptoms
                           # (i.e. P of C19 in the general population)

If you have fever, your probability of having C19 is:

    p_C19_given_fever = (p_C19 * p_fever_C19) /
        ((p_C19 * p_fever_C19) + ((1 - p_C19) * p_fever_noC19))

Let's say now you also have cough; instead of p_C19 use the previous p_C19_given_fever:

    p_cough_C19   = 0.82
    p_cough_noC19 = 0.1
    p_C19_given_fever_cough = (p_C19_given_fever * p_cough_C19) /
        ((p_C19_given_fever * p_cough_C19) + ((1 - p_C19_given_fever) * p_cough_noC19))

You also have shortness of breath:

    p_short_C19   = 0.31
    p_short_noC19 = 0.2
    p_C19_given_fever_cough_short = (p_C19_given_fever_cough * p_short_C19) /
        ((p_C19_given_fever_cough * p_short_C19) + ((1 - p_C19_given_fever_cough) * p_short_noC19))

... and so on for other symptoms. For this example the results are:

    p_C19_given_fever             0.008232494
    p_C19_given_fever_cough       0.06372898
    p_C19_given_fever_cough_short 0.09543484
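The same recursive update can be wrapped in a small function; this Python sketch reuses the answer's made-up numbers and reproduces its three results:

```python
def bayes_update(prior, p_sym_c19, p_sym_no_c19):
    """One Bayes step: posterior P(C19 | symptom), reused as the next prior."""
    num = prior * p_sym_c19
    return num / (num + (1 - prior) * p_sym_no_c19)

p_c19 = 0.0001  # made-up prior P(C19) in the general population
posteriors = []
for p_sym_c19, p_sym_no_c19 in [(0.83, 0.01),   # fever
                                (0.82, 0.10),   # cough
                                (0.31, 0.20)]:  # shortness of breath
    p_c19 = bayes_update(p_c19, p_sym_c19, p_sym_no_c19)
    posteriors.append(p_c19)

print(posteriors)  # ~ [0.00823, 0.06373, 0.09543], matching the R output
```

Note this chaining is only valid if the symptoms are conditionally independent given C19 status, as discussed in another answer to this question.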
34,857
Do we want to move away from significance?
Warning: pessimistic/cynical post.

We do not want to move away from significance. That is a false premise that leads to your question.

    Recently I have found that many statisticians are speaking of moving away from significance. ... Since we want to move away from significance...

We do not want to move away from significance. Significance is important. It is an indicator that a data set is large/significant enough for some observed effect to be unlikely to be due to random noise. We still want experimenters to aim for experiments that will be significant. Insignificant experiments, those which likely reflect noise, are not very useful; the interpretation of the outcome is uncertain (is it a 'true' effect or is it noise?). Significance means that the experiment is able to give outcomes with relatively more certain interpretations (the outcome is likely not noise but instead some true falsification of the null hypothesis).

A problem with significance is the wrong focus of research.

What we want to move away from is the trend in science to perform and report experiments only for the sake of being significant. The problem with significance is that it can be fake. The expression of significance is only as good as the model that has been used to compute it. That means that, even though significance means that something is unlikely to occur under the hypothesis predicting no effect, a researcher can still readily find a significant result where there is no effect. As a result we now have a big noodle-soup of reports on research data with only tiny effects. If something is a big effect, then it is likely to have already been proven. But we now have an enormous army of eager (and pressurized) scientists trying to find something new, so they will focus on something (anything) small and, by doing a significant experiment, make it big.
A problem with significance is the methodology of expressing errors between experiments based only on the error occurring within an experiment.

The current experimental scientific 'world' is driven by these incentives to publish something significant (it doesn't matter what) rather than something meaningful. The problem is that, due to technological developments, we have been able to increase the scale of experimental work and do massive testing, allowing small effects to be made significantly visible. This places the focus on finding small differences in the parameters of population distributions (a resourceful niche for many researchers), while the individual people within those populations show much more variation and many more differences. We have a focus on the average, rather than the specific/individual, because differences between averages, no matter how small, can easily be made significant (in practice not always easy, but the principle is simple: just increase the quantity of testing).

For example: say we sample the height of 10 thousand male people in Paris and 10 thousand in Berlin. If we find approximate distribution means and standard deviations of $(\mu = 173.31 \,cm, \sigma = 5.29 \,cm)$ in Paris and $(\mu = 173.09 \,cm, \sigma = 5.74 \,cm)$ in Berlin, then a t-test might lead us to conclude that we found a significant effect and that male people are on average taller in Paris than in Berlin. But look at histograms of such (made up) samples: the distributions are much the same, and because of the large spread/variance we may consider the small difference in the means not so important (also, we should be careful in expressing the standard error, because the methodology may have a relatively large influence on small effects). It is only the large sampling that makes our estimate of the standard error very tiny, and as a consequence we get to conclude that there is a significant difference.
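The Paris/Berlin example can be checked directly from the summary statistics. A Python sketch (normal approximation of Welch's t-test, standard library only): with n = 10,000 per city, the 0.22 cm gap comes out "significant" even though the standardized effect size is tiny:

```python
import math

m1, s1, n1 = 173.31, 5.29, 10_000   # Paris: mean, sd, sample size
m2, s2, n2 = 173.09, 5.74, 10_000   # Berlin

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
z = (m1 - m2) / se                              # Welch statistic, ~N(0,1) at this n
p = math.erfc(abs(z) / math.sqrt(2))            # two-sided p-value
d = (m1 - m2) / math.sqrt((s1**2 + s2**2) / 2)  # Cohen's d (standardized effect)

print(z, p, d)  # z ~ 2.82, p ~ 0.005 ("significant"), yet d ~ 0.04: a tiny effect
```

The p-value rewards the huge sample size, not the size of the effect, which is exactly the distinction the answer is drawing.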
However, for such tiny differences we cannot really know whether the difference is to be ascribed to a true effect that causes people in Berlin to be different from people in Paris, or whether the difference is actually due to some systematic effect in our experiment (for instance, the sampling might be biased, with a different bias in Paris than in Berlin). The difference of $173.31-173.09 = 0.22$ might, for some given experiment, be statistically significant if you just sample sufficiently (increase your 'magnification' or 'research power'). But the difference between the populations is incredibly small, and this means that simplifying assumptions about the distributions are not negligible anymore. This is true even when you just wish to compare means (about which you may wonder whether it is really the most useful target, but hey, it is the thing that we can make significant). For comparing means, the sample means will approach a normal distribution, so assumptions about the underlying population distribution do not matter. However, when you get into these tiny effects, sampling and other systematic effects may become an issue. All models are wrong, but some are useful. Expressions about significance are estimates and typically wrong, but often not so bad and therefore still useful. They are not so bad because the assumption that sampling error dominates systematic error often works, and the latter can be neglected. Recently, however, these (previously) useful models for estimating errors have become less correct and less useful. This is because more and more research is able to zoom in on small effects occurring in populations with large variations. The small effects are being magnified by cranking up the sample size. But when we look at small effects and small sampling noise (due to large samples), the systematic error cannot be neglected anymore.

    Model selection: how will we go about selecting models?
If you measure tiny effects, and make them significant purely by increasing the sample size, then you are no longer certain that the determined effect is due to a discrepancy in the null model; it can also be due to the sampling procedure. (When a significance test rejects, we tend to say that the null hypothesis is falsified, but we should say that the null hypothesis plus the experiment is falsified. We do not normally say that, because for large enough effects we tend to ignore the systematic effects.) So significance is often determined based only on the variance/residuals in sample data (by estimating the spread of the measurements within our single experiment). But it is false to assume that this is a good estimate of the error of the outcome (especially when determining small effects). We should also estimate/guess/assume the variation of our instruments/methods from experiment to experiment. That is actually how I learned it in my high school physics classes. There was no mention of formulas to compute standard deviations and obtain experiment-based estimates of the error; instead we had to make sane, logical guesses about the error (e.g. when measuring some volume of water using volumetric glassware, we used a rule of thumb such as: the error is 1/10 of the smallest division of the scale). Significance is not really a tool for model selection. Significance is a tool for hypothesis testing and for verifying the (statistical) validity of conclusions that may stem from such a test (a conclusion should, with reasonable probability, not be due to random noise). With significance testing you often have a preference for the null hypothesis/model. The goal of the experiment is not model selection, but instead model rejection.
Significance testing is done to test whether the null hypothesis is correct (and often the test is constructed with an alternative hypothesis in mind, such that the test has a high probability/power to reject the null hypothesis if that specific alternative is true). In these kinds of trials you do get the situation that there might be multiple models against which the null hypothesis can be tested, and the idea might be to see which of these models makes the most sense. This resembles model selection a lot, and the two concepts can be performed in a mixed way, but from my point of view they should not be considered mixed. E.g. one may test multiple factors and see whether any of them has a significant effect. You could see this as model selection, seeing which factor is the best model... However, it is in principle more like performing multiple null hypothesis tests (each hypothesis being that a specific factor has no effect). Model selection is an optimization which can be worked out without significance (if you have an appropriate loss function). If you are doing some optimization, e.g. predicting, then bootstrapping might indeed be a good way to test not only the variance of the estimates, but also the bias.
Do we want to move away from significance?
This is too long to be a comment, but just to add to the excellent answer by Sextus: one issue with "significance" is the arbitrary nature of the significance level(s). Often these are dictated by whatever is common practice in a particular field. Also, when a researcher performs a test and finds a p-value of, say, 0.0499999 they may claim to have found a significant result, yet another researcher conducting the same study may find a p-value of, say, 0.0500001 and conclude that there is no significant effect. Perhaps they fitted identical models but the only difference between their datasets was that the "significant" one had one more observation. I would contend that, practically speaking, these results are identical. If you accept that these results are practically the same, then where would you draw the line as to where they are not practically the same? This is one reason why I avoid using significance levels in the first place. There is nothing wrong with p-values and using significance levels as part of an overall strategy for analysis, but to make claims, or to justify removing or retaining variables, solely on this basis is misguided to say the least.

Another problem which is related to this is that, often, researchers do not actually understand what a p-value is, especially in frequentist statistics, so it is common to see p-values interpreted as things like "the probability that the null hypothesis is true", "the probability that the results are due to chance", or even "the probability that the research hypothesis is wrong". So if someone has this kind of misunderstanding, then blindly following some "established convention" potentially compounds it.

Lastly, another reason I dislike significance level testing is that it allows the analyst not to think. They can simply follow the convention and make a conclusion. This could be quite dangerous.

So in summary, I don't think we should move away from significance, but we should not use it as a dogma.
Which pdf to choose for the prior of an angle?
You may want to consider the von Mises distribution, aka the Tikhonov distribution, which plays a role similar to that of the normal distribution in 1D statistics: $$ p(\theta ; \alpha, \theta_0 ) = \frac{ e^{\alpha \cos (\theta -\theta_0)}} {2 \pi I_0(\alpha)} $$ For $\alpha=0$ it is uniform; for $\alpha \gg 1$ the distribution is sharply peaked at $\theta_0$. Cf. this answer by StasK.
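As a quick illustrative sketch (Python, my addition rather than part of the answer), the density above can be evaluated directly with the modified Bessel function $I_0$, and the two regimes mentioned fall out immediately:

```python
import numpy as np
from scipy.special import i0  # modified Bessel function of the first kind, order 0

def von_mises_pdf(theta, alpha, theta0=0.0):
    """p(theta; alpha, theta0) = exp(alpha * cos(theta - theta0)) / (2 * pi * I0(alpha))."""
    return np.exp(alpha * np.cos(theta - theta0)) / (2.0 * np.pi * i0(alpha))

theta = np.linspace(-np.pi, np.pi, 2001)

flat   = von_mises_pdf(theta, alpha=0.0)   # alpha = 0: constant 1/(2*pi), i.e. uniform
peaked = von_mises_pdf(theta, alpha=20.0)  # alpha >> 1: sharply peaked at theta0 = 0

dtheta = theta[1] - theta[0]
print(flat[0] * 2.0 * np.pi)              # ~ 1.0: the alpha = 0 density is uniform
print(float((peaked * dtheta).sum()))     # ~ 1.0: normalised over one full turn
```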
Which pdf to choose for the prior of an angle?
The most obvious thing to do here would be to express the variable in polar coordinates and impose a prior on the angle and displacement. That is, you express your point $(x,y)$ as a vector $(\theta,r)$ where: $$\begin{aligned} x &= r \cos \theta, \\[4pt] y &= r \sin \theta. \\[4pt] \end{aligned}$$ You can then impose a prior on $0 \leqslant \theta < 2 \pi$ and $r \geqslant 0$, and this will create an implicit distribution on the vector $(x,y)$. In the absence of information about the angle, you could use the non-informative uniform prior $\theta \sim \text{U}(0, 2 \pi)$ for the angle, and then choose an appropriate prior for the displacement $r$.
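To make the construction concrete, here is a small Python sketch (my addition; the Exponential(1) prior on the displacement is a placeholder choice, not something the answer prescribes). Sampling $\theta$ uniformly and $r$ from the displacement prior induces the implicit prior on $(x, y)$, which is rotation-invariant by construction:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

theta = rng.uniform(0.0, 2.0 * np.pi, size=n)  # theta ~ U(0, 2*pi), non-informative
r = rng.exponential(scale=1.0, size=n)         # r >= 0; placeholder Exponential(1) prior

# implicit prior on the point (x, y)
x = r * np.cos(theta)
y = r * np.sin(theta)

# rotation invariance: both coordinates centre on zero,
# and the displacement is recovered as sqrt(x^2 + y^2)
print(round(float(x.mean()), 2), round(float(y.mean()), 2))  # both ~ 0.0
print(np.allclose(np.hypot(x, y), r))                        # True
```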
Which pdf to choose for the prior of an angle?
If you have a prior for the angle, I'd use it as the reference. E.g. I'd rotate all data so that the prior is at $180^{\circ}$ and measure all angles on the scale $[0^{\circ}, 360^{\circ})$.

I see no elegant solution to measuring the distance between two angles, $\phi$ and $\psi$. I'd calculate the differences $(\phi - \psi)$ and $(((\phi + 180^{\circ}) \mod 360^{\circ}) - ((\psi + 180^{\circ}) \mod 360^{\circ}))$ and take the one with the smaller absolute value.

Regarding the distribution to use: if the variance is sufficiently small, I see no reason not to use a Gaussian. If the small probabilities at the tails disturb you, you can try a beta distribution (properly scaled, so that it covers your angle range), with $\alpha = \beta \ge 1$. For $\alpha = \beta = 1$ you'd get the uniform distribution.

However, if your variance is large, so that the process which generates the angles can generate values $> 360^{\circ}$ and $< 0^{\circ}$ with a non-negligible probability, then you are in trouble. You can wrap the Gaussian distribution so that its support is $[0^{\circ}, 360^{\circ})$, but its PDF is not too handy (having an infinite sum of $\cosh$'s as a term, if I did my algebra correctly). The curve resembles a raised Gaussian, but mathematically it is different.
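The two-candidate rule for the angular difference can be written down in a few lines. This Python sketch is my illustration of it (the helper name is made up): compute both candidate differences and keep whichever is smaller in magnitude.

```python
def angle_diff(phi, psi):
    """Difference between two angles in degrees via the two-candidate rule:
    take (phi - psi) and the difference after shifting both angles by 180
    degrees modulo 360, then keep whichever has the smaller absolute value."""
    d1 = phi - psi
    d2 = ((phi + 180.0) % 360.0) - ((psi + 180.0) % 360.0)
    return d1 if abs(d1) <= abs(d2) else d2

print(angle_diff(350.0, 10.0))  # -20.0, not 340.0: the short way round the circle
print(angle_diff(10.0, 350.0))  # 20.0
```

For angles measured on $[0^{\circ}, 360^{\circ})$ this always returns the smaller-in-magnitude of the two possible differences, matching the usual wrap-to-$[-180^{\circ}, 180^{\circ}]$ convention in absolute value.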
Smooth bivariate interaction decomposition in GAM models
Q1

There are a couple of key practical situations where the decomposed model using separate marginal smooths plus a special tensor-product interaction, or ti(), smooth is a useful approach to take.

The first situation is one where testing for the presence of an interaction is important to the problem being addressed with the GAM. If we fit the bivariate smooth model $$y_i \sim \mathcal{D}(\mu_i, \theta)$$ $$g(\mu_i) = \beta_0 + f(x_i, z_i)$$ representing $f(x_i, z_i)$ via a tensor product smooth using te() in mgcv then, despite estimating the bivariate smooth interaction model we wanted, we have no model term that explicitly represents the interaction that we want to test. One might assume that we can use a generalized likelihood ratio test to compare the model above with the simpler model $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i)$$ but one has to take great care to ensure that these two models are strictly nested for that GLRT approach to be valid.

To get the correct nesting we can decompose the full bivariate tensor product smooth into separate marginal smooths plus a tensor product smooth whose basis expansion has had the main effects of the separate marginal smooths removed from it. Our model then becomes $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i) + f_3^{*}(x_i, z_i)$$ where the superscript $*$ is just there to remind us that $f_3^{*}(x_i, z_i)$ uses a different basis expansion to that of $f(x_i, z_i)$ from the original model. Now you could compare this decomposed model with the simpler model with two univariate smooths and the proper nesting required for the GLRT, or one could use the Wald-like tests of the null hypothesis that $f_3^{*}() = 0$.

A practical example of when this might be needed in my own work is modelling monthly or daily air temperature time series.
These data have a seasonal component plus a long-term trend, and it is of scientific interest to ask whether the air temperatures are simply increasing, or whether, in addition to increasing over time, the seasonal distribution of temperature is changing as temperatures increase with the long-term trend. The first option would amount to fitting the simpler univariate smooth model and focusing inference on the smooth of year: $$g(\mu_i) = \beta_0 + f_1(\text{year}_i) + f_2(\text{month}_i),$$ whilst the second option would imply that $f_3^{*}(\text{year}_i,\text{month}_i)$ in $$g(\mu_i) = \beta_0 + f_1(\text{year}_i) + f_2(\text{month}_i) + f_3^{*}(\text{year}_i,\text{month}_i)$$ is non-zero.

Another situation where the tensor product interaction smooth is useful is when you wish to include several smooth effects in your model, and only some of them are required to interact or be part of bivariate relationships. Say we have four covariates that we will indicate by $x_i$, $z_i$, $v_i$ and $w_i$. Further assume we want to fit a model where $\mathbf{x}$ interacts with $\mathbf{z}$ and $\mathbf{v}$ whilst $\mathbf{z}$ interacts with $\mathbf{v}$ and $\mathbf{w}$. We could fit the following model $$g(\mu_i) = \beta_0 + f_1(x_i, z_i) + f_2(x_i, v_i) + f_3(z_i, v_i) + f_4(z_i, w_i)$$ but we should note that the main effects of all of the covariates except that of $\mathbf{w}$ occur in the bases of two or more smooth functions. Despite the identifiability constraints that mgcv imposes, it is likely that problems will occur in fitting the model with these overlapping terms. We can largely avoid these issues through the use of the tensor product interaction smooths and the decomposed parameterisation of the model: $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i) + f_3(v_i) + f_4(w_i) + f_5^{*}(x_i, z_i) + f_6^{*}(x_i, v_i) + f_7^{*}(z_i, v_i) + f_8^{*}(z_i, w_i)$$ where the main effects of each covariate are now separated out from the interaction smooths.
How one approaches the modelling of a specific data set will depend on the specifics of the problem being tackled. Even if you are not explicitly interested in testing the interaction effect, once you start estimating multiple bivariate (or higher) smooth functions in moderately complex settings, the decomposed version of the model can become necessary to avoid computational issues. Note that the decomposed model is not a free lunch; the te(x,z) smooth would require the selection of two smoothness parameters, whereas the decomposed version s(x) + s(z) + ti(x,z) would require the selection of four smoothness parameters, one each for the two marginal smooths plus two for the tensor product. Hence, if you don't need the decomposition, you can fit a simpler model by using the full tensor product rather than the decomposed parameterization.

Q2

The interpretation doesn't appear to be quite so simple as $f_1(x_i)$ being the smooth effect of $\mathbf{x}$ averaged over the values of $\mathbf{z}$ in the decomposed model. The figure below shows estimates of $f_1(x_i)$ where the true relationship with $\mathbf{y}$ involves a bivariate effect of $\mathbf{x}$ and $\mathbf{z}$. The models are

m1 <- gam(y ~ s(x), data = df, method = 'REML')
m2 <- gam(y ~ s(x) + s(z), data = df, method = 'REML')
m3 <- gam(y ~ s(x) + s(z) + ti(x,z), data = df, method = 'REML')

(full code is shown below). Here $f_1(x_i)$ is s(x) in all three models. In the first we ignore the effect of z, in the second we include the additive effect of z but not the interaction, while in the third model we include the main effects and interaction of x and z on y. If $f_1(x_i)$ were simply the smooth effect of x averaged over z then we would expect all three estimated functions to be effectively the same. We do see that the smooth functions of x in models 1 and 2 are effectively equivalent, but the smooth of x in model 3 is noticeably smoother than those of the other two models.
The differences between the three estimated functions are quite small indeed, and almost indistinguishable when one plots them with the data. In hindsight we might have expected $f_1(x_i)$ to be smoother in model 3 than in either of models 1 and 2, because some of the variation in x is accounted for by the variation in z and the bivariate smooth relationship between the two variables.

Another way to think about this is to consider what the average function of x, $\bar{f}_1(x_i)$, might be over all possible values for $z_i$. My maths skills aren't good enough to work this out analytically, but some manipulation of the m3 model fitted above and some averaging suggests that $f_1(x_i)$ is close to, but not the same as, $\bar{f}_1(x_i)$ (estimated from fits to data below). What I'm showing here is:

- the rainbow-coloured lines show a random sample of 10 (out of 250 in the example) slices through the estimated smooth surface, each of which is a smooth function of x by construction,
- the solid red line is the average over z of the fitted values of the smooth for each unique value of x. In other words it is the average of the rainbow-coloured lines (slices through the surface) at each unique point on x (but for all 250 of them, not just the 10 shown), and
- the blue line shows the estimated function $\hat{f}_1(x_i)$ from m3, the decomposed model (in other words the "main effect" smooth for x).

The differences at the ends of the range of x are on the order of 0.01 to 0.03, which while not large aren't zero and don't seem to be converging towards 0 as I increased the number of locations in $z$ that I averaged the surface over.

Q3

How do we interpret the tensor product interaction smooth $f_3^{*}(x,z)$? I think the simplest way to think about the interaction smooth is as the smooth deformations you need to apply to $f_1(x)$ as you vary $z$, or conversely to $f_2(z)$ as you vary $x$. The plots below attempt to illustrate this.
The first panel shows the additive ("main") effects $f_1(x) + f_2(z)$. The second panel shows the deformations you would apply to that surface to yield the final panel, which is the full estimated value of $y$ given the additive effects of all three smooth functions. It is worth noting that in this example, if we estimated the bivariate effect using te() we get subtly different model fits, and this is most likely due to the extra smoothness parameters in the decomposed model compared with the simpler te() model.

Code

library('mgcv')
library('dplyr')
library('gratia')
library('ggplot2')
library('tidyr')
library('cowplot')

set.seed(1)
sim <- gamSim(eg = 2, n = 1000)
df <- sim$data

m1 <- gam(y ~ s(x), data = df, method = 'REML')
m2 <- gam(y ~ s(x) + s(z), data = df, method = 'REML')
m3 <- gam(y ~ s(x) + s(z) + ti(x,z), data = df, method = 'REML')

effs <- bind_rows(evaluate_smooth(m1, smooth = 's(x)'),
                  evaluate_smooth(m2, smooth = 's(x)'),
                  evaluate_smooth(m3, smooth = 's(x)'))
effs <- mutate(effs, model = rep(paste0('m', 1:3), each = 100),
               upper = est + (2 * se), lower = est - (2 * se))

## first figure in Q2
ggplot(effs, aes(x = x, y = est, colour = model)) +
    geom_ribbon(aes(x = x, ymin = lower, ymax = upper, fill = model),
                alpha = 0.1, inherit.aes = FALSE) +
    geom_line(size = 1)

## second figure in Q2
newd <- crossing(x = seq(0, 1, length = 250), z = seq(0, 1, length = 250))
newd <- mutate(newd, fit = predict(m3, newd))
avg_fx <- newd %>% group_by(x) %>% summarise(avg = mean(fit))
fx <- mutate(evaluate_smooth(m3, 's(x)'), est = est + coef(m3)[1L])

set.seed(1)
ggplot(filter(newd, z %in% sample(unique(z), 10)),
       aes(x = x, y = fit, colour = factor(z))) +
    geom_line() +
    guides(colour = FALSE) +
    geom_line(data = avg_fx, mapping = aes(x = x, y = avg),
              colour = "red", size = 1) +
    geom_line(data = fx, mapping = aes(x = x, y = est),
              colour = "blue", size = 1, lty = 'dashed')

## Q3
newd <- crossing(x = seq(0,1, length = 100), z = seq(0,1, length = 100))
newd <- mutate(newd,
               main_eff = predict(m3, newd, exclude = 'ti(x,z)'),
               interact = predict(m3, newd, terms = 'ti(x,z)'),
               both = predict(m3, newd))

p1 <- ggplot(newd, aes(x = x, y = z, fill = main_eff)) +
    geom_raster() +
    geom_contour(aes(z = main_eff)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Main effects", subtitle = "s(x) + s(z)", fill = NULL)
p2 <- ggplot(newd, aes(x = x, y = z, fill = interact)) +
    geom_raster() +
    geom_contour(aes(z = interact)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Interaction effect", subtitle = "ti(x,z)", fill = NULL)
p3 <- ggplot(newd, aes(x = x, y = z, fill = both)) +
    geom_raster() +
    geom_contour(aes(z = both)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Main effects + interaction",
         subtitle = "s(x) + s(z) + ti(x,z)", fill = NULL)

plot_grid(p1, p2, p3, ncol = 3, align = 'hv', axis = 'tb')
Smooth bivariate interaction decomposition in GAM models
Q1 There are a couple of key practical situations where the decomposed model using separate marginal smooths plus a special tensor-product interaction, or ti(), smooth is a useful approach to take. Th
Smooth bivariate interaction decomposition in GAM models Q1 There are a couple of key practical situations where the decomposed model using separate marginal smooths plus a special tensor-product interaction, or ti(), smooth is a useful approach to take. The first situation is one where testing for the presence of an interaction is important to the problem being addressed with the GAM. If we fit the bivariate smooth model $$y_i \sim \mathcal{D}(\mu_i, \theta)$$ $$g(\mu_i) = \beta_0 + f(x_i, z_i)$$ representing $f(x_, z_i)$ via a tensor product smooth using te() in mgcv then, despite estimating the bivariate smooth interaction model we wanted, we have no model term that explicitly represents the interaction that we want to test. One might assume that we can use a generalized likelihood ratio test to compare the model above with the simpler model $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i)$$ but one has to take great care to ensure that these two models are strictly nested for that GLRT approach to be valid. To get the correct nesting we can decompose the full bivariate smooth tensor product smooth into separate marginal smooths plus a tensor product smooth whose basis expansion has had the main effects of the separate marginal smooths removed from it. Our model then becomes $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i) + f_3^{*}(x_i, z_i)$$ where the superscript $*$ is just there to remind us that $f_3^{*}(x_i, z_i)$ uses a different basis expansion to that of $f(x_i, z_i)$ from the original model. Now you could compare this decomposed model with the simpler model with two univariate smooths and the proper nesting required for the GLRT, or one could use the Wald-like tests of the null hypothesis that $f_3() = 0$. A practical example of when this might be needed in my own work is modelling monthly or daily air temperature time series. 
These data have a seasonal component plus a long-term trend and it is of scientific interest to ask whether the air temperatures are simply increasing, or, in addition to increasing over time, whether the seasonal distribution of temperature is changing as temperatures increase with the long-term trend. The first option would amount to fitting the simpler univariate smooth model and focusing inference on the smooth of year: $$g(\mu_i) = \beta_0 + f_1(\text{year}_i) + f_2(\text{month}_i),$$ whilst the second option would imply that $f_3^{*}(\text{year}_i,\text{month}_i)$ in $$g(\mu_i) = \beta_0 + f_1(\text{year}_i) + f_2(\text{month}_i) + f_3^{*}(\text{year}_i,\text{month}_i)$$ is non-zero. Another situation where the tensor product interaction smooth is useful is when you wish to include several smooth effects in your model, and only some of them are required to interact or be part of bivariate relationships. Say we have four covariates that we will indicate by $x_i$, $z_i$, $v_i$ and $w_i$. Further assume we want to fit a model where $\mathbf{x}$ interacts with $\mathbf{z}$ and $\mathbf{v}$ whilst $\mathbf{z}$ interacts with $\mathbf{v}$ and $\mathbf{w}$; we could fit the following model $$g(\mu_i) = \beta_0 + f_1(x_i, z_i) + f_2(x_i, v_i) + f_3(z_i, v_i) + f_4(z_i, w_i)$$ but we should note that the main effects of all of the covariates except that of $\mathbf{w}$ occur in the bases of two or more smooth functions. Despite the identifiability constraints that mgcv imposes, it is likely that problems will occur in fitting the model with these overlapping terms. We can largely avoid these issues through the use of the tensor product interaction smooths and the decomposed parameterisation of the model: $$g(\mu_i) = \beta_0 + f_1(x_i) + f_2(z_i) + f_3(v_i) + f_4(w_i) + f_5^{*}(x_i, z_i) + f_6^{*}(x_i, v_i) + f_7^{*}(z_i, v_i) + f_8^{*}(z_i, w_i)$$ where the main effects of each covariate are now separated out from the interaction smooths. 
How one approaches the modelling of a specific data set will depend on the specifics of the problem being tackled. Even if you are not explicitly interested in testing the interaction effect, once you start estimating multiple bivariate (or higher) smooth functions in moderately complex settings, the decomposed version of the model can become necessary to avoid computational issues. Note that the decomposed model is not a free lunch; the te(x,z) smooth would require the selection of two smoothness parameters, whereas the decomposed version s(x) + s(z) + ti(x,z) would require the selection of four smoothness parameters, one each for the two marginal smooths plus two for the tensor product. Hence, if you don't need the decomposition, you can fit a simpler model by using the full tensor product rather than the decomposed parameterization.

Q2 The interpretation doesn't appear to be quite so simple as $f_1(x_i)$ being the smooth effect of $\mathbf{x}$ averaged over the values of $\mathbf{z}$ in the decomposed model. The figure below shows estimates of $f_1(x_i)$ where the true relationship with $\mathbf{y}$ involves a bivariate effect of $\mathbf{x}$ and $\mathbf{z}$. The models are

m1 <- gam(y ~ s(x), data = df, method = 'REML')
m2 <- gam(y ~ s(x) + s(z), data = df, method = 'REML')
m3 <- gam(y ~ s(x) + s(z) + ti(x,z), data = df, method = 'REML')

(full code is shown below). Here $f_1(x_i)$ is s(x) in all three models. In the first we ignore the effect of z, in the second we include the additive effect of z but not the interaction, while in the third model we include the main effects and interaction of x and z on y. If $f_1(x_i)$ were simply the smooth effect of x averaged over z then we would expect all three estimated functions to be effectively the same. We do see that the smooth functions of x in models 1 and 2 are effectively equivalent, but the smooth of x in model 3 is noticeably smoother than those of the other two models. 
The differences between the three estimated functions are quite small indeed, and almost indistinguishable when one plots them with the data. In hindsight we might have expected $f_1(x_i)$ to be smoother in model 3 than in either of models 1 and 2, because some of the variation in x is accounted for by the variation in z and the bivariate smooth relationship between the two variables. Another way to think about this is to consider what the average function of x, $\bar{f}_1(x_i)$, might be over all possible values for $z_i$. My maths skills aren't good enough to work this out, but some manipulation of the m3 model fitted above and some averaging suggests that the function $f_1(x_i)$ is close to, but not the same as, $\bar{f}_1(x_i)$ (estimated from fits to the data below). What I'm showing here is:

- the rainbow-coloured lines show a random sample of 10 (out of 250 in the example) slices through the estimated smooth surface, each of which is a smooth function of x by construction,
- the solid red line is the average over z of the fitted values of the smooth for each unique value of x. In other words it is the average of the rainbow-coloured lines (slices through the surface) at each unique point on x (but for all 250 of them, not just the 10 shown), and
- the blue line shows the estimated function $\hat{f}_1(x_i)$ from m3, the decomposed model (in other words the "main effect" smooth for x).

The differences at the ends of the range of x are on the order of 0.01 to 0.03, which while not large aren't zero and don't seem to be converging towards 0 as I increased the number of locations in $z$ that I averaged the surface over.

Q3 How do we interpret the tensor product interaction smooth $f_3(x,z)$? I think the simplest way to think about the interaction smooth is as the smooth deformations you need to apply to $f_1(x)$ as you vary $z$, or conversely to $f_2(z)$ as you vary $x$. The plots below attempt to illustrate this. 
The first panel shows the additive ("main") effects $f_1(x) + f_2(z)$. The second panel shows the deformations you would apply to that surface to yield the final panel, which is the full estimated value of $y$ given the additive effects of all three smooth functions. It is worth noting that in this example, if we estimated the bivariate effect using te() we get subtly different model fits, and this is most likely due to the extra smoothness parameters in the decomposed model compared with the simpler te() model.

Code

library('mgcv')
library('dplyr')
library('gratia')
library('ggplot2')
library('tidyr')
library('cowplot')

set.seed(1)
sim <- gamSim(eg = 2, n = 1000)
df <- sim$data

m1 <- gam(y ~ s(x), data = df, method = 'REML')
m2 <- gam(y ~ s(x) + s(z), data = df, method = 'REML')
m3 <- gam(y ~ s(x) + s(z) + ti(x,z), data = df, method = 'REML')

effs <- bind_rows(evaluate_smooth(m1, smooth = 's(x)'),
                  evaluate_smooth(m2, smooth = 's(x)'),
                  evaluate_smooth(m3, smooth = 's(x)'))
effs <- mutate(effs, model = rep(paste0('m', 1:3), each = 100),
               upper = est + (2 * se), lower = est - (2 * se))

## first figure in Q2
ggplot(effs, aes(x = x, y = est, colour = model)) +
    geom_ribbon(aes(x = x, ymin = lower, ymax = upper, fill = model),
                alpha = 0.1, inherit.aes = FALSE) +
    geom_line(size = 1)

## second figure in Q2
newd <- crossing(x = seq(0, 1, length = 250), z = seq(0, 1, length = 250))
newd <- mutate(newd, fit = predict(m3, newd))
avg_fx <- newd %>% group_by(x) %>% summarise(avg = mean(fit))
fx <- mutate(evaluate_smooth(m3, 's(x)'), est = est + coef(m3)[1L])

set.seed(1)
ggplot(filter(newd, z %in% sample(unique(z), 10)),
       aes(x = x, y = fit, colour = factor(z))) +
    geom_line() +
    guides(colour = FALSE) +
    geom_line(data = avg_fx, mapping = aes(x = x, y = avg),
              colour = "red", size = 1) +
    geom_line(data = fx, mapping = aes(x = x, y = est),
              colour = "blue", size = 1, lty = 'dashed')

## Q3
newd <- crossing(x = seq(0,1, length = 100), z = seq(0,1, length = 100))
newd <- mutate(newd,
               main_eff = predict(m3, newd, exclude = 'ti(x,z)'),
               interact = predict(m3, newd, terms = 'ti(x,z)'),
               both = predict(m3, newd))

p1 <- ggplot(newd, aes(x = x, y = z, fill = main_eff)) +
    geom_raster() + geom_contour(aes(z = main_eff)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Main effects", subtitle = "s(x) + s(z)", fill = NULL)
p2 <- ggplot(newd, aes(x = x, y = z, fill = interact)) +
    geom_raster() + geom_contour(aes(z = interact)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Interaction effect", subtitle = "ti(x,z)", fill = NULL)
p3 <- ggplot(newd, aes(x = x, y = z, fill = both)) +
    geom_raster() + geom_contour(aes(z = both)) +
    scale_fill_distiller(palette = "RdBu", type = "div") +
    labs(title = "Main effects + interaction",
         subtitle = "s(x) + s(z) + ti(x,z)", fill = NULL)

plot_grid(p1, p2, p3, ncol = 3, align = 'hv', axis = 'tb')
34,863
Box-Cox vs Yeo-Johnson
Interpretability is a major issue. The power parameter is different for positive and negative values; and the transformation therefore has a different interpretation for positive and negative values. When you have both, the transformation over the whole range may seem somewhat arbitrary and somewhat tricky to explain to a lay audience.
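For reference, the Yeo-Johnson family makes this concrete: the exponent is $\lambda$ on the (shifted) positive side but $2-\lambda$ on the negative side, so a single fitted $\lambda$ carries two different meanings: $$\psi(y; \lambda) = \begin{cases} \left((y+1)^{\lambda} - 1\right)/\lambda, & y \ge 0,\ \lambda \ne 0,\\ \ln(y+1), & y \ge 0,\ \lambda = 0,\\ -\left((1-y)^{2-\lambda} - 1\right)/(2-\lambda), & y < 0,\ \lambda \ne 2,\\ -\ln(1-y), & y < 0,\ \lambda = 2. \end{cases}$$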
34,864
Box-Cox vs Yeo-Johnson
I have found transformations like the cube root $$ \text{sign}(y)\ |y|^{1/3}$$ and the so-called neglog $$\text{sign}(y)\ \ln(1 + |y|)$$ and the inverse hyperbolic sine $$\text{asinh}(y)$$ helpful for responses that can be negative, zero, or positive and that are skewed and/or have very long tails. Their advantages include

- pulling in the tails relative to zero
- working smoothly around 0
- preserving the sign of the response (which usually has substantive or scientific meaning: consider profit/loss or increase/decrease)
- having derivatives that are easy to work with
- having exact or approximate limiting behaviour as usually desired, in particular that for $y \gg 0$ the second is close to $\ln y$.

In these cases there is no question of choosing a different power for positive and negative values, and any analyst should want to see a very strong justification for that. The practical test of a transformation is often just that it aids visualization. It's not essential that coefficients from regressions are easy to interpret. If the cube root of $y$ changes linearly with predictors, that is what you have found. The coefficients don't need chit-chat; they are the gradients in that space (which you should be using for visualization). Despite deeply impressive work on estimating a transformation, it is not often that I choose a transformation formally. Logarithms, reciprocals and square roots are often obvious candidates on a variety of grounds. Try out candidates and see if they help.
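As an illustrative sketch (in Python rather than the R used elsewhere in this thread), the claimed properties -- odd symmetry around zero, so the sign is preserved, and near-logarithmic behaviour for large positive $y$ -- are easy to check numerically:

```python
import numpy as np

def cube_root(y):
    # sign(y) * |y|^(1/3): preserves sign, pulls in both tails
    return np.sign(y) * np.abs(y) ** (1.0 / 3.0)

def neglog(y):
    # sign(y) * ln(1 + |y|): smooth at 0, ~ln(y) for y >> 0
    return np.sign(y) * np.log1p(np.abs(y))

y = np.array([-1e4, -10.0, 0.0, 10.0, 1e4])

# All three transformations are odd functions: f(-y) == -f(y)
for f in (cube_root, neglog, np.arcsinh):
    assert np.allclose(f(-y), -f(y))

# For large positive y, neglog behaves like ln(y) and asinh like ln(2y)
big = 1e6
print(neglog(big) - np.log(big))          # tiny
print(np.arcsinh(big) - np.log(2 * big))  # tiny
```

The specific test values are arbitrary; the point is that each function handles negative, zero, and positive responses with a single formula.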
34,865
Why is rejection of null hypothesis not a case of prosecutor's fallacy?
The p-value is the probability of seeing what you saw or something more extreme if the null hypothesis was true. The p-value is not the probability that the null hypothesis is true. So yes, interpreting a p-value as the probability that the null hypothesis is true is akin to the prosecutor's fallacy. If you want that probability, you need to assume a probability of the null hypothesis being true prior to the collection of data. Then you can use the data collected to influence or update that initial probability.

Whether or not choosing to "reject the null hypothesis" is akin to the prosecutor's fallacy gets into semantics. If "rejecting the null" means to believe the null is false in a probabilistic sense, then yes, that's committing the prosecutor's fallacy. If "rejecting the null" means to act as if the null is false, that's different. That's a decision process whose performance will depend on the situations in which it's used.

A great example is the response of the scientific community to the first study showing evidence of a new particle with p < 0.0000003. Do all of the scientists accept the particle's existence? No. Some may, but some will remain skeptical. The differences in beliefs are connected to different prior probabilities on the null, i.e. how skeptical were they of the new particle's existence before the experiment. The results of one study can only shift their belief probabilities so far. But what does the scientific community do? They do a second experiment. They act as if the particle exists, or more precisely, they act as though the existence of the particle warrants further study. Even the skeptical scientists will support acting in this way. If the second experiment also has a p < 0.0000003, some of the skeptical scientists will now believe the particle exists. Why? Even if the first experiment didn't convince them, it still shifted their belief probabilities. The second experiment will shift them further. 
The second experiment may lead to a third, and so on. Each scientist's underlying belief distribution shifts with each experiment. After a given experiment, they may not agree on the existence of the particle, but still agree that the experiments are worth continuing. Eventually the series of experiments will have shifted all but the most skeptical scientists' belief distributions over to believing the particle exists.

Personal note: I'm not trying to sell anyone on this statistical paradigm; only to answer the initial question. There are other statistical paradigms worth exploring. Bayesian analysis facilitates explicitly quantifying your belief distribution before and after the experiment. Likelihood inference facilitates expressing the evidence of the experiment in a way that those with different prior beliefs can still agree on. Second generation p-values place the focus on pre-specifying clinical significance and providing clinicians with a value that behaves the way they wish the traditional p-value did, i.e. still indicating when the evidence is against the null but also distinguishing between when the evidence is for the null versus when uncertainty remains high. And there are many other interesting approaches.
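A minimal sketch of that belief-updating process (illustrative Python with made-up numbers; the prior and Bayes factor here are hypothetical and are not drawn from any actual particle-physics analysis): each experiment shifts the skeptic's probability of the null, and a second experiment shifts it further.

```python
def update_null(prior_null, bayes_factor_alt):
    """Posterior P(H0 | data) when the data are `bayes_factor_alt`
    times more likely under the alternative than under the null."""
    odds_null = prior_null / (1.0 - prior_null)  # prior odds for H0
    post_odds = odds_null / bayes_factor_alt     # Bayes' rule on the odds scale
    return post_odds / (1.0 + post_odds)

prior = 0.99   # a skeptic: 99% sure the particle does not exist
bf = 20.0      # hypothetical evidence strength of each experiment

post1 = update_null(prior, bf)   # belief in the null after experiment 1
post2 = update_null(post1, bf)   # ... and after experiment 2
print(post1, post2)              # each experiment shifts the belief further
```

With these toy numbers one experiment leaves the skeptic still leaning toward the null, while two experiments push the posterior probability of the null below 20%, mirroring the narrative above.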
34,866
Variance of $X$ and Variance of $\log(X)$. How to relate them?
The delta method is pretty useful here. Using a first order Taylor series approximation about the mean of $X$ $$ \log(X) \approx \log(E[X]) + \frac{(X-E[X])}{E[X]} $$ so, after we take expectations and variances on both sides, $E[\log(X)] \approx \log(E[X])$ $V[\log(X)] \approx E[X]^{-2}\text{V}[X]$. This relates to the idea of variance-stabilization; if a dependent variable in a regression has a variance that is proportional to the mean squared, then taking the log of that dependent variable produces something that has constant variance, which is often a desirable or necessary assumption.
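A quick numerical check of the approximation (a Python sketch; the choice of distribution and sample size is arbitrary): for a concentrated positive random variable, the sample variance of $\log X$ should be close to the plug-in value $\mathrm{V}[X]/\mathrm{E}[X]^2$.

```python
import numpy as np

rng = np.random.default_rng(42)

# Gamma(shape=100, scale=1): E[X] = 100, V[X] = 100, so the delta method
# predicts V[log X] ~= 100 / 100**2 = 0.01
x = rng.gamma(shape=100.0, scale=1.0, size=200_000)

predicted = np.var(x) / np.mean(x) ** 2   # delta-method prediction from sample moments
observed = np.var(np.log(x))              # direct sample variance of log X

print(predicted, observed)                # both close to 0.01
```

The approximation is first-order, so it works best when $X$ is tightly concentrated around its mean; for heavy-tailed $X$ the discrepancy grows.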
34,867
Variance of $X$ and Variance of $\log(X)$. How to relate them?
Comment on lognormal and normal distributions. The variance of data tends to decrease when you take logs of the values. Perhaps the most common case is that, if $X_1$ is lognormal, then $X_2 = \ln(X_1)$ is normal, which has smaller variance than $X_1.$ Also, if it exists, $X_3 = \ln(X_2)$ may have a still smaller variance. Below we begin with a random sample (using R) of $n = 10^4$ observations from a lognormal distribution with parameters $\mu = 50, \sigma = 2.$ (It is customary to use lognormal parameters that match parameters of the related normal distribution. You can look at Wikipedia on 'lognormal distributions' for details.) We show means and standard deviations for distributions of $X_1, X_2,$ and $X_3.$

set.seed(720); n = 10^4
x2 = rnorm(n, 50, 2); x1 = exp(x2); x3 = log(x2)
mean(x1); mean(x2); mean(x3)
[1] 3.686093e+22
[1] 50.01289     # aprx E(X2) = 50
[1] 3.911481
sd(x1); sd(x2); sd(x3)
[1] 2.308712e+23
[1] 1.997261     # aprx SD(X2) = 2
[1] 0.04004551

Then we show histograms of the three samples. At the left, notice that it is difficult to make an informative histogram of $X_1$ because it is so severely skewed to the right. In the center panel, we overlay the density function of $\mathsf{Norm}(\mu =50, \sigma=2),$ which is symmetrical. At the right, notice that taking (natural) logs once again results in a slightly left-skewed distribution.

Notes: (1) The support of a lognormal distribution is $(0, \infty).$ A normal distribution may take negative values. If the lognormal distribution is truncated to $(1, \infty)$ so that the normal distribution is truncated to $(0,\infty),$ then the natural log of that "normal" distribution exists. The distribution $\mathsf{Norm}(50, 2)$ has almost no probability below $0,$ so the truncation would have little practical effect in this example. 
(2) R code for the figure above:

par(mfrow=c(1,3))
 hist(x1, prob=T, br=50, col="skyblue2")
 hist(x2, prob=T, col="skyblue2")
  curve(dnorm(x,50,2), add=T, col="red")
 hist(x3, prob=T, col="skyblue2")
par(mfrow=c(1,1))

(3) However, it is not always true that taking logs gives a smaller variance. If $X_2 \sim \mathsf{Unif}(0,1),\, X_1 = e^{X_2},$ and $X_3 = \ln(X_2),$ then R code similar to the code for the lognormal example gives the following results:

set.seed(720); n = 10^5
x2 = runif(n); x1 = exp(x2); x3 = log(x2)
var(x1); var(x2); var(x3)
[1] 0.2411124
[1] 0.08316279   # aprx V(X2) = 1/12
[1] 1.01091
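As a sanity check on the simulated numbers above, the exact lognormal moments are available in closed form: if $X_1 = e^{X_2}$ with $X_2 \sim \mathsf{Norm}(\mu, \sigma),$ then $$\mathrm{E}[X_1] = e^{\mu + \sigma^2/2}, \qquad \mathrm{Var}[X_1] = \left(e^{\sigma^2} - 1\right) e^{2\mu + \sigma^2}.$$ With $\mu = 50$ and $\sigma = 2$ these give $\mathrm{E}[X_1] = e^{52} \approx 3.8 \times 10^{22}$ and $\mathrm{SD}[X_1] = \sqrt{e^4 - 1}\; e^{52} \approx 2.8 \times 10^{23},$ consistent (given the extreme right tail of the sampling distribution) with the sample mean and standard deviation reported above.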
34,868
How to fit a robust step function to a time series?
A simple, robust method to handle such noise is to compute medians. A rolling median over a short window will detect all but the smallest jumps, while medians of the response within intervals between detected jumps will robustly estimate their levels. (You may replace this latter estimate by any robust estimate that is unaffected by the outliers.) You should tune this approach with real or simulated data to achieve acceptable error rates. For instance, for the simulation in the question I found it good to use the second and 98th percentiles to set thresholds for detecting the jumps. In other circumstances--such as when many jumps might occur--more central percentiles would work better. Here is the result showing (a) the three jumps as red dots and (b) the four estimated levels as light blue lines. The jumps are estimated to occur at indexes 100, 200, 250 (which is exactly where the simulation makes them occur) and the resulting levels are estimated at 199.6, 249.8, 300.0, and 250.2: all within 0.4 of the true underlying values. This excellent behavior persists with repeated simulations (removing the set.seed command at the beginning). Here is the R code.

#
# Rolling medians.
#
rollmed <- function(x, k=3) {
  n <- length(x)
  x.med <- sapply(1:(n-k+1), function(i) median(x[i + 0:(k-1)]))
  l <- floor(k/2)
  c(rep(NA, l), x.med, rep(NA, k-l-1))  # pad to length n
}
y.med <- rollmed(y, k=5)
#
# Changepoint analysis.
#
dy <- diff(y.med)
fourths <- quantile(dy, c(1,49)/50, na.rm=TRUE)
thresholds <- fourths + diff(fourths)*2.5*c(-1,1)
jumps <- which(dy < thresholds[1] | dy > thresholds[2]) + 1
points(jumps, y.med[jumps], pch=21, bg="Red")
#
# Plotting.
#
limits <- c(1, jumps, length(y)+1)
y.hat <- rep(NA, length(jumps)+1)
for (i in 1:(length(jumps)+1)) {
  j0 <- limits[i]
  j1 <- limits[i+1]-1
  y.hat[i] <- median(y[j0:j1])
  lines(x[j0:j1], rep(y.hat[i], j1-j0+1), col="skyblue", lwd=2)
}
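The same idea, sketched in Python for illustration (the window size, the fixed jump threshold, and the toy series are arbitrary choices for this sketch, not part of the analysis above): gross outliers pass through the rolling median untouched, while a genuine level shift shows up as a large jump in its first differences.

```python
import numpy as np

def rolling_median(y, k=5):
    """Centred rolling median of window k (odd), NaN-padded at the edges."""
    h = k // 2
    out = np.full(len(y), np.nan)
    for i in range(h, len(y) - h):
        out[i] = np.median(y[i - h : i + h + 1])
    return out

rng = np.random.default_rng(0)
y = np.r_[np.full(100, 200.0), np.full(100, 250.0)]  # one true step at index 100
y += rng.normal(0.0, 1.0, size=y.size)               # mild noise
y[[30, 60, 150]] += 500.0                            # gross outliers

med = rolling_median(y, k=5)
dy = np.diff(med)
jumps = np.where(np.abs(dy) > 10.0)[0] + 1  # fixed threshold for this toy case

print(jumps)  # the outliers are ignored; only the true step is flagged
```

A single outlier never dominates a five-point window, so the median sequence stays at the local level and only the real changepoint produces a large difference.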
How to fit a robust step function to a time series?
A simple, robust method to handle such noise is to compute medians. A rolling median over a short window will detect all but the smallest jumps, while medians of the response within intervals between
How to fit a robust step function to a time series? A simple, robust method to handle such noise is to compute medians. A rolling median over a short window will detect all but the smallest jumps, while medians of the response within intervals between detected jumps will robustly estimate their levels. (You may replace this latter estimate by any robust estimate that is unaffected by the outliers.) You should tune this approach with real or simulated data to achieve acceptable error rates. For instance, for the simulation in the question I found it good to use the second and 98th percentiles to set thresholds for detecting the jumps. In other circumstances--such as when many jumps might occur--more central percentiles would work better. Here is the result showing (a) the three jumps as red dots and (b) the four estimated levels as light blue lines. The jumps are estimated to occur at indexes 100, 200, 250 (which is exactly where the simulation makes them occur) and the resulting levels are estimated at 199.6, 249.8, 300.0, and 250.2: all within 0.4 of the true underlying values. This excellent behavior persists with repeated simulations (removing the set.seed command at the beginning). Here is the R code. # # Rolling medians. # rollmed <- function(x, k=3) { n <- length(x) x.med <- sapply(1:(n-k+10), function(i) median(x[i + 0:(k-1)])) l <- floor(k/2) c(rep(NA, l), x.med, rep(NA, k-l)) } y.med <- rollmed(y, k=5) # # Changepoint analysis. # dy <- diff(y.med) fourths <- quantile(dy, c(1,49)/50, na.rm=TRUE) thresholds <- fourths + diff(fourths)*2.5*c(-1,1) jumps <- which(dy < thresholds[1] | dy > thresholds[2]) + 1 points(jumps, y.med[jumps], pch=21, bg="Red") # # Plotting. # limits <- c(1, jumps, length(y)+1) y.hat <- rep(NA, length(jumps)+1) for (i in 1:(length(jumps)+1)) { j0 <- limits[i] j1 <- limits[i+1]-1 y.hat[i] <- median(y[j0:j1]) lines(x[j0:j1], rep(y.hat[i], j1-j0+1), col="skyblue", lwd=2) }
How to fit a robust step function to a time series? A simple, robust method to handle such noise is to compute medians. A rolling median over a short window will detect all but the smallest jumps, while medians of the response within intervals between
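The same recipe (rolling median, thresholded first differences of the medians, then per-segment medians) can be sketched without R. The window size, the fixed jump threshold of 10, and the simulated signal below are illustrative choices of mine, not taken from the answer, which tunes its thresholds from percentiles of the differences:

```python
import random
import statistics

def rolling_median(y, k=5):
    """Centered rolling median; edge positions without a full window are dropped."""
    half = k // 2
    return [statistics.median(y[i - half:i + half + 1])
            for i in range(half, len(y) - half)]

random.seed(1)
# Piecewise-constant signal with mild noise and three gross outliers.
levels = [100.0] * 30 + [150.0] * 30 + [120.0] * 30
y = [v + random.gauss(0, 1) for v in levels]
for i in (10, 40, 70):
    y[i] += 200.0              # outliers the rolling median should absorb

half = 2                       # offset of the k=5 rolling median
med = rolling_median(y, k=5)
diffs = [b - a for a, b in zip(med, med[1:])]
# Illustrative fixed threshold; the answer derives one from extreme percentiles.
jumps = [i + 1 + half for i, d in enumerate(diffs) if abs(d) > 10]

# Robust level estimate within each detected segment.
limits = [0] + jumps + [len(y)]
fits = [statistics.median(y[a:b]) for a, b in zip(limits, limits[1:])]
print(jumps, fits)
```

The single outliers never move a 5-point median, so only the genuine level shifts cross the threshold, and the per-segment medians ignore the outliers when estimating the levels.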
34,869
How to fit a robust step function to a time series?
If you are still interested in smoothing with L0-penalties I would have a look at the following reference: "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/journal.pone.0038230 (a nice intro to the Whittaker smoother can be found in P. Eilers' paper "A perfect smoother" - DOI: 10.1021/ac034173t). Of course, in order to achieve your objective you have to work a bit around the method. In principle, you need 3 ingredients:

The smoother - I would use the Whittaker smoother. Also, I will use matrix augmentation (see Eilers and Marx, 1996 - "Flexible Smoothing with B-splines and Penalties", p. 101).
Quantile regression - I will use the R package quantreg (rho = 0.5) for laziness :-)
L0-penalty - I will follow the mentioned "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/journal.pone.0038230

Of course, you would also need a way to select the optimal amount of smoothing. This is done by my carpenter eyes for this example. You could use the criteria in DOI: 10.1371/journal.pone.0038230 (p. 5, but I did not try it on your example). You will find a small code below. I left some comments as a guide through it.

# Cross Validated example
rm(list = ls()); graphics.off(); cat("\014")
library(splines)
library(Matrix)
library(quantreg)

# The data
set.seed(20181118)
n = 400
x = 1:n
true_fct = stepfun(c(100, 200, 250), c(200, 250, 300, 250))
y = true_fct(x) + rt(length(x), df = 1)

# Prepare bases - Identity matrix (Whittaker)
# Can be changed for B-splines
B = diag(1, n, n)

# Prepare penalty - lambda parameter fix
nb = ncol(B)
D = diff(diag(1, nb, nb), diff = 1)
lambda = 1e2

# Solve standard Whittaker - for initial values
a = solve(t(B) %*% B + crossprod(D), t(B) %*% y, tol = 1e-50) # est.

# Loop with L0-Diff penalty as in DOI: 10.1371/journal.pone.0038230
p = 1e-6
nit = 100
beta = 1e-5
for (it in 1:nit) {
  ao = a
  # Penalty weights
  w = (c(D %*% a) ^ 2 + beta ^ 2) ^ ((p - 2)/2)
  W = diag(c(w))
  # Matrix augmentation
  cD = lambda * sqrt(W) %*% D
  Bp = rbind(B, cD)
  yp = c(y, 1:nrow(cD)*0)
  # Update coefficients - rq.fit from quantreg
  a = rq.fit(Bp, yp, tau = 0.5)$coef
  # Check convergence and update
  da = max(abs((a - ao)/ao))
  cat(it, da, '\n')
  if (da < 1e-6) break
}

# Fit
v = B %*% a

# Show results
plot(x, y, pch = 16, cex = 0.5)
lines(x, y, col = 8, lwd = 0.5)
lines(x, v, col = 'blue', lwd = 2)
lines(x, true_fct(x), col = 'red', lty = 2, lwd = 2)
legend("topright", legend = c("True Signal", "Smoothed signal"),
       col = c("red", "blue"), lty = c(2, 1))

PS. This is my first answer on Cross Validated. I hope it is useful and clear enough :-)
How to fit a robust step function to a time series?
If you are still interested in smoothing with L0-penalties I would give a look to the following reference: "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/
How to fit a robust step function to a time series? If you are still interested in smoothing with L0-penalties I would have a look at the following reference: "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/journal.pone.0038230 (a nice intro to the Whittaker smoother can be found in P. Eilers' paper "A perfect smoother" - DOI: 10.1021/ac034173t). Of course, in order to achieve your objective you have to work a bit around the method. In principle, you need 3 ingredients:

The smoother - I would use the Whittaker smoother. Also, I will use matrix augmentation (see Eilers and Marx, 1996 - "Flexible Smoothing with B-splines and Penalties", p. 101).
Quantile regression - I will use the R package quantreg (rho = 0.5) for laziness :-)
L0-penalty - I will follow the mentioned "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/journal.pone.0038230

Of course, you would also need a way to select the optimal amount of smoothing. This is done by my carpenter eyes for this example. You could use the criteria in DOI: 10.1371/journal.pone.0038230 (p. 5, but I did not try it on your example). You will find a small code below. I left some comments as a guide through it.

# Cross Validated example
rm(list = ls()); graphics.off(); cat("\014")
library(splines)
library(Matrix)
library(quantreg)

# The data
set.seed(20181118)
n = 400
x = 1:n
true_fct = stepfun(c(100, 200, 250), c(200, 250, 300, 250))
y = true_fct(x) + rt(length(x), df = 1)

# Prepare bases - Identity matrix (Whittaker)
# Can be changed for B-splines
B = diag(1, n, n)

# Prepare penalty - lambda parameter fix
nb = ncol(B)
D = diff(diag(1, nb, nb), diff = 1)
lambda = 1e2

# Solve standard Whittaker - for initial values
a = solve(t(B) %*% B + crossprod(D), t(B) %*% y, tol = 1e-50) # est.

# Loop with L0-Diff penalty as in DOI: 10.1371/journal.pone.0038230
p = 1e-6
nit = 100
beta = 1e-5
for (it in 1:nit) {
  ao = a
  # Penalty weights
  w = (c(D %*% a) ^ 2 + beta ^ 2) ^ ((p - 2)/2)
  W = diag(c(w))
  # Matrix augmentation
  cD = lambda * sqrt(W) %*% D
  Bp = rbind(B, cD)
  yp = c(y, 1:nrow(cD)*0)
  # Update coefficients - rq.fit from quantreg
  a = rq.fit(Bp, yp, tau = 0.5)$coef
  # Check convergence and update
  da = max(abs((a - ao)/ao))
  cat(it, da, '\n')
  if (da < 1e-6) break
}

# Fit
v = B %*% a

# Show results
plot(x, y, pch = 16, cex = 0.5)
lines(x, y, col = 8, lwd = 0.5)
lines(x, v, col = 'blue', lwd = 2)
lines(x, true_fct(x), col = 'red', lty = 2, lwd = 2)
legend("topright", legend = c("True Signal", "Smoothed signal"),
       col = c("red", "blue"), lty = c(2, 1))

PS. This is my first answer on Cross Validated. I hope it is useful and clear enough :-)
How to fit a robust step function to a time series? If you are still interested in smoothing with L0-penalties I would give a look to the following reference: "Visualization of Genomic Changes by Segmented Smoothing Using an L0 Penalty" - DOI: 10.1371/
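The core iteration (reweight the difference penalty so that small jumps are crushed and large ones survive, then re-solve the smoother) can be sketched in dependency-free Python. This is a simplification of the answer: it uses a least-squares Whittaker fit in place of the quantile-regression fidelity term, a first-difference penalty solved as a tridiagonal system, and illustrative values for lam, p, and beta that I chose, not the answer's settings:

```python
import random

def thomas(diag, off, rhs):
    """Solve a symmetric tridiagonal system (off = sub- and superdiagonal)."""
    n = len(rhs)
    d, r = diag[:], rhs[:]
    for i in range(1, n):                    # forward elimination
        m = off[i - 1] / d[i - 1]
        d[i] -= m * off[i - 1]
        r[i] -= m * r[i - 1]
    x = [0.0] * n
    x[-1] = r[-1] / d[-1]
    for i in range(n - 2, -1, -1):           # back substitution
        x[i] = (r[i] - off[i] * x[i + 1]) / d[i]
    return x

random.seed(2)
n = 60
y = [(0.0 if i < 30 else 1.0) + random.gauss(0, 0.1) for i in range(n)]

lam, p, beta = 1.0, 0.1, 1e-4                # illustrative tuning values
a = y[:]                                     # initial estimate: raw data
for _ in range(30):
    # Reweighting step: w = (da^2 + beta^2)^((p-2)/2) approximates an L0 penalty,
    # so near-zero differences get huge weights and true jumps stay cheap.
    w = [((a[i + 1] - a[i]) ** 2 + beta ** 2) ** ((p - 2) / 2)
         for i in range(n - 1)]
    # System (I + lam * D' W D) a = y with D the first-difference matrix.
    diag = [1.0 + lam * ((w[i - 1] if i > 0 else 0.0) +
                         (w[i] if i < n - 1 else 0.0)) for i in range(n)]
    off = [-lam * w[i] for i in range(n - 1)]
    a = thomas(diag, off, y)
```

After a few iterations the flat runs are forced to be nearly constant while the single step survives, which is the segmented-smoothing behaviour the cited paper describes.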
34,870
How to fit a robust step function to a time series?
I would consider using Ruey Tsay's paper "Outliers, level shifts, and variance changes in time series". Differencing model with AR(1) and 21 outliers. We turned off differencing and the level shifts are specifically called out.
How to fit a robust step function to a time series?
I would consider using Ruey Tsay's paper "Outliers, level shifts, and variance changes in time series". Differencing model with AR(1) and 21 outliers. We turned off differencing and the level shifts are
How to fit a robust step function to a time series? I would consider using Ruey Tsay's paper "Outliers, level shifts, and variance changes in time series". Differencing model with AR(1) and 21 outliers. We turned off differencing and the level shifts are specifically called out.
How to fit a robust step function to a time series? I would consider using Ruey Tsay's paper "Outliers, level shifts, and variance changes in time series". Differencing model with AR(1) and 21 outliers. We turned off differencing and the level shifts are
34,871
Apply Bayes rule sequentially
You can write: $$P(a,b,c) = P(a \vert b,c)P(b,c) = P(a \vert b,c)P(c \vert b)P(b)$$ or, also valid: $$P(a,b,c) = P(c \vert a,b)P(a,b) = P(c \vert a,b)P(b \vert a)P(a)$$ Putting together both expressions: $$P(a \vert b,c) = \frac{P(c \vert a,b)P(b \vert a)P(a)}{P(c \vert b)P(b)} = \frac{P(c \vert a,b) \pi(a)}{P(c \vert b)} $$ And if this new observation $c$ does not depend on the previous observation $b$ (i.e. $P(b,c) = P(b)P(c)$), you can write: $$P(a \vert b,c) = \frac{P(c \vert a) \pi(a)}{P(c)} $$
Apply Bayes rule sequentially
You can write: $$P(a,b,c) = P(a \vert b,c)P(b,c) = P(a \vert b,c)P(c \vert b)P(b)$$ or, also valid: $$P(a,b,c) = P(c \vert a,b)P(a,b) = P(c \vert a,b)P(b \vert a)P(a)$$ Putting together both expressio
Apply Bayes rule sequentially You can write: $$P(a,b,c) = P(a \vert b,c)P(b,c) = P(a \vert b,c)P(c \vert b)P(b)$$ or, also valid: $$P(a,b,c) = P(c \vert a,b)P(a,b) = P(c \vert a,b)P(b \vert a)P(a)$$ Putting together both expressions: $$P(a \vert b,c) = \frac{P(c \vert a,b)P(b \vert a)P(a)}{P(c \vert b)P(b)} = \frac{P(c \vert a,b) \pi(a)}{P(c \vert b)} $$ And if this new observation $c$ does not depend on the previous observation $b$ (i.e. $P(b,c) = P(b)P(c)$), you can write: $$P(a \vert b,c) = \frac{P(c \vert a) \pi(a)}{P(c)} $$
Apply Bayes rule sequentially You can write: $$P(a,b,c) = P(a \vert b,c)P(b,c) = P(a \vert b,c)P(c \vert b)P(b)$$ or, also valid: $$P(a,b,c) = P(c \vert a,b)P(a,b) = P(c \vert a,b)P(b \vert a)P(a)$$ Putting together both expressio
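The sequential factorisation above can be checked numerically in any conjugate model. Here is a small Beta-Bernoulli example (my choice of model, not from the answer): updating on the observations one at a time, with yesterday's posterior as today's prior, gives exactly the same posterior as conditioning on all of them at once.

```python
def beta_update(alpha, beta, x):
    """Posterior Beta(alpha, beta) parameters after one Bernoulli observation x."""
    return alpha + x, beta + (1 - x)

data = [1, 0, 1, 1, 0, 1]

# Sequential: apply Bayes' rule one observation at a time.
a, b = 1.0, 1.0                      # uniform Beta(1, 1) prior
for x in data:
    a, b = beta_update(a, b, x)

# Batch: condition on the whole sample in one step.
a_batch = 1.0 + sum(data)
b_batch = 1.0 + len(data) - sum(data)

print((a, b), (a_batch, b_batch))    # both are (5.0, 3.0)
```

The agreement is exact because the Bernoulli likelihood factorises over independent observations, which is precisely the conditional-independence assumption used in the derivation above.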
34,872
Apply Bayes rule sequentially
It might help to use some more specific notation because the Bayesian update will depend on what model you're using. As an example, say you had a linear regression model where you regressed $y_i$ on $\mathbf{x}_i$ variables. The coefficients/parameters for this model could be called $\theta$. If you had a batch of $n$ data points, you might write your likelihood as $$ p(\mathbf{y}_{1:n} \mid \mathbf{X}_{1:n}, \theta) = \prod_{i=1}^n p(y_i \mid \mathbf{x}_i, \theta). $$ Your model would be complete as soon as you chose some prior distribution $\pi(\theta)$. Bayes' rule states $$ \pi(\theta \mid \mathbf{y}_{1:n}, \mathbf{x}_{1:n}, \theta) \propto p(\mathbf{y}_{1:n} \mid \mathbf{X}_{1:n}, \theta)\pi(\theta). $$ If you got another row of data ($y_{n+1}, \mathbf{x}_{n+1}$), then you could update your posterior using $$ \pi(\theta \mid \mathbf{y}_{1:n+1}, \mathbf{x}_{1:n+1}, \theta) = p(y_{n+1} \mid \mathbf{x}_{n+1}, \theta)\pi(\theta \mid \mathbf{y}_{1:n}, \mathbf{x}_{1:n}, \theta). $$ So the old posterior distribution takes the place of the prior distribution when you update sequentially. As I said before, the Bayesian update will depend on what model you're using. A different model would have different conditional independence structure. However, many other models will resemble the last expression in that the old posterior is used as a prior, and it's being multiplied by some marginal "likelihood."
Apply Bayes rule sequentially
It might help to use some more specific notation because the Bayesian update will depend on what model you're using. As an example, say you had a linear regression model where you regressed $y_i$ on $
Apply Bayes rule sequentially It might help to use some more specific notation because the Bayesian update will depend on what model you're using. As an example, say you had a linear regression model where you regressed $y_i$ on $\mathbf{x}_i$ variables. The coefficients/parameters for this model could be called $\theta$. If you had a batch of $n$ data points, you might write your likelihood as $$ p(\mathbf{y}_{1:n} \mid \mathbf{X}_{1:n}, \theta) = \prod_{i=1}^n p(y_i \mid \mathbf{x}_i, \theta). $$ Your model would be complete as soon as you chose some prior distribution $\pi(\theta)$. Bayes' rule states $$ \pi(\theta \mid \mathbf{y}_{1:n}, \mathbf{x}_{1:n}, \theta) \propto p(\mathbf{y}_{1:n} \mid \mathbf{X}_{1:n}, \theta)\pi(\theta). $$ If you got another row of data ($y_{n+1}, \mathbf{x}_{n+1}$), then you could update your posterior using $$ \pi(\theta \mid \mathbf{y}_{1:n+1}, \mathbf{x}_{1:n+1}, \theta) = p(y_{n+1} \mid \mathbf{x}_{n+1}, \theta)\pi(\theta \mid \mathbf{y}_{1:n}, \mathbf{x}_{1:n}, \theta). $$ So the old posterior distribution takes the place of the prior distribution when you update sequentially. As I said before, the Bayesian update will depend on what model you're using. A different model would have different conditional independence structure. However, many other models will resemble the last expression in that the old posterior is used as a prior, and it's being multiplied by some marginal "likelihood."
Apply Bayes rule sequentially It might help to use some more specific notation because the Bayesian update will depend on what model you're using. As an example, say you had a linear regression model where you regressed $y_i$ on $
34,873
Graphical interpretation of LASSO
Recall that the Lasso minimization problem can be expressed as: $$ \hat \theta_{lasso} = argmin_{\theta \in \mathbb{R}^n} \sum_{i=1}^m (y_i - \mathbf{x_i}^T \theta)^2 + \lambda \sum_{j=1}^n | \theta_j| $$ which can be viewed as the minimization of two terms: $OLS + L_1$.

The first OLS term can be written as $(y - X \theta)^T(y - X \theta)$, which gives rise to an ellipse contour plot centered around the Maximum Likelihood Estimator.
The second $L_1$ term is the equation of a diamond centered around 0 (or a rhomboid in higher dimensions).
The solution to the constrained optimization lies at the intersection between the contours of the two functions, and this intersection varies as a function of $\lambda$. For $\lambda = 0$ the solution is the MLE (as usual) and for $\lambda = \infty$ the solution is at $[0,0]$.
Since at the vertices of the diamond one or many of the variables have value 0, there is a non-zero probability that one or many of the coefficients will have a value exactly equal to 0.

This last bullet is important in answering your question:

But I don't understand the case when the lasso Regression just shrink the paramters and don't set them to Zero

The lasso regression doesn't have to set coefficients to zero; in many cases it doesn't. What happens is that as you increase the $\lambda$ parameter, the probability that the solution takes place at a vertex of the diamond increases, and so the probability that one or many coefficients is exactly zero also increases.

Could you give me some intuition when the RSS-Line tangents with the side of the diamond instead of the corner?

Here is a graph I have produced based on simulated data. It shows the optimal solution for ridge and lasso regression as a function of the $\lambda$ parameter (lasso is on the right-hand side). You can see that there are many solutions that are not on the vertex of the diamond!

The impact of strongly correlated features: this simple example shows what happens when the two features are highly correlated. In fact, here $x_1 = x$ and $x_2 = x^2$, so they are so strongly correlated that the shape of the OLS cost function looks like an upside-down ridge, or a valley - hence the intuition behind the name ridge regression.

Sources: This post is strongly based on my previous post. For anyone interested, you can find most of the code and associated mathematical derivations on my blog and at this page.
Graphical interpretation of LASSO
Recall that the Lasso minimization problem can be expressed as: $$ \hat \theta_{lasso} = argmin_{\theta \in \mathbb{R}^n} \sum_{i=1}^m (y_i - \mathbf{x_i}^T \theta)^2 + \lambda \sum_{j=1}^n | \theta_j
Graphical interpretation of LASSO Recall that the Lasso minimization problem can be expressed as: $$ \hat \theta_{lasso} = argmin_{\theta \in \mathbb{R}^n} \sum_{i=1}^m (y_i - \mathbf{x_i}^T \theta)^2 + \lambda \sum_{j=1}^n | \theta_j| $$ which can be viewed as the minimization of two terms: $OLS + L_1$.

The first OLS term can be written as $(y - X \theta)^T(y - X \theta)$, which gives rise to an ellipse contour plot centered around the Maximum Likelihood Estimator.
The second $L_1$ term is the equation of a diamond centered around 0 (or a rhomboid in higher dimensions).
The solution to the constrained optimization lies at the intersection between the contours of the two functions, and this intersection varies as a function of $\lambda$. For $\lambda = 0$ the solution is the MLE (as usual) and for $\lambda = \infty$ the solution is at $[0,0]$.
Since at the vertices of the diamond one or many of the variables have value 0, there is a non-zero probability that one or many of the coefficients will have a value exactly equal to 0.

This last bullet is important in answering your question:

But I don't understand the case when the lasso Regression just shrink the paramters and don't set them to Zero

The lasso regression doesn't have to set coefficients to zero; in many cases it doesn't. What happens is that as you increase the $\lambda$ parameter, the probability that the solution takes place at a vertex of the diamond increases, and so the probability that one or many coefficients is exactly zero also increases.

Could you give me some intuition when the RSS-Line tangents with the side of the diamond instead of the corner?

Here is a graph I have produced based on simulated data. It shows the optimal solution for ridge and lasso regression as a function of the $\lambda$ parameter (lasso is on the right-hand side). You can see that there are many solutions that are not on the vertex of the diamond!

The impact of strongly correlated features: this simple example shows what happens when the two features are highly correlated. In fact, here $x_1 = x$ and $x_2 = x^2$, so they are so strongly correlated that the shape of the OLS cost function looks like an upside-down ridge, or a valley - hence the intuition behind the name ridge regression.

Sources: This post is strongly based on my previous post. For anyone interested, you can find most of the code and associated mathematical derivations on my blog and at this page.
Graphical interpretation of LASSO Recall that the Lasso minimization problem can be expressed as: $$ \hat \theta_{lasso} = argmin_{\theta \in \mathbb{R}^n} \sum_{i=1}^m (y_i - \mathbf{x_i}^T \theta)^2 + \lambda \sum_{j=1}^n | \theta_j
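The shrink-versus-zero behaviour is easiest to see in one dimension, where the lasso problem $\min_\theta \tfrac12(z-\theta)^2 + \lambda|\theta|$ (with $z$ the unpenalized estimate) has the well-known closed-form soft-thresholding solution. A small $\lambda$ only shrinks $z$ toward zero; once $\lambda \ge |z|$ the solution is pinned at exactly zero. A minimal sketch:

```python
def soft_threshold(z, lam):
    """Closed-form 1-D lasso solution: argmin over t of 0.5*(z - t)**2 + lam*abs(t)."""
    if z > lam:
        return z - lam      # shrunk toward zero, sign preserved
    if z < -lam:
        return z + lam
    return 0.0              # inside the threshold: exactly zero

print(soft_threshold(2.0, 0.5))   # 1.5  (shrunk, not zeroed)
print(soft_threshold(0.3, 0.5))   # 0.0  (set exactly to zero)
print(soft_threshold(-2.0, 0.5))  # -1.5
```

This mirrors the geometric picture: for a given $\lambda$, coefficients whose unpenalized values are large survive (shrunk), while small ones land on the vertex at zero.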
34,874
Graphical interpretation of LASSO
The role of the diamond is pretty clear, I guess: in brief, it is the region where the (two in this picture) coefficients are such that the sum of their absolute values does not exceed the "budget" $s$. The ellipses denote regions where the sum of squared residuals takes identical values. The OLS estimate $\hat\beta$ is, by its very definition, the point where that sum is smallest. Hence, the further we move away from that point, the larger the sum gets. We now seek a point that touches the region where the budget constraint is met while at the same time making the sum of squared residuals no larger than necessary.
Graphical interpretation of LASSO
The role of the diamond is pretty clear, I guess: in brief, it is the region where the (two in this picture) coefficients are such that the sum of their absolute values does not exceed the "budget" $s$.
Graphical interpretation of LASSO The role of the diamond is pretty clear, I guess: in brief, it is the region where the (two in this picture) coefficients are such that the sum of their absolute values does not exceed the "budget" $s$. The ellipses denote regions where the sum of squared residuals takes identical values. The OLS estimate $\hat\beta$ is, by its very definition, the point where that sum is smallest. Hence, the further we move away from that point, the larger the sum gets. We now seek a point that touches the region where the budget constraint is met while at the same time making the sum of squared residuals no larger than necessary.
Graphical interpretation of LASSO The role of the diamond is pretty clear, I guess: in brief, it is the region where the (two in this picture) coefficients are such that the sum of their absolute values does not exceed the "budget" $s$.
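The edge-versus-corner geometry can be probed numerically: search the boundary of the L1 "diamond" $|\theta_1|+|\theta_2| = s$ for the point minimising the squared distance to an unconstrained optimum $\hat\beta$ (identity contours, so the ellipses are circles here). The two $\hat\beta$ values below are made-up illustrations: one lands on a vertex (a coefficient exactly zero), the other on an edge (both coefficients non-zero).

```python
def constrained_argmin(beta_hat, s=1.0, steps=40001):
    """Brute-force minimum of ||theta - beta_hat||^2 over the set |t1| + |t2| = s."""
    best, best_val = None, float("inf")
    for k in range(steps):
        t1 = -s + 2 * s * k / (steps - 1)          # sweep t1 across [-s, s]
        for t2 in (s - abs(t1), -(s - abs(t1))):   # upper and lower edges
            val = (t1 - beta_hat[0]) ** 2 + (t2 - beta_hat[1]) ** 2
            if val < best_val:
                best, best_val = (t1, t2), val
    return best

corner = constrained_argmin((2.0, 0.1))   # expect ~(1, 0): solution at a vertex
edge = constrained_argmin((2.0, 1.5))     # expect ~(0.75, 0.25): solution on an edge
print(corner, edge)
```

When $\hat\beta$ sits nearly "in line" with an axis, the closest feasible point is a corner and one coefficient is exactly zero; when $\hat\beta$ faces the flat side, the tangency is on the edge and both coefficients survive.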
34,875
Why convergence in probability is defined as convergence to R.V.?
That the sequence of rv's $(X_n)$ can converge to another rv $X$ is clear from the example$$X_n=X+\upsilon_n/n\qquad \upsilon_n\stackrel{\text{i.i.d.}}{\sim}\cal{N}(0,1)$$ where $X$ is an arbitrary random variable. Indeed, then$$|X_n-X|=|\upsilon_n|/n$$which converges to zero in probability:$$\mathbb{P}(|\upsilon_n|/n>\epsilon)=2(1-\Phi(n\epsilon))\stackrel{n\to\infty}{\longrightarrow}0$$ Now, if the $X_n$ are independent, then they cannot converge in probability to a random variable $X$ unless $X$ is a.s. constant.
Why convergence in probability is defined as convergence to R.V.?
That the sequence of rv's $(X_n)$ can converge to another rv $X$ is clear from the example$$X_n=X+\upsilon_n/n\qquad \upsilon_n\stackrel{\text{i.i.d.}}{\sim}\cal{N}(0,1)$$ where $X$ is an arbitrary ra
Why convergence in probability is defined as convergence to R.V.? That the sequence of rv's $(X_n)$ can converge to another rv $X$ is clear from the example$$X_n=X+\upsilon_n/n\qquad \upsilon_n\stackrel{\text{i.i.d.}}{\sim}\cal{N}(0,1)$$ where $X$ is an arbitrary random variable. Indeed, then$$|X_n-X|=|\upsilon_n|/n$$which converges to zero in probability:$$\mathbb{P}(|\upsilon_n|/n>\epsilon)=2(1-\Phi(n\epsilon))\stackrel{n\to\infty}{\longrightarrow}0$$ Now, if the $X_n$ are independent, then they cannot converge in probability to a random variable $X$ unless $X$ is a.s. constant.
Why convergence in probability is defined as convergence to R.V.? That the sequence of rv's $(X_n)$ can converge to another rv $X$ is clear from the example$$X_n=X+\upsilon_n/n\qquad \upsilon_n\stackrel{\text{i.i.d.}}{\sim}\cal{N}(0,1)$$ where $X$ is an arbitrary ra
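The displayed probability $\mathbb{P}(|\upsilon|/n>\epsilon)=2(1-\Phi(n\epsilon))$ can be checked by simulation; the $n$ values, $\epsilon=0.5$, and the simulation size below are arbitrary choices for illustration:

```python
import random

def tail_prob(n, eps=0.5, sims=10000):
    """Monte Carlo estimate of P(|v|/n > eps) for v ~ N(0, 1)."""
    random.seed(0)   # same draws for every n, so estimates are directly comparable
    return sum(abs(random.gauss(0, 1)) / n > eps for _ in range(sims)) / sims

probs = [tail_prob(n) for n in (1, 4, 100)]
print(probs)         # decreasing toward 0, matching 2*(1 - Phi(n * eps))
```

For $n=1$ the theoretical value is $2(1-\Phi(0.5))\approx 0.617$, for $n=4$ it is $2(1-\Phi(2))\approx 0.046$, and for $n=100$ it is numerically zero, which is the convergence in probability of $X_n$ to $X$.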
34,876
Why convergence in probability is defined as convergence to R.V.?
Convergence in probability to a constant is a special case of the more general result of convergence in probability to a random variable, so it is somewhat natural to allow for the more general case. It is also worth bearing in mind that once you allow convergence to a constant, or even just convergence to zero, that is sufficient to get the essence of convergence to a random variable, even if you don't name it that. To see this, suppose we let $R_n = X_n - X$ denote the series of residuals that occurs from subtracting $X$ from each of the values $X_n$. From the definition of convergence in probability, we have the following equivalence: $$X_n \overset{p}{\rightarrow} X \quad \quad \iff \quad \quad R_n \overset{p}{\rightarrow} 0.$$ Thus, once you define convergence in probability to zero, this also gives a corresponding condition for convergence to a random variable. It then makes sense to refer to that condition as entailing convergence to the random variable.
Why convergence in probability is defined as convergence to R.V.?
Convergence in probability to a constant is a special case of the more general result of convergence in probability to a random variable, so it is somewhat natural to allow for the more general case.
Why convergence in probability is defined as convergence to R.V.? Convergence in probability to a constant is a special case of the more general result of convergence in probability to a random variable, so it is somewhat natural to allow for the more general case. It is also worth bearing in mind that once you allow convergence to a constant, or even just convergence to zero, that is sufficient to get the essence of convergence to a random variable, even if you don't name it that. To see this, suppose we let $R_n = X_n - X$ denote the series of residuals that occurs from subtracting $X$ from each of the values $X_n$. From the definition of convergence in probability, we have the following equivalence: $$X_n \overset{p}{\rightarrow} X \quad \quad \iff \quad \quad R_n \overset{p}{\rightarrow} 0.$$ Thus, once you define convergence in probability to zero, this also gives a corresponding condition for convergence to a random variable. It then makes sense to refer to that condition as entailing convergence to the random variable.
Why convergence in probability is defined as convergence to R.V.? Convergence in probability to a constant is a special case of the more general result of convergence in probability to a random variable, so it is somewhat natural to allow for the more general case.
34,877
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA?
Group comparisons of means based on the general linear model are often said to be generally robust to violations of the homogeneity of variance assumption. There are, however, certain conditions under which this is definitely not the case, and a relatively simple one is a situation where the homogeneity of variance assumption is violated and you have disparities in group sizes. This combination can increase your Type I or Type II error rate, depending on the distribution of disparities in variances and sample sizes across groups. A series of simple simulations of $p$-values will show you how. First, let's look at what a distribution of $p$-values should look like when the null is true, the homogeneity of variance assumption is met, and group sizes are equal. We will simulate standardized scores for 200 observations in each of two groups (x and y), run a parametric $t$-test, and save the resulting $p$-value (and repeat this 10,000 times). We will then plot a histogram of the simulated $p$-values:

nSims <- 10000
h0 <- numeric(nSims)
for(i in 1:nSims){
  x <- rnorm(n = 200, mean = 0, sd = 1)
  y <- rnorm(n = 200, mean = 0, sd = 1)
  z <- t.test(x, y, var.equal = T)
  h0[i] <- z$p.value
}
hist(h0, main="Histogram of p-values [H0 = T, HoV = T, Cell.Eq = T]",
     xlab=("Observed p-value"), breaks=100)

The distribution of $p$-values is relatively uniform, as it should be. But what if we make group y's standard deviation 5 times as large as group x's (i.e., homogeneity of variance is violated)? Still pretty uniform. But when we combine the violated homogeneity of variance assumption with disparities in group size (now decreasing group x's sample size to 20), we run into major problems. The combination of a larger standard deviation in one group and a smaller group size in the other produces a rather dramatic inflation in our Type I error rate. But disparities in both can work the other way too.

If, instead, we specify a population where the null is false (group x's mean is .4 instead of 0), and one group (in this case, group y) has both a larger standard deviation and the larger sample size, then we can actually hurt our power to detect a real effect: So in summary, homogeneity of variance isn't a huge problem when group sizes are relatively equal, but when group sizes are unequal (as they might be in many areas of quasi-experimental research), violating homogeneity of variance can really inflate your Type I or II error rates.
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA?
Group comparisons of means based on the general linear model are often said to be generally robust to violations of the homogeneity of variance assumption. There are, however, certain conditions under
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA? Group comparisons of means based on the general linear model are often said to be generally robust to violations of the homogeneity of variance assumption. There are, however, certain conditions under which this is definitely not the case, and a relatively simple one is a situation where the homogeneity of variance assumption is violated and you have disparities in group sizes. This combination can increase your Type I or Type II error rate, depending on the distribution of disparities in variances and sample sizes across groups. A series of simple simulations of $p$-values will show you how. First, let's look at what a distribution of $p$-values should look like when the null is true, the homogeneity of variance assumption is met, and group sizes are equal. We will simulate standardized scores for 200 observations in each of two groups (x and y), run a parametric $t$-test, and save the resulting $p$-value (and repeat this 10,000 times). We will then plot a histogram of the simulated $p$-values:

nSims <- 10000
h0 <- numeric(nSims)
for(i in 1:nSims){
  x <- rnorm(n = 200, mean = 0, sd = 1)
  y <- rnorm(n = 200, mean = 0, sd = 1)
  z <- t.test(x, y, var.equal = T)
  h0[i] <- z$p.value
}
hist(h0, main="Histogram of p-values [H0 = T, HoV = T, Cell.Eq = T]",
     xlab=("Observed p-value"), breaks=100)

The distribution of $p$-values is relatively uniform, as it should be. But what if we make group y's standard deviation 5 times as large as group x's (i.e., homogeneity of variance is violated)? Still pretty uniform. But when we combine the violated homogeneity of variance assumption with disparities in group size (now decreasing group x's sample size to 20), we run into major problems. The combination of a larger standard deviation in one group and a smaller group size in the other produces a rather dramatic inflation in our Type I error rate. But disparities in both can work the other way too.

If, instead, we specify a population where the null is false (group x's mean is .4 instead of 0), and one group (in this case, group y) has both a larger standard deviation and the larger sample size, then we can actually hurt our power to detect a real effect: So in summary, homogeneity of variance isn't a huge problem when group sizes are relatively equal, but when group sizes are unequal (as they might be in many areas of quasi-experimental research), violating homogeneity of variance can really inflate your Type I or II error rates.
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA? Group comparisons of means based on the general linear model are often said to be generally robust to violations of the homogeneity of variance assumption. There are, however, certain conditions under
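The direction of the distortion depends on which group pairs the small $n$ with the large variance: the pooled-variance $t$-test is liberal when the smaller group has the larger variance, and conservative the other way around. The configuration below ($n=20$ with SD 5 against $n=200$ with SD 1) is my own choice to exhibit the liberal case in dependency-free Python; it is not the exact setup of the answer's R code, and the 1.96 cutoff approximates the large-df critical value:

```python
import math
import random

def pooled_t(x, y):
    """Student's two-sample t statistic with pooled variance."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    sx2 = sum((v - mx) ** 2 for v in x) / (nx - 1)
    sy2 = sum((v - my) ** 2 for v in y) / (ny - 1)
    sp2 = ((nx - 1) * sx2 + (ny - 1) * sy2) / (nx + ny - 2)
    return (mx - my) / math.sqrt(sp2 * (1 / nx + 1 / ny))

random.seed(3)
sims, rejections = 2000, 0
for _ in range(sims):                                  # null is true: equal means
    x = [random.gauss(0, 5) for _ in range(20)]        # small group, large SD
    y = [random.gauss(0, 1) for _ in range(200)]       # large group, small SD
    if abs(pooled_t(x, y)) > 1.96:                     # nominal 5% two-sided test
        rejections += 1
rate = rejections / sims
print(rate)
```

The pooled variance is dominated by the big low-variance group, so the standard error of the mean difference is badly underestimated and the empirical Type I rate lands far above the nominal 5%.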
34,878
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA?
Gregg, do you mean for normal, heteroscedastic data? Your second paragraph seems to suggest so. I added an answer to the original post you reference, where I suggested that if the data are normal but heteroscedastic, using generalized least squares provides the most flexible approach for dealing with the data features you mention. Not accounting explicitly for those features will lead to suboptimal and possibly misleading results, as you noticed in your own practice. How suboptimal or misleading the results may be will ultimately depend on the peculiarities of each data set. A nice way to understand this would be to set up a simulation study where you can vary two factors: the number of groups and the extent to which the variability changes across groups. Then you could track the impact of these factors on the results of the test of differences between any of the means and the results of post-hoc comparisons between pairs of means when you use standard ANOVA (which ignores heteroscedasticity) versus gls (which accounts for heteroscedasticity). Perhaps you could start your simulation exercise with a simple example with just 3 groups, where you keep the variability of the first two groups the same but change the variability of the third group by a factor f where f becomes increasingly large. This would allow you to see if and when that third group begins to dominate the results. (For simplicity, the differences in mean outcome values between each of the three groups could be kept the same, though you could look to see how the magnitude of the common difference plays with the magnitude of the variability in the third group.) I think it would be hard to come up with a general assessment of what exactly might go wrong when heteroscedasticity is ignored, other than warning people that ignoring heteroscedasticity is ill-advised when better methods for dealing with it exist.
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA?
Gregg, do you mean for normal, heteroscedastic data? Your second paragraph seems to suggest so. I added an answer to the original post you reference, where I suggested that if the data are normal but
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA? Gregg, do you mean for normal, heteroscedastic data? Your second paragraph seems to suggest so. I added an answer to the original post you reference, where I suggested that if the data are normal but heteroscedastic, using generalized least squares provides the most flexible approach for dealing with the data features you mention. Not accounting explicitly for those features will lead to suboptimal and possibly misleading results, as you noticed in your own practice. How suboptimal or misleading the results may be will ultimately depend on the peculiarities of each data set. A nice way to understand this would be to set up a simulation study where you can vary two factors: the number of groups and the extent to which the variability changes across groups. Then you could track the impact of these factors on the results of the test of differences between any of the means and the results of post-hoc comparisons between pairs of means when you use standard ANOVA (which ignores heteroscedasticity) versus gls (which accounts for heteroscedasticity). Perhaps you could start your simulation exercise with a simple example with just 3 groups, where you keep the variability of the first two groups the same but change the variability of the third group by a factor f where f becomes increasingly large. This would allow you to see if and when that third group begins to dominate the results. (For simplicity, the differences in mean outcome values between each of the three groups could be kept the same, though you could look to see how the magnitude of the common difference plays with the magnitude of the variability in the third group.) I think it would be hard to come up with a general assessment of what exactly might go wrong when heteroscedasticity is ignored, other than warning people that ignoring heteroscedasticity is ill-advised when better methods for dealing with it exist.
What is the worst that can happen when the homoscedasticity assumption is violated in ANOVA?
Well, for non-normal heteroskedastic data, in the worst case, the results could have no meaning at all. Consider variables drawn from $$\frac{1}{2\pi}\frac{\sigma}{\left[\left(r_1-\mu_1\right)^2+(r_2-\mu_2)^2+\sigma^2\right]^{\frac{3}{2}}},$$ which is what you would get if you were drawing returns from two equity securities. ANOVA would then produce an entirely random result, uncorrelated with reality, and it would have a power of zero regardless of sample size.
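One way to see why: the average of $n$ i.i.d. Cauchy-type variables is again Cauchy with the same scale, so group means never concentrate no matter how large the sample, and mean-comparison procedures have nothing to work with. A quick numerical illustration (my own sketch, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(42)

def iqr_of_sample_means(n, reps=2000):
    """Interquartile range of the sampling distribution of the mean
    of n i.i.d. standard Cauchy variables."""
    means = rng.standard_cauchy((reps, n)).mean(axis=1)
    return np.subtract(*np.percentile(means, [75, 25]))

# For a standard Cauchy the IQR is exactly 2 (quartiles at +/-1), and the
# mean of n standard Cauchy draws is again standard Cauchy -- so the IQR
# does not shrink as n grows, unlike the 1/sqrt(n) behaviour ANOVA relies on.
small_n = iqr_of_sample_means(10)
large_n = iqr_of_sample_means(2000)
print(small_n, large_n)
```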
What is the convolution of a normal distribution with a gamma distribution?
Often, convolving something with itself gives a solution even when the more direct convolution of two different distributions has no obvious answer. To convolve a ND and a GD, I used Pearson III and convolved two Pearson III distributions with themselves after reparameterization of those Pearson III distributions to be ND and GD using Mathematica. $$\text{PDF}[\text{PearsonDistribution}[3,a,b,x,y,z],t]=\begin{array}{cc} & \begin{cases} \dfrac{\sqrt{\frac{a}{z}} e^{-\dfrac{(a t+b)^2}{2 a z}}}{\sqrt{2 \pi }} & y=0\land a z>0 \\ \dfrac{a \ e^{-\dfrac{a \left(t+\dfrac{z}{y}\right)}{y}} \left(\dfrac{a \left(t+\dfrac{z}{y}\right)}{y}\right)^{\dfrac{a z}{y^2}-\dfrac{b}{y}}}{y \Gamma \left(-\dfrac{b}{y}+\dfrac{a z}{y^2}+1\right)} & y^2>0\land a (t y+z)>0 \\ \end{cases} \\ \end{array}$$ Then ND from Pearson III is $$\text{PDF}\left[\text{PearsonDistribution}\left[3,1,-\mu ,x,0,\sigma ^2\right],t\right]=\dfrac{ e^{-\dfrac{(t-\mu )^2}{2 \sigma ^2}}}{\sigma\sqrt{2 \pi }}\,, \\ $$ And GD from Pearson III is $$\text{PDF}\left[\text{PearsonDistribution}\left[3,\beta ,1,x,1,\frac{\alpha }{\beta }\right],t-\frac{\alpha }{\beta }\right]=\begin{array}{cc} & \begin{cases} \dfrac{\beta e^{-\beta \ t} (\beta \ t)^{\alpha -1}}{\Gamma (\alpha )} & \beta t>0 \\ 0 & \text{Otherwise} \\ \end{cases} \\ \end{array}\,.$$ Then ND*GD is $$f(s)=\text{Convolve}\left[\text{PDF}\left[\text{PearsonDistribution}\left[3,\beta ,1,x,1,\frac{\alpha }{\beta }\right],t-\frac{\alpha }{\beta }\right],\text{PDF}\left[\text{PearsonDistribution}\left[3,1,-\mu ,x,0,\sigma ^2\right],t\right],t,s\right]=2^{-\frac{\alpha }{2}} \beta ^{\alpha } \sigma ^{\alpha -2} e^{-\frac{(s-\mu )^2}{2 \sigma ^2}} \left(\frac{\sigma \, _1F_1\left(\frac{\alpha }{2};\frac{1}{2};\frac{\left(\beta \sigma ^2-s+\mu \right)^2}{2 \sigma ^2}\right)}{\sqrt{2} \Gamma \left(\frac{\alpha +1}{2}\right)}+\frac{\left(-\beta \sigma ^2-\mu +s\right) \, _1F_1\left(\frac{\alpha +1}{2};\frac{3}{2};\frac{\left(\beta \sigma ^2-s+\mu \right)^2}{2 \sigma 
^2}\right)}{\Gamma \left(\frac{\alpha }{2}\right)}\right)\,.$$ That is, after a lot of simplifying. Note, $_1F_1(a;b;z)$ is the confluent hypergeometric function of the first kind.
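As a sanity check, the hypergeometric expression can be compared against direct numerical integration of the convolution. The following Python sketch is my own verification (not part of the original answer; the parameter values are arbitrary), with the gamma distribution parameterized by shape $\alpha$ and rate $\beta$ as above:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp1f1, gamma as Gamma
from scipy.stats import norm, gamma as gamma_dist

def convolution_closed_form(s, alpha, beta, sigma, mu):
    """Density of N(mu, sigma^2) + Gamma(shape=alpha, rate=beta) at s,
    per the hypergeometric expression derived above."""
    z = (beta * sigma**2 - s + mu)**2 / (2.0 * sigma**2)
    term1 = sigma * hyp1f1(alpha / 2.0, 0.5, z) / (np.sqrt(2.0) * Gamma((alpha + 1) / 2.0))
    term2 = (s - mu - beta * sigma**2) * hyp1f1((alpha + 1) / 2.0, 1.5, z) / Gamma(alpha / 2.0)
    return (2.0**(-alpha / 2.0) * beta**alpha * sigma**(alpha - 2.0)
            * np.exp(-(s - mu)**2 / (2.0 * sigma**2)) * (term1 + term2))

def convolution_numeric(s, alpha, beta, sigma, mu):
    """Brute-force evaluation of the convolution integral for comparison."""
    integrand = lambda t: gamma_dist.pdf(t, a=alpha, scale=1.0 / beta) * norm.pdf(s - t, loc=mu, scale=sigma)
    val, _ = quad(integrand, 0.0, np.inf)
    return val

# Spot-check agreement, including a non-integral shape parameter.
alpha, beta, sigma, mu = 2.5, 1.3, 0.8, 0.4
for s in (-0.5, 0.7, 1.9, 3.2):
    print(s, convolution_closed_form(s, alpha, beta, sigma, mu),
          convolution_numeric(s, alpha, beta, sigma, mu))
```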
What is the convolution of a normal distribution with a gamma distribution?
For integral shape parameters there is a relatively simple form. Without any loss of generality, we may suppose the Normal distribution is a standard Normal, because any shifting of it translates directly to a shift of the convolution; and we may incorporate all scale factors in the Gamma distribution (and then effect a rescaling at the end if needed). This means we may limit the analysis to the standard Normal distribution $\Phi$ with density $$\phi(z) = \frac{1}{\sqrt{2\pi}}\exp\left(-z^2/2\right)$$ and Gamma densities that are zero on negative values and otherwise are given by $$\gamma_{a,\sigma}(t) = \frac{1}{\Gamma(a)\sigma^a} t^{a-1}\exp\left(-t/\sigma\right).$$ The convolution can be computed as $$f(z; a, \sigma) = \left(\phi \star \gamma_{a,\sigma}\right)(z) = \frac{1}{\sqrt{2\pi}\Gamma(a)\sigma^a}\int_0^\infty t^{a-1}\exp\left(-t/\sigma\right)\exp\left(-(z-t)^2/2\right)\,\mathrm{d}t.$$ The instances with integral $a$ promise to yield simple expressions. Let's begin at $a=1,$ for which the calculation yields to the usual expedient of completing the square: $$\begin{aligned} f(z;1,\sigma) &= \frac{1}{\sqrt{2\pi}\sigma} \int_0^\infty \exp\left(-t/\sigma - (z-t)^2/2\right)\,\mathrm{d}t\\ &= \frac{\exp\left(-\left(z-\frac{1}{2\sigma}\right)/\sigma\right)}{\sqrt{2\pi}\sigma}\int_0^\infty \exp\left(-\left(t - (z-1/\sigma)\right)^2/2\right)\,\mathrm{d}t\\ &= \frac{\exp\left(-\left(z-\frac{1}{2\sigma}\right)/\sigma\right)}{\sqrt{2\pi}\sigma}\int_{-\infty}^{z-1/\sigma} \exp\left(-t^2/2\right)\,\mathrm{d}t\\ &=\frac{1}{\sigma}\exp\left(-\left(z-\frac{1}{2\sigma}\right)/\sigma\right)\Phi\left(z-1/\sigma\right). \end{aligned}$$ That, at least, can be considered a "closed form." We can exploit it to find other convolution densities.
Returning to the convolution integral, observe $$\begin{aligned} f^\prime(z;a,\sigma) &= \frac{1}{\sqrt{2\pi}\Gamma(a)\sigma^a}\int_0^\infty \frac{\mathrm d}{\mathrm{d}z} t^{a-1}\exp\left(-t/\sigma\right)\exp\left(-(z-t)^2/2\right)\,\mathrm{d}t\\ &= \frac{1}{\sqrt{2\pi}\Gamma(a)\sigma^a}\int_0^\infty t^{a-1}\exp\left(-t/\sigma\right)(t-z)\exp\left(-(z-t)^2/2\right)\,\mathrm{d}t\\ &= \frac{1}{\sqrt{2\pi}\Gamma(a)\sigma^a}\left[\int_0^\infty t^{a}e^{-t/\sigma}e^{-(z-t)^2/2}\,\mathrm{d}t - z\int_0^\infty t^{a-1}e^{-t/\sigma}e^{-(z-t)^2/2}\,\mathrm{d}t\right]\\ &= \frac{\sigma\Gamma(a+1)}{\Gamma(a)} f(z;a+1,\sigma) - z f(z;a,\sigma)\\ &= \sigma a f(z;a+1,\sigma) - z f(z;a,\sigma). \end{aligned}$$ Solving for $f(z;a+1,\sigma)$ and writing $D=\mathrm{d}/\mathrm{d}z$ for the differential operator produces the fundamental recurrence $$f(z;a+1,\sigma) = \frac{f^\prime(z;a,\sigma) + z f(z;a,\sigma)}{\sigma a} = \frac{D+z}{\sigma a} f(z;a,\sigma).$$ Repeating this $\lfloor a\rfloor - 1$ times gives $$\begin{aligned} f(z;a,\sigma) &= \frac{D+z}{\sigma (a-1)}\frac{D+z}{\sigma (a-2)}\cdots\frac{D+z}{\sigma(a-\lfloor a\rfloor+1)} f(z;a-\lfloor a\rfloor+1,\sigma) \\ &= \frac{\Gamma(a-\lfloor a \rfloor + 1)}{\sigma^{\lfloor a\rfloor-1}\Gamma(a)} (D+z)^{\lfloor a\rfloor-1} f(z;a-\lfloor a\rfloor+1,\sigma). \end{aligned}$$ For integral $a$ it simplifies further to $$f(z;a,\sigma) = \frac{1}{\sigma^{a-1}\Gamma(a)} (D+z)^{a-1} f(z;1,\sigma).$$ Just to see the patterns, let $\sigma=1,$ drop the "$z-1$" arguments from $\Phi$ and $\phi$ (reducing the visual clutter in the formulas) and use $\phi^\prime(z-1) = -(z-1) \phi(z-1)$ to compute $$\begin{aligned} f(z;1,1) &= e^{1/2-z} \Phi\\ f^\prime(z;1,1) &= e^{1/2-z}\left(-\Phi + \phi\right)\\ f(z;2,1) &= \frac{1}{1}\left(f^\prime(z;1,1) + z f(z;1,1)\right) = e^{1/2-z}\left((z-1)\Phi + \phi\right)\\ f^\prime(z;2,1) &= e^{1/2-z}\left(-(z-1)\Phi-\phi + \Phi + (z-1)\phi - (z-1)\phi\right)\\ &= e^{1/2-z}\left((2-z)\Phi - \phi\right)\\ f(z;3,1) &= \frac{1}{2}\left(f^\prime(z;2,1) + z f(z;2,1)\right) \\ &= \frac{1}{2}e^{1/2-z}\left((z^2-2z+2)\Phi + (z-1)\phi\right)\\ &\ \ \vdots \end{aligned}$$ Generally, the form of these densities is $$f(z;a,\sigma) = e^{(1/(2\sigma)-z)/\sigma}\left[P(z;a,\sigma)\Phi(z-1/\sigma) + Q(z;a,\sigma)\phi(z-1/\sigma)\right]$$ where $P$ and $Q$ are polynomials in $z.$ The basic recurrence (for $\sigma=1$) translates to $$\begin{aligned} P(z;a+1) &= \left[(z-1)P(z;a) + P^\prime(z;a)\right]/a\\ Q(z;a+1) &= \left[P(z;a) + Q^\prime(z;a)\right]/a. \end{aligned}$$ It is now immediate that for positive integral $a,$ the density $f(z;a,\sigma)$ is $\exp\left((1/(2\sigma) - z)/\sigma\right)$ times a polynomial linear combination of $\Phi(z-1/\sigma)$ and $\phi(z-1/\sigma).$ The degree of $P$ is $a-1$ and the degree of $Q$ is $a-2$ (when $a\ge 2$). Here are histograms of simulated values for $a=1,3,10$ overplotted with the graphs of $f(z;a,1)$ to demonstrate the agreement. By comparing these to all the Wikipedia listings of continuous distributions supported on the real line we can establish that they are not among those lists. Most of this is obvious, with the exception perhaps of the Pearson system of distributions.
But these can be ruled out since (by definition) the logarithmic derivative of a Pearson density is a rational function of its argument with at most one zero and two (complex) poles. The recurrence relation for $f(z;a,\sigma)$ readily demonstrates that no such relationship holds.
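These closed forms are easy to verify numerically. The sketch below is my own check (not part of the answer): it compares the $a=1,2,3$ expressions, with $\sigma=1$, against brute-force evaluation of the convolution integral.

```python
import numpy as np
from scipy.integrate import quad
from scipy.stats import norm, gamma

def f_closed(z, a):
    """Closed forms for the N(0,1) + Gamma(a, scale=1) density derived above."""
    Phi, phi = norm.cdf(z - 1), norm.pdf(z - 1)
    pre = np.exp(0.5 - z)
    if a == 1:
        return pre * Phi
    if a == 2:
        return pre * ((z - 1) * Phi + phi)
    if a == 3:
        return 0.5 * pre * ((z**2 - 2 * z + 2) * Phi + (z - 1) * phi)
    raise ValueError("only a = 1, 2, 3 implemented")

def f_numeric(z, a):
    """Direct numerical evaluation of the convolution integral."""
    val, _ = quad(lambda t: gamma.pdf(t, a) * norm.pdf(z - t), 0.0, np.inf)
    return val

for a in (1, 2, 3):
    for z in (-1.0, 0.5, 2.0, 4.0):
        print(a, z, f_closed(z, a), f_numeric(z, a))
```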
Derivation of confidence and prediction intervals of predictions for probit and logit (and GLMs in general)
In a GLM, the prediction is a non-linear function $f$ of the product of the covariates $X$ with the estimated coefficient vector $\hat{\beta}$: $$\hat{y} = f(X\hat{\beta})$$ The finite-sample distribution of $\hat{\beta}$ is generally unknown, but as long as $\hat{\beta}$ is a maximum likelihood estimate, it has the asymptotic normal distribution $\mathcal{N}(\beta, -H^{-1})$, where $H$ is the Hessian matrix of the log-likelihood function at its maximum. The p-values of $\beta$ that are shown in the output of a regression are nearly always based on these asymptotics. But if you feel your sample is too small for asymptotics, use a numerically estimated distribution (e.g. bootstrapping). When you use the asymptotic normal distribution of $\hat{\beta}$ (and therefore of $X\hat{\beta}$), the distribution of $\hat{y}$ is still non-normal due to the non-linear $f$. You can ignore this - get normal confidence bounds $(z_{lower}, z_{upper})$ for $X\beta$, and plug them into $f$, getting bounds for $y$ as $(y_{lower}, y_{upper}) = (f(z_{lower}), f(z_{upper}))$. Another strategy (called the delta method) is to take a Taylor expansion of $f$ around $X\hat{\beta}$ - it will be linear in $\hat{\beta}$. Therefore, you can approximate the distribution of $f(X\hat{\beta})$ as $$f(X\hat{\beta}) \sim \mathcal{N}\left(f(X\beta), -(f^{'}(X\beta))^2 X H^{-1} X^T \right)$$ Then the asymptotic 95% confidence interval for $f(X\beta)$ would look like $$ f(X\hat{\beta}) \pm 1.96 \sqrt{-(f^{'}(X\hat{\beta}))^2 X H(\hat{\beta})^{-1} X^T}$$ Now you need only find expressions for the Hessian matrices of particular models, like logistic regression in this question. And this question presents a practical comparison of bootstrap, transformed normal bounds, and the delta method for logistic regression.
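To make the recipe concrete, here is a self-contained Python sketch (my own illustration; the data and the Newton fitter are stand-ins, not part of the answer) of the delta-method interval for logistic regression. For the logistic link, $f$ is the sigmoid, $f' = f(1-f)$, and $-H = X^T W X$ with $W = \mathrm{diag}(\hat p_i(1-\hat p_i))$:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def fit_logistic(X, y, iters=25):
    """Plain Newton-Raphson MLE for logistic regression."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = sigmoid(X @ beta)
        H = X.T @ (X * (p * (1 - p))[:, None])   # -Hessian of the log-likelihood
        beta = beta + np.linalg.solve(H, X.T @ (y - p))
    p = sigmoid(X @ beta)
    cov = np.linalg.inv(X.T @ (X * (p * (1 - p))[:, None]))  # approximates -H^{-1}
    return beta, cov

def delta_ci(x, beta, cov, zcrit=1.96):
    """95% delta-method CI for the predicted probability at covariate row x."""
    p = sigmoid(x @ beta)
    fprime = p * (1 - p)                          # derivative of the sigmoid
    se = np.sqrt(fprime**2 * (x @ cov @ x))
    return p - zcrit * se, p, p + zcrit * se

# Toy data (assumed, for illustration only)
rng = np.random.default_rng(1)
xraw = np.linspace(-2, 2, 200)
X = np.column_stack([np.ones_like(xraw), xraw])
y = (rng.random(200) < sigmoid(-0.3 + 1.2 * xraw)).astype(float)

beta_hat, cov_hat = fit_logistic(X, y)
lo, p_hat, hi = delta_ci(np.array([1.0, 0.5]), beta_hat, cov_hat)
print(lo, p_hat, hi)
```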
Derivation of confidence and prediction intervals of predictions for probit and logit (and GLMs in general)
When all else fails, you can always construct bootstrapped CIs for any statistic. Here's a simple algorithm:

1. Draw $N$ samples with replacement from $X$ (where $N$ is the number of rows in $X$). You'll find that about 2/3rds of your observations will appear in such a sample.
2. Use these samples to fit a model.
3. Use this model to generate predictions for the observations in $X$ that weren't used in training.
4. Repeat this process 100 or so times (the more the merrier) to accumulate a collection of predictions for each observation. This collection is an approximation to the distribution of your predictions. Call these your "bootstrapped predictions".
5. Construct confidence intervals by taking quantiles on the predictions. E.g. for a particular observation, calculate the .025 and .975 quantiles for a 95% confidence interval.
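The algorithm can be sketched in a few lines of numpy (illustrative; ordinary least squares stands in for the model, but any fit/predict pair works the same way):

```python
import numpy as np

def bootstrap_prediction_cis(X, y, n_boot=200, level=0.95, seed=0):
    """Percentile CIs from out-of-bag bootstrap predictions (linear model)."""
    rng = np.random.default_rng(seed)
    n = len(y)
    oob_preds = [[] for _ in range(n)]
    for _ in range(n_boot):
        idx = rng.integers(0, n, size=n)          # sample rows with replacement
        oob = np.setdiff1d(np.arange(n), idx)     # ~1/3 of rows are left out
        beta, *_ = np.linalg.lstsq(X[idx], y[idx], rcond=None)
        for i, pred in zip(oob, X[oob] @ beta):
            oob_preds[i].append(pred)
    q = (1 - level) / 2
    return np.array([np.quantile(p, [q, 1 - q]) for p in oob_preds])

# Toy data (assumed, for illustration)
rng = np.random.default_rng(3)
x = rng.normal(size=100)
X = np.column_stack([np.ones(100), x])
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=100)

cis = bootstrap_prediction_cis(X, y)
print(cis.shape)   # one (lower, upper) pair per observation
```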
Interpreting R coxph() cox.zph()
cox.zph() checks the proportional hazards assumption by testing the Schoenfeld residuals against transformed time. Very small p-values indicate that there are time-dependent coefficients which you need to take care of. Note that the proportionality assumption is not about linearity - the Cox PH model is semi-parametric and makes no assumption about the form of the baseline hazard. The proportional hazards assumption is that the hazard ratio between individuals is constant in time, and this is what cox.zph() tests. If a covariate breaks the assumption, it might need fixing because its coefficient is time-dependent. To solve this you can either interact the coefficient with explicit time, or use strata based on the plotted residuals. For a detailed guide on doing this, see my answer here: Extended Cox model and cox.zph. Without doing something about it, the violation might invalidate the results, in a similar manner to how breaking linear regression assumptions might.

Edit: the effect of holding a diploma on the length of job search vs no diploma. Same scenario: coxph() and cox.zph() significant. Can I say that holding a diploma has an effect on the length of job search?

In all likelihood yes, you could say that. However, given the failed proportionality test, you cannot trust the coefficient of having a diploma (let's call this $diploma$). The Cox model assumes that the effect $diploma$ has on the time to find a job (let's call this $t_{job}$) is constant in time. That means that if $diploma$ increases the % of finding a job by 35%, this increase holds regardless of time. In the case that $diploma$ fails the proportionality assumption, as when the cox.zph() test for it is significant, adjustments need to be made. If $diploma$'s coefficient changes linearly with time (i.e., the more time it takes to find a job, the more the relative benefit of having a diploma declines), then you need to add an interaction: $diploma \times t_{job}$. In this case, the value of $diploma$ will be the initial difference (at $t_0$) between having a diploma and not having one, and the interaction would mean the decrease/increase in that initial value with every passing unit of time (hours/days etc., however you count time). Do this and check cox.zph() again. If it is non-significant, you can probably leave it at that. A linearly changing coefficient often makes more theoretical sense than having the coefficient change at specific time points.
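As an illustration of the "interact with explicit time" fix: in R, the survival package's coxph() accepts a tt argument for exactly this purpose. This is a sketch only; the data frame and variable names (jobs, t_job, found_job, diploma) are hypothetical.

```r
library(survival)

# Hypothetical data: time-to-job, event indicator, diploma covariate.
fit <- coxph(Surv(t_job, found_job) ~ diploma + tt(diploma),
             data = jobs,
             tt = function(x, t, ...) x * t)  # linear diploma-by-time interaction

# 'diploma'     = log hazard ratio at t = 0
# 'tt(diploma)' = change in the log hazard ratio per unit of time
summary(fit)
```

If the tt() coefficient is clearly non-zero, that is direct evidence that the effect of diploma drifts with time, which is what the significant cox.zph() test was flagging.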
How to correctly use validation and test sets for Neural Network training?
The bottom line is: As soon as you use a portion of the data to choose which model performs better, you are already biasing your model towards that data.1

Machine learning in general

In general machine learning scenarios you would use cross-validation to find the optimal combination of your hyperparameters, then fix them and train on the whole training set. In the end, you would evaluate on the test set only, to get a realistic idea about the model's performance on new, unseen data. If you then trained different models and selected the one which performs better on the test set, you would already be using the test set as part of your model selection loop, so you would need yet another, independent test set to evaluate the final performance.

Neural networks

Neural networks are a bit specific in the sense that their training is usually very long, thus cross-validation is not used very often (if training takes 1 day, then doing 10-fold cross-validation already takes over a week on a single machine). Moreover, one of the important hyperparameters is the number of training epochs. The optimal length of the training varies with different initializations and different training sets, so fixing the number of epochs to a single value and then training on all training data (training+validation) for this fixed number is not a very reliable approach. Instead, as you mentioned, some form of early stopping is used: potentially, the model is trained for a long time, saving "snapshots" periodically, and eventually the "snapshot" with the best performance on some validation set is picked. To enable this, you have to always keep some portion of the validation data aside2. Therefore, you will never train the neural net on all of the samples. Finally, there are plenty of other hyperparameters, such as the learning rate, weight decay, dropout ratios, but also the network architecture itself (depth, number of units, size of conv. kernels, etc.).
You could potentially use the same validation set which you use for early stopping to tune these, but then again, you are overfitting to this set by using it for early stopping, so it does give you a biased estimate. Ideal would be, however, using yet another, separate validation set. Once you fix all the remaining hyperparameters, you could merge this second validation set into your final training set. To wrap it up: Split all your data into training + validation 1 + validation 2 + testing Train network on training, use validation 1 for early stopping Evaluate on validation 2, change hyperparameters, repeat 2. Select the best hyperparameter combination from 3., train network on training + validation 2, use validation 1 for early stopping Evaluate on testing. This is your final (real) model performance. 1 This is exactly the reason why Kaggle challenges have 2 test sets: a public and private one. You can use the public test set to check the performance of your model, but eventually it is the performance on the private test set that matters, and if you overfit to the public test set, you lose. 2 Amari et al. (1997) in their article Asymptotic Statistical Theory of Overtraining and Cross-Validation recommend setting the ratio of samples used for early stopping to $1/\sqrt{2N}$, where $N$ is the size of the training set.
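As a sketch of step 1, here is one way to carve a dataset into the four parts. The `four_way_split` helper and the 60/10/10/20 fractions are my own illustrative choices, not something from the answer:

```python
import random

def four_way_split(data, fracs=(0.6, 0.1, 0.1, 0.2), seed=0):
    """Shuffle indices once, then carve out training, validation 1
    (early stopping), validation 2 (hyperparameter tuning) and testing."""
    idx = list(range(len(data)))
    random.Random(seed).shuffle(idx)
    n = len(data)
    c1 = int(fracs[0] * n)
    c2 = c1 + int(fracs[1] * n)
    c3 = c2 + int(fracs[2] * n)
    parts = (idx[:c1], idx[c1:c2], idx[c2:c3], idx[c3:])
    return tuple([data[i] for i in part] for part in parts)

train, val1, val2, test = four_way_split(list(range(1000)))
print(len(train), len(val1), len(val2), len(test))  # 600 100 100 200
```

After step 4 fixes the hyperparameters, you would retrain on `train + val2` (still holding out `val1` for early stopping) before the final evaluation on `test`.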
What is the difference between a pooled OLS regression model and a fixed effect model?
There is an important difference. If there is unobserved heterogeneity (i.e. some unobserved factor that affects the dependent variable) and it is correlated with some observed regressor, then POLS is inconsistent, whereas FE is consistent. If there is no unobserved heterogeneity (unlikely), or it is uncorrelated with all regressors, then both POLS and FE are consistent (albeit not efficient).

Assume a simple model: $$y_{it} = x_{it}\beta + w_i\gamma + (\eta_i + \epsilon_{it})$$ where $x_{it}$ is a vector of time-varying factors, $w_i$ is a vector of time-invariant factors, $\eta_i$ is the individual effect (or unobserved, time-invariant heterogeneity), and $\epsilon_{it}$ is an idiosyncratic error (i.e. it is unique to the individual-period). For simplicity, assume exogeneity of the time-varying factors, that is: $$ E[x_{it} \cdot \epsilon_{it}] = 0 $$ (if not, we need to think about instruments, just as in the non-panel-data world).

Pooled OLS (POLS): if $x_{it}$ is uncorrelated with $\eta_i$, POLS is consistent but inefficient (because of serial correlation); use adjusted POLS. If $x_{it}$ is correlated with $\eta_i$, POLS is inconsistent.

Fixed Effects (FE) (or Within Groups, WG): estimates a de-meaned model. That is, for each time-varying variable, it subtracts from each observation the individual's average over the period. As the individual effect is constant over time, this method eliminates $\eta_i$! Therefore, there is no need to worry about correlation between $x_{it}$ and $\eta_i$, which we might suspect does exist. Notice that FE is basically POLS on the de-meaned model, and it is consistent even when the unobserved heterogeneity is correlated with a time-varying regressor.

Notice:

- The quality of the FE estimation depends on the extent of variation of the regressors over time, as the estimation uses de-meaned data, i.e. the time differences.
- All time-invariant variables are eliminated. For example, FE is less successful when we want to estimate the effect of schooling on earnings while controlling for unobserved ability (assuming both are constant over time).

In consequence, FE is imprecise if there is only limited time-series ('within') variation.
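The consistency difference is easy to see in a small simulation. The sketch below (plain NumPy, with made-up data-generating numbers) uses a single regressor that is deliberately correlated with the individual effect; POLS picks up the bias while the de-meaned (FE) regression does not:

```python
import numpy as np

rng = np.random.default_rng(0)
N, T, beta = 500, 5, 1.0

eta = rng.normal(size=N)                    # unobserved individual effect
x = eta[:, None] + rng.normal(size=(N, T))  # regressor correlated with eta
y = beta * x + eta[:, None] + rng.normal(size=(N, T))

# Pooled OLS ignores eta; since corr(x, eta) != 0 it is inconsistent.
# (The no-intercept formula is fine here because x and y have mean ~0.)
b_pols = (x * y).sum() / (x * x).sum()

# Fixed effects: de-mean each individual's series, then run pooled OLS.
xd = x - x.mean(axis=1, keepdims=True)
yd = y - y.mean(axis=1, keepdims=True)
b_fe = (xd * yd).sum() / (xd * xd).sum()

print(f"POLS: {b_pols:.2f}  FE: {b_fe:.2f}")  # POLS biased upward, FE close to 1
```

With this setup the POLS probability limit is $1 + \operatorname{cov}(x,\eta)/\operatorname{var}(x) = 1.5$, so the gap is visible even in one draw.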
MCMC Bayesian approach - centering and standardizing
Mean centering or standardizing is done only to improve the efficiency of MCMC sampling (i.e., to reduce autocorrelation in the chains). In principle it is not necessary to mean center (or standardize), but then you'd have to wait around a lot longer for the chains to produce a reasonable effective sample size. (There is no guarantee that mean centering or standardizing will help in all applications, but it tends to help.) The mean centering or standardizing does not change the parameter estimates. You do, however, have to transform back to the original scale. Details of this are covered at length in DBDA2E:

- linear regression: Section 17.2.1.1 (p. 485+)
- quadratic trend: Section 17.4 (p. 495)
- multiple linear regression: Section 18.1.2 (p. 516)
- logistic regression: Section 21.1.1 (pp. 624-625)
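For the simple-regression case, the back-transformation can be checked numerically. In this sketch an ordinary least-squares fit stands in for the posterior means an MCMC sampler would return, and all numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(50.0, 10.0, size=200)          # predictor far from zero
y = 2.0 + 0.5 * x + rng.normal(size=200)

# Standardize both variables (what you would hand to the sampler).
mx, sx, my, sy = x.mean(), x.std(), y.mean(), y.std()
zx, zy = (x - mx) / sx, (y - my) / sy

b_z, a_z = np.polyfit(zx, zy, 1)              # fit on the standardized scale

# Transform back to the original scale, from
#   y = my + sy * (a_z + b_z * (x - mx) / sx)
b = b_z * sy / sx
a = my + sy * (a_z - b_z * mx / sx)

b_ref, a_ref = np.polyfit(x, y, 1)            # direct fit on the original scale
print(np.allclose([a, b], [a_ref, b_ref]))    # True
```

The same algebra applies to posterior draws: transform each draw of $(a_z, b_z)$ back, then summarize on the original scale.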
Simple, multiple, univariate, bivariate, multivariate - terminology
As for Question 1, you are correct with what you said. As for Question 2, multivariate stands for an analysis involving more than one response variable. To my knowledge there is no differentiation in terminology with respect to the predictor variables. To be consistent one could maybe say, though I am not sure, "simple multivariate regression" when multiple responses and one predictor variable are present. As for Question 3, I'd say you are right again. As for Question 4, the term bivariate refers to a situation where there are two continuous variables in total, i.e. an analysis that can be visualized in a 2d scatter plot (simple linear regression and correlation, for example). So what does univariate refer to? I think (and I might be wrong) that it is the case when you have one response and one or more categorical predictor(s). So, for example, you measure the heights of trees coming from the same parent tree, or the weights of chickens fed different feeds. This type of analysis would be handled with a t-test or analysis of variance. The difference between univariate and bivariate can be seen when you visualize the data: if you plot something as a bar graph (or dot plot), it is univariate; if you plot it on a 2d scatter plot, it is bivariate. I might be wrong here, but I am sure that if that's the case someone will comment!
Simple, multiple, univariate, bivariate, multivariate - terminology
Let $y$ be a predicted variable and let $x$ be a predictor variable.

- One $y$ and one $x$ = simple regression
- One $y$ and many $x$ = multiple regression
- Many $y$ and one $x$ = multivariate simple regression
- Many $y$ and many $x$ = multivariate multiple regression

In practice, because these four cases can all be, and usually are, handled within the same framework (e.g., general linear modeling), the differences between them are purely terminological and not statistical. However, using these terms consistently can enhance communication.

References:

- David A. Freedman (2009). Statistical Models: Theory and Practice. Cambridge University Press.
- Rencher, Alvin C.; Christensen, William F. (2012). "Chapter 10, Multivariate regression". Methods of Multivariate Analysis, Wiley Series in Probability and Statistics (3rd ed.). John Wiley & Sons, p. 19.
Probabilistic vs. other approaches to machine learning
The question may be too broad to answer, and it is hard to guess another person's perspective, but I think the question is interesting and I would like to try to answer it. The term "machine learning" can have many definitions. I believe the popular ones are:

- Convex optimization (there are tons of papers at NIPS on this topic)
- "Statistics minus any checking of models and assumptions", per Brian D. Ripley

From the optimization perspective, the ultimate goal is to minimize the "empirical loss" and win on the test data set, without placing much emphasis on a "statistical model" of the data. Big black-box discriminative models are perfect examples, such as gradient boosting, random forests, and neural networks. These types of work became popular because the way we collect and process data has changed: we can behave as if we have infinite data and will never over-fit (for example, the number of images on the Internet), while any computational model we can afford would under-fit such complicated data. The goal is then to find effective ways to build faster, more complex models (for example, using GPUs for deep learning).

On the other hand, from the statistical (probabilistic) point of view, we may place more emphasis on generative models, for example mixtures of Gaussians, Bayesian networks, etc. The book by Murphy, Machine Learning: A Probabilistic Perspective, may give you a better idea of this branch.
Probabilistic vs. other approaches to machine learning
The term "probabilistic approach" means that the inference and reasoning taught in your class will be rooted in the mature field of probability theory. That term is often (but not always) synonymous with "Bayesian" approaches, so if you have had any exposure to Bayesian inference you should have no problems picking up on the probabilistic approach. I don't have enough experience to say what other approaches to machine learning exist, but I can point you towards a couple of great references for the probabilistic paradigm, one of which is a classic and the other will soon be, I think:

- Jaynes, E.T. (2003). Probability Theory: The Logic of Science. Cambridge University Press, New York.
- Murphy, K. (2012). Machine Learning: A Probabilistic Perspective. MIT Press, Cambridge.
Can I replace NAs based on response variable?
In short, you should look at multiple imputation (= replacement) techniques, first put forward by Rubin in 1987 [1]. In more detail: replacing missing values with a single value assumes certainty about the replaced value and might ignore any selective loss of information (and therefore introduce bias!). Furthermore, you should try to think of the way your data became missing. In general there are three 'mechanisms' explaining missingness:

- Missing completely at random (MCAR): roughly, the missingness is not related to any known or unknown properties of the unit/individual which was supposed to be measured.
- Missing at random (MAR): the missingness is related to known (observed) properties of the unit/individual which was supposed to be measured.
- Missing not at random (MNAR): the missingness is related to unknown (unobserved) properties of the unit/individual which was supposed to be measured.

These situations (MCAR, MAR, MNAR) are theoretical to the extent that they often occur simultaneously within datasets, and even per missing value. There is an abundance of literature showing how different strategies for handling missing data pan out in different situations [1-5]; make sure to check whichever is appropriate for your study. In general (and this is generalizing a lot, sometimes based on opinions), it is preferable to use multiple imputation techniques. These techniques estimate the missing values from the known parts of the data multiple times, in order to create multiple completed imputation datasets. The intended analysis is then performed in each completed imputation dataset and pooled according to predefined rules that take into account the uncertainty which arises when replacing missing values with estimates. Finally, this pooled analysis can be interpreted as you would an analysis of a complete-case database.

I've always found Stef van Buuren's MICE package in R very good for performing these techniques, especially because he provides excellent background on both the biases introduced by missing data and the handling of the mice function in R. Do note that there are more ways to implement multiple imputation techniques (see, for example, Amelia, which is based on expectation maximization).

References:

1. Rubin DB. Multiple Imputation for Nonresponse in Surveys. New York: Wiley; 1987.
2. Donders AR, van der Heijden GJ, Stijnen T, et al. Review: a gentle introduction to imputation of missing values. J Clin Epidemiol 2006;59(10):1087-1091.
3. Li P, Stuart EA, Allison DB. Multiple Imputation: A Flexible Tool for Handling Missing Data. JAMA 2015;314(18):1966-1967.
4. Groenwold RH, Donders AR, Roes KC, et al. Dealing with missing outcome data in randomized trials and observational studies. Am J Epidemiol 2012;175(3):210-217.
5. van Buuren S, Groothuis-Oudshoorn K. mice: Multivariate Imputation by Chained Equations in R. J Stat Softw 2011;45(3):1-67. http://www.stefvanbuuren.nl/mi/MICE.html
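The impute-analyze-pool cycle can be sketched in a few lines of NumPy. The imputation model below is a deliberately crude stand-in for MICE (regress the incomplete variable on the observed data and add residual noise, so the m completed datasets differ); all numbers are made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Complete data, then make ~30% of x missing completely at random (MCAR)
n = 300
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(size=n)
miss = rng.random(n) < 0.3
x_obs = np.where(miss, np.nan, x)

# Crude "proper" imputation model: x regressed on y, plus residual noise
obs = ~miss
slope, intercept = np.polyfit(y[obs], x_obs[obs], 1)
resid_sd = (x_obs[obs] - (intercept + slope * y[obs])).std()

m = 20
estimates, variances = [], []
for _ in range(m):
    x_imp = x_obs.copy()
    x_imp[miss] = intercept + slope * y[miss] + rng.normal(0.0, resid_sd, miss.sum())
    b, a = np.polyfit(x_imp, y, 1)                # analysis model: y on completed x
    res = y - (a + b * x_imp)
    var_b = res.var(ddof=2) / (n * x_imp.var())   # OLS variance of the slope
    estimates.append(b)
    variances.append(var_b)

# Rubin's rules: pool the m analyses into one estimate and one variance
qbar = np.mean(estimates)              # pooled point estimate
W = np.mean(variances)                 # within-imputation variance
B = np.var(estimates, ddof=1)          # between-imputation variance
T = W + (1.0 + 1.0 / m) * B            # total variance
print(round(qbar, 1), T > W)
```

The key point is the last block: the pooled variance T is always larger than the average single-dataset variance W, which is exactly the imputation uncertainty that single-value replacement ignores.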
An electronics company produces devices that work properly 95% of the time
You have to assume the devices in any box are independent. When that is the case, the number of working devices in any box must follow a Binomial distribution. The parameters are $400$ (the number of devices in the box) and $0.95$ (the working rate). Suppose you guarantee that $k$ or more devices per box work. You are saying that at least 95% of all such boxes contain $k$ or more working devices. In the language of random variables and distributions, you are asserting that the chance of a Binomial$(400, 0.95)$ variable equaling or exceeding $k$ is at least $95\%$. The solution is found by computing the $100 - 95 = 5$th percentile of this distribution. The only delicate part is that since this is a discrete distribution, we should take some care not to be off by one in our answer. R tells us the fifth percentile is $k=373$:

qbinom(.05, 400, .95)
373

Let's check by computing the chance of equaling or exceeding this value:

pbinom(373-1, 400, .95, lower.tail=FALSE)
0.9520076

(Somewhat counter-intuitive, for me at least: the lower.tail=FALSE argument of R's pbinom function does not include the value of its argument. Thus, pbinom(k, n, p, lower.tail=FALSE) computes the chance associated with an outcome strictly greater than k.)

As a double-check, let's confirm that we cannot guarantee an even larger value:

pbinom(373, 400, .95, lower.tail=FALSE)
0.9273511

Thus, the threshold of $0.95$ falls between these two successive probabilities. In other words, we have found that in the long run $95.2\%$ of the boxes will contain $k=373$ or more working devices, but only $92.7\%$ of them will contain $374$ or more working devices. Therefore we should not guarantee any more than $373$ if we want $95\%$ or more of the boxes to meet this standard. Incidentally, a Normal distribution turns out to be an excellent approximation for this particular question.
(Rather than display the answer you would get, I will leave it to you to do the calculation, since you requested information only on how to set up the problem.) This plot compares the Binomial distribution function to its approximating Normal probability. The two don't perfectly agree--but near $k=373$ they are very close indeed.
An electronics company produces devices that work properly 95% of the time
You have to assume the devices in any box are independent. When that is the case, the number of working devices in any box must follow a Binomial distribution. The parameters are $400$ (the number o
An electronics company produces devices that work properly 95% of the time You have to assume the devices in any box are independent. When that is the case, the number of working devices in any box must follow a Binomial distribution. The parameters are $400$ (the number of devices in the box) and $.95$ (the working rate). Suppose you guarantee $k$ or more devices per box work. You are saying that at least 95% of all such boxes contain $k$ or more working devices. In the language of random variables and distributions, you are asserting that the chance of a Binomial$(400, 0.95)$ variable equaling or exceeding $k$ is at least $95\%$. The solution is found by computing the $100-95$ = fifth percentile of this distribution. The only delicate part is that since this is a discrete distribution, we should take some care not to be one off in our answer. R tells us the fifth percentile is $k=373$: qbinom(.05, 400, .95) 373 Let's check by computing the chance of equaling or exceeding this value: pbinom(373-1, 400, .95, lower.tail=FALSE) 0.9520076 (Somewhat counter-intuitively, for me at least, is that the lower.tail=FALSE argument of R's pbinom function does not include the value of its argument. Thus, pbinom(k,n,p,lower.tail=FALSE) computes the chance associated with an outcome strictly greater than k.) As a double-check, let's confirm that we cannot guarantee even a larger value: pbinom(373, 400, .95, lower.tail=FALSE) 0.9273511 Thus, the threshold of $0.95$ falls between these two successive probabilities. In other words, we have found that In the long run $95.2\%$ of the boxes will contain $k=373$ or more working devices, but only $92.7\%$ of them will contain $374$ or more working devices. Therefore we should not guarantee any more than $373$ if we want $95\%$ or more of the boxes to meet this standard. Incidentally, a Normal distribution turns out to be an excellent approximation for this particular question. 
(Rather than display the answer you would get, I will leave it to you to do the calculation, since you requested information only on how to set up the problem.) This plot compares the Binomial distribution function to its approximating Normal probability. The two don't perfectly agree--but near $k=373$ they are very close indeed.
An electronics company produces devices that work properly 95% of the time You have to assume the devices in any box are independent. When that is the case, the number of working devices in any box must follow a Binomial distribution. The parameters are $400$ (the number o
34,894
An electronics company produces devices that work properly 95% of the time
"At least" from "at least 95%" means "min". Code: #reproducible set.seed(250048) #how many times to check N_repeats <- 500000 #stage for loop temp <- numeric() #loop for (j in 1:N_repeats){ #draw 400 samples at 95% rate y <- rbinom(n = 400,size = 1,prob = 0.95) #compute and store sampled rate temp[j] <- mean(y) } #print summary (includes min) summary(temp) Results: > summary(temp) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.8900 0.9425 0.9500 0.9500 0.9575 0.9925 When I look at this, I see that the minimum value for the rate is 89%. This means that in half a million tries, the worst case was 89% working. 89% of 400 is 356. This gives about 100%, not 95%. It is likely that actual 100% is lower than this. #find the 95% case quantile(temp,probs = 0.05) yields: > quantile(temp,probs = 0.05) 5% 0.9325 93.25% of 400 is 373. This is not an edge of the data, but interior, so it is likely a good estimate. Your answer is going to be close to 373.
An electronics company produces devices that work properly 95% of the time
"At least" from "at least 95%" means "min". Code: #reproducible set.seed(250048) #how many times to check N_repeats <- 500000 #stage for loop temp <- numeric() #loop for (j in 1:N_repeats){ #
An electronics company produces devices that work properly 95% of the time "At least" from "at least 95%" means "min". Code: #reproducible set.seed(250048) #how many times to check N_repeats <- 500000 #stage for loop temp <- numeric() #loop for (j in 1:N_repeats){ #draw 400 samples at 95% rate y <- rbinom(n = 400,size = 1,prob = 0.95) #compute and store sampled rate temp[j] <- mean(y) } #print summary (includes min) summary(temp) Results: > summary(temp) Min. 1st Qu. Median Mean 3rd Qu. Max. 0.8900 0.9425 0.9500 0.9500 0.9575 0.9925 When I look at this, I see that the minimum value for the rate is 89%. This means that in half a million tries, the worst case was 89% working. 89% of 400 is 356. This gives about 100%, not 95%. It is likely that actual 100% is lower than this. #find the 95% case quantile(temp,probs = 0.05) yields: > quantile(temp,probs = 0.05) 5% 0.9325 93.25% of 400 is 373. This is not an edge of the data, but interior, so it is likely a good estimate. Your answer is going to be close to 373.
An electronics company produces devices that work properly 95% of the time "At least" from "at least 95%" means "min". Code: #reproducible set.seed(250048) #how many times to check N_repeats <- 500000 #stage for loop temp <- numeric() #loop for (j in 1:N_repeats){ #
34,895
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
I believe the intended application is that frequencies $f_i$ of $n$ items $i=1,2,\ldots, n$ have been observed; and you are wondering whether these frequencies are consistent with an underlying uniform distribution in which all observations are (a) independent and (b) equally probable. The "vector" in question is the normalized tuple of relative frequencies, $$p = (p_1, p_2, \ldots, p_n) = \left(\frac{f_1}{f}, \frac{f_2}{f}, \ldots, \frac{f_n}{f}\right)$$ with $f = f_1 + f_2 + \cdots + f_n$ being the total number of observations. The $L_2$ norm of $p$ is, by definition, the square root of $$||p||_2^2 = p_1^2 + p_2^2 + \cdots + p_n^2.$$ Using this as a measure of uniformity of the $p_i$ has an intuitive mathematical justification--it does get larger as the $p_i$ vary more--but it has no immediate statistical justification. Let's see whether we can put it on a solid footing. To this end, notice that the average value of the $p_i$ is $\bar p = 1/n$ (because they sum to unity and there are $n$ of them). Uniformity doesn't really refer to the actual values of the $p_i$: it refers to how they vary around their expected value. Let us therefore compute that variation and try to relate it to the $L_2$ norm. A well-known algebraic result (easily proven) is $$||p||_2^2 = \sum_i \left(p_i - \frac{1}{n}\right)^2 + \frac{1}{n}.$$ Now the number of observations $f$ must play a critical role, for without that information we have no good idea how variable the observed frequencies ought to be. It is natural to introduce a factor of $f^2$ in order to clear the denominators in the $p_i = f_i/f$: $$f^2||p||_2^2 = \sum_i \left(f p_i - \frac{f}{n}\right)^2 + \frac{f^2}{n} = \sum_i \left(f_i - \frac{f}{n}\right)^2 + \frac{f^2}{n}.\tag{1}$$ This is immediately recognizable as almost equal to the chi-squared statistic for a test of uniformity: the expected frequencies ($E_i$) are $f/n$ while the observed frequencies ($O_i$) are $f_i$. 
This statistic, by definition, is the sum of standardized differences, $$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} = \sum_i \frac{(f_i - f/n)^2}{f/n}.$$ Let us therefore divide $(1)$ by $f/n$ to introduce $\chi^2$: $$n f ||p||_2^2 = \sum_i \frac{(f_i - f/n)^2}{f/n} + f = \chi^2 + f.$$ Finally we may isolate a statistically meaningful expression: $$f\left(n||p||_2^2 - 1\right) = \chi^2.$$ This shows that up to an affine transformation determined by the number of categories $n$ and total number of observations $f$, the square of the $L_2$ norm of the relative frequency vector is a standard statistic used to measure uniformity of a frequency distribution. That's why the $L_2$ norm may be of value. But why not just use the $\chi^2$ statistic in the first place?
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
I believe the intended application is that frequencies $f_i$ of $n$ items $i=1,2,\ldots, n$ have been observed; and you are wondering whether these frequencies are consistent with an underlying unif
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? I believe the intended application is that frequencies $f_i$ of $n$ items $i=1,2,\ldots, n$ have been observed; and you are wondering whether these frequencies are consistent with an underlying uniform distribution in which all observations are (a) independent and (b) equally probable. The "vector" in question is the normalized tuple of relative frequencies, $$p = (p_1, p_2, \ldots, p_n) = \left(\frac{f_1}{f}, \frac{f_2}{f}, \ldots, \frac{f_n}{f}\right)$$ with $f = f_1 + f_2 + \cdots + f_n$ being the total number of observations. The $L_2$ norm of $p$ is, by definition, the square root of $$||p||_2^2 = p_1^2 + p_2^2 + \cdots + p_n^2.$$ Using this as a measure of uniformity of the $p_i$ has an intuitive mathematical justification--it does get larger as the $p_i$ vary more--but it has no immediate statistical justification. Let's see whether we can put it on a solid footing. To this end, notice that the average value of the $p_i$ is $\bar p = 1/n$ (because they sum to unity and there are $n$ of them). Uniformity doesn't really refer to the actual values of the $p_i$: it refers to how they vary around their expected value. Let us therefore compute that variation and try to relate it to the $L_2$ norm. A well-known algebraic result (easily proven) is $$||p||_2^2 = \sum_i \left(p_i - \frac{1}{n}\right)^2 + \frac{1}{n}.$$ Now the number of observations $f$ must play a critical role, for without that information we have no good idea how variable the observed frequencies ought to be. 
It is natural to introduce a factor of $f^2$ in order to clear the denominators in the $p_i = f_i/f$: $$f^2||p||_2^2 = \sum_i \left(f p_i - \frac{f}{n}\right)^2 + \frac{f^2}{n} = \sum_i \left(f_i - \frac{f}{n}\right)^2 + \frac{f^2}{n}.\tag{1}$$ This is immediately recognizable as almost equal to the chi-squared statistic for a test of uniformity: the expected frequencies ($E_i$) are $f/n$ while the observed frequencies ($O_i$) are $f_i$. This statistic, by definition, is the sum of standardized differences, $$\chi^2 = \sum_i \frac{(O_i - E_i)^2}{E_i} = \sum_i \frac{(f_i - f/n)^2}{f/n}.$$ Let us therefore divide $(1)$ by $f/n$ to introduce $\chi^2$: $$n f ||p||_2^2 = \sum_i \frac{(f_i - f/n)^2}{f/n} + f = \chi^2 + f.$$ Finally we may isolate a statistically meaningful expression: $$f\left(n||p||_2^2 - 1\right) = \chi^2.$$ This shows that up to an affine transformation determined by the number of categories $n$ and total number of observations $f$, the square of the $L_2$ norm of the relative frequency vector is a standard statistic used to measure uniformity of a frequency distribution. That's why the $L_2$ norm may be of value. But why not just use the $\chi^2$ statistic in the first place?
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? I believe the intended application is that frequencies $f_i$ of $n$ items $i=1,2,\ldots, n$ have been observed; and you are wondering whether these frequencies are consistent with an underlying unif
34,896
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
Why this method works ? Lets say $x+y = 1$; $(x+y)^2 = 1$; $\texttt{norm}^2 + 2xy = 1$; $xy$ is maximum when $x=y$ (uniform) and thus norm must be lower. Thus, lower norm means more uniformity. You can extend the same to vectors of length more than 2. However, the idea of scaling using $d$ is simply a heuristic. You might want to omit that. Over to your next question. You can use multiple methods to solve this problem. Few of the measures are: KL divergence from a uniform distribution. Dot product of the vector converted to a unit vector with a uniformly distributed unit vector of same length. Variance
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
Why this method works ? Lets say $x+y = 1$; $(x+y)^2 = 1$; $\texttt{norm}^2 + 2xy = 1$; $xy$ is maximum when $x=y$ (uniform) and thus norm must be lower. Thus, lower norm means more uniformity. You ca
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? Why this method works ? Lets say $x+y = 1$; $(x+y)^2 = 1$; $\texttt{norm}^2 + 2xy = 1$; $xy$ is maximum when $x=y$ (uniform) and thus norm must be lower. Thus, lower norm means more uniformity. You can extend the same to vectors of length more than 2. However, the idea of scaling using $d$ is simply a heuristic. You might want to omit that. Over to your next question. You can use multiple methods to solve this problem. Few of the measures are: KL divergence from a uniform distribution. Dot product of the vector converted to a unit vector with a uniformly distributed unit vector of same length. Variance
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? Why this method works ? Lets say $x+y = 1$; $(x+y)^2 = 1$; $\texttt{norm}^2 + 2xy = 1$; $xy$ is maximum when $x=y$ (uniform) and thus norm must be lower. Thus, lower norm means more uniformity. You ca
34,897
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
@whuber's answer is the most generalized and elaborate as usual. At the same time, @Aveek's simple mathematical observation also makes things pretty intuitive. I'd like to extend his observation a bit. As noted, $$norm^2 + 2xy = 1$$ Therefore, $$norm^2 = 1 - 2xy$$ $$norm^2 = 1 - 2x(1-x) $$ $$norm^2 = 1 - 2x + 2x^2 $$ Hence we can denote the $norm^2$ function as: $$f = 2x^2 - 2x + 1$$ We find the minimum of $f$ by taking its derivative and equating it to $0$, i.e. $$4x - 2 = 0$$which gives us: $$x = 0.5$$ which is $1/n$ at $n = 2$, i.e. uniform distribution. Hence we note that the minimum of the $(L_2)^2$ function (in $R^2$ in this case) is $0.5$.
Why does the L2 norm heuristic work in measuring uniformity of probability distributions?
@whuber's answer is the most generalized and elaborate as usual. At the same time, @Aveek's simple mathematical observation also makes things pretty intuitive. I'd like to extend his observation a bit
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? @whuber's answer is the most generalized and elaborate as usual. At the same time, @Aveek's simple mathematical observation also makes things pretty intuitive. I'd like to extend his observation a bit. As noted, $$norm^2 + 2xy = 1$$ Therefore, $$norm^2 = 1 - 2xy$$ $$norm^2 = 1 - 2x(1-x) $$ $$norm^2 = 1 - 2x + 2x^2 $$ Hence we can denote the $norm^2$ function as: $$f = 2x^2 - 2x + 1$$ We find the minimum of $f$ by taking its derivative and equating it to $0$, i.e. $$4x - 2 = 0$$which gives us: $$x = 0.5$$ which is $1/n$ at $n = 2$, i.e. uniform distribution. Hence we note that the minimum of the $(L_2)^2$ function (in $R^2$ in this case) is $0.5$.
Why does the L2 norm heuristic work in measuring uniformity of probability distributions? @whuber's answer is the most generalized and elaborate as usual. At the same time, @Aveek's simple mathematical observation also makes things pretty intuitive. I'd like to extend his observation a bit
34,898
Limitation of Gaussian process regression
GPs assume a Gaussian uncertainty on the $y$-values. However, this may not be the type of uncertainty that you have. For example, let us assume the output values are strictly positive, or bounded between two values, then the Gaussian prior would be inappropriate (or used only as an approximation). SVMs are somewhat similar as they are kernel-based regression models for which you can choose your loss function. However, they don't offer a probabilistic interpretation (which is a big no no if you are a die hard Bayesian for example). Kernel methods versus random forests or neural nets have other trade-offs. A GP kernel allows us to specify a prior on our function space which can be extremely useful especially when we have little data. However, a poor choice of kernel which specifies misconceptions about the function space can make convergence slow. Specifying appropriate kernels beyond the most basic requires some mathematical understanding. On the other hand, random forests and neural nets are completely frequentist (in general) and so usually require more data in order to get decent predictive performance.
Limitation of Gaussian process regression
GPs assume a Gaussian uncertainty on the $y$-values. However, this may not be the type of uncertainty that you have. For example, let us assume the output values are strictly positive, or bounded betw
Limitation of Gaussian process regression GPs assume a Gaussian uncertainty on the $y$-values. However, this may not be the type of uncertainty that you have. For example, let us assume the output values are strictly positive, or bounded between two values, then the Gaussian prior would be inappropriate (or used only as an approximation). SVMs are somewhat similar as they are kernel-based regression models for which you can choose your loss function. However, they don't offer a probabilistic interpretation (which is a big no no if you are a die hard Bayesian for example). Kernel methods versus random forests or neural nets have other trade-offs. A GP kernel allows us to specify a prior on our function space which can be extremely useful especially when we have little data. However, a poor choice of kernel which specifies misconceptions about the function space can make convergence slow. Specifying appropriate kernels beyond the most basic requires some mathematical understanding. On the other hand, random forests and neural nets are completely frequentist (in general) and so usually require more data in order to get decent predictive performance.
Limitation of Gaussian process regression GPs assume a Gaussian uncertainty on the $y$-values. However, this may not be the type of uncertainty that you have. For example, let us assume the output values are strictly positive, or bounded betw
34,899
Ridge regression: regularizing towards a value
We have the cost function $$\| \mathrm y - \mathrm X \beta \|_2^2 + \gamma \| \beta - \beta_0 \|_2^2$$ where $\gamma \geq 0$. The minimum is attained at $$\hat{\beta} := ( \mathrm X^{\top} \mathrm X + \gamma \mathrm I )^{-1} ( \mathrm X^{\top} \mathrm y + \gamma \beta_0 )$$ Note that while $\mathrm X^{\top} \mathrm X$ may not be invertible, $\mathrm X^{\top} \mathrm X + \gamma \mathrm I$ is always invertible if $\gamma > 0$. If $\gamma \gg 1$, then $$\begin{array}{rl} \hat{\beta} &= ( \mathrm X^{\top} \mathrm X + \gamma \mathrm I )^{-1} ( \mathrm X^{\top} \mathrm y + \gamma \beta_0 )\\ &= ( \gamma^{-1} \mathrm X^{\top} \mathrm X + \mathrm I )^{-1} ( \gamma^{-1} \mathrm X^{\top} \mathrm y + \beta_0 )\\ &\approx ( \mathrm I - \gamma^{-1} \mathrm X^{\top} \mathrm X ) ( \beta_0 + \gamma^{-1} \mathrm X^{\top} \mathrm y )\\ &\approx ( \mathrm I - \gamma^{-1} \mathrm X^{\top} \mathrm X ) \beta_0 + \gamma^{-1} \mathrm X^{\top} \mathrm y\\ &= \beta_0 + \gamma^{-1} \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)\end{array}$$ For large $\gamma$, we have the approximate estimate $$\boxed{\tilde{\beta} := \beta_0 + \gamma^{-1} \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)}$$ If $\gamma \to \infty$, then $\tilde{\beta} \to \beta_0$, as expected. Left-multiplying both sides by $\mathrm X$, we obtain $$\mathrm X \tilde{\beta} = \mathrm X \beta_0 + \gamma^{-1} \mathrm X \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)$$ and, thus, $$\mathrm y - \mathrm X \tilde{\beta} = \left( \mathrm I - \gamma^{-1} \mathrm X \mathrm X^{\top} \right) \left( \mathrm y - \mathrm X \beta_0 \right)$$ which gives us $\mathrm y - \mathrm X \tilde{\beta}$, an approximation of the error vector for large but finite $\gamma$, in terms of $\mathrm y - \mathrm X \beta_0$, the error vector for infinite $\gamma$. None of this seems particularly insightful or useful, but it may be better than nothing.
Ridge regression: regularizing towards a value
We have the cost function $$\| \mathrm y - \mathrm X \beta \|_2^2 + \gamma \| \beta - \beta_0 \|_2^2$$ where $\gamma \geq 0$. The minimum is attained at $$\hat{\beta} := ( \mathrm X^{\top} \mathrm X +
Ridge regression: regularizing towards a value We have the cost function $$\| \mathrm y - \mathrm X \beta \|_2^2 + \gamma \| \beta - \beta_0 \|_2^2$$ where $\gamma \geq 0$. The minimum is attained at $$\hat{\beta} := ( \mathrm X^{\top} \mathrm X + \gamma \mathrm I )^{-1} ( \mathrm X^{\top} \mathrm y + \gamma \beta_0 )$$ Note that while $\mathrm X^{\top} \mathrm X$ may not be invertible, $\mathrm X^{\top} \mathrm X + \gamma \mathrm I$ is always invertible if $\gamma > 0$. If $\gamma \gg 1$, then $$\begin{array}{rl} \hat{\beta} &= ( \mathrm X^{\top} \mathrm X + \gamma \mathrm I )^{-1} ( \mathrm X^{\top} \mathrm y + \gamma \beta_0 )\\ &= ( \gamma^{-1} \mathrm X^{\top} \mathrm X + \mathrm I )^{-1} ( \gamma^{-1} \mathrm X^{\top} \mathrm y + \beta_0 )\\ &\approx ( \mathrm I - \gamma^{-1} \mathrm X^{\top} \mathrm X ) ( \beta_0 + \gamma^{-1} \mathrm X^{\top} \mathrm y )\\ &\approx ( \mathrm I - \gamma^{-1} \mathrm X^{\top} \mathrm X ) \beta_0 + \gamma^{-1} \mathrm X^{\top} \mathrm y\\ &= \beta_0 + \gamma^{-1} \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)\end{array}$$ For large $\gamma$, we have the approximate estimate $$\boxed{\tilde{\beta} := \beta_0 + \gamma^{-1} \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)}$$ If $\gamma \to \infty$, then $\tilde{\beta} \to \beta_0$, as expected. Left-multiplying both sides by $\mathrm X$, we obtain $$\mathrm X \tilde{\beta} = \mathrm X \beta_0 + \gamma^{-1} \mathrm X \mathrm X^{\top} \left( \mathrm y - \mathrm X \beta_0 \right)$$ and, thus, $$\mathrm y - \mathrm X \tilde{\beta} = \left( \mathrm I - \gamma^{-1} \mathrm X \mathrm X^{\top} \right) \left( \mathrm y - \mathrm X \beta_0 \right)$$ which gives us $\mathrm y - \mathrm X \tilde{\beta}$, an approximation of the error vector for large but finite $\gamma$, in terms of $\mathrm y - \mathrm X \beta_0$, the error vector for infinite $\gamma$. None of this seems particularly insightful or useful, but it may be better than nothing.
Ridge regression: regularizing towards a value We have the cost function $$\| \mathrm y - \mathrm X \beta \|_2^2 + \gamma \| \beta - \beta_0 \|_2^2$$ where $\gamma \geq 0$. The minimum is attained at $$\hat{\beta} := ( \mathrm X^{\top} \mathrm X +
34,900
Ridge regression: regularizing towards a value
Conceptually it may help to think in terms of Bayesian updating: The penalty term is equivalent to a prior estimate $\beta_0$ with precision $\lambda$ (i.e. a multivariate Gaussian prior $\beta\sim\mathrm{N}_{\beta_0,\,I/\lambda}).$ In this sense a "very large" $\lambda$ does not correspond to any particular numerical value. Rather it would be a value which "dominates" the error, so numerically it must be large relative to some norm $\|X\|$ of the design matrix. So for your example we cannot say whether $\lambda=100000$ is "very large" or not, without more information. That said, why might a "very large" value be used? A common case I have seen in practice is where the actual problem is equality constrained least squares, but this is approximated using Tikhonov Regularization with a "large $\lambda$". (This is slightly more general than your case, and would correspond to a "wide" matrix $\Lambda$, such that $\Lambda(\beta-\beta_0)=0$ could be solved exactly.)
Ridge regression: regularizing towards a value
Conceptually it may help to think in terms of Bayesian updating: The penalty term is equivalent to a prior estimate $\beta_0$ with precision $\lambda$ (i.e. a multivariate Gaussian prior $\beta\sim\ma
Ridge regression: regularizing towards a value Conceptually it may help to think in terms of Bayesian updating: The penalty term is equivalent to a prior estimate $\beta_0$ with precision $\lambda$ (i.e. a multivariate Gaussian prior $\beta\sim\mathrm{N}_{\beta_0,\,I/\lambda}).$ In this sense a "very large" $\lambda$ does not correspond to any particular numerical value. Rather it would be a value which "dominates" the error, so numerically it must be large relative to some norm $\|X\|$ of the design matrix. So for your example we cannot say whether $\lambda=100000$ is "very large" or not, without more information. That said, why might a "very large" value be used? A common case I have seen in practice is where the actual problem is equality constrained least squares, but this is approximated using Tikhonov Regularization with a "large $\lambda$". (This is slightly more general than your case, and would correspond to a "wide" matrix $\Lambda$, such that $\Lambda(\beta-\beta_0)=0$ could be solved exactly.)
Ridge regression: regularizing towards a value Conceptually it may help to think in terms of Bayesian updating: The penalty term is equivalent to a prior estimate $\beta_0$ with precision $\lambda$ (i.e. a multivariate Gaussian prior $\beta\sim\ma