Simple real world examples for teaching Bayesian statistics?
Bayesian search theory is an interesting real-world application of Bayesian statistics which has been applied many times to search for lost vessels at sea. To begin, a map is divided into squares. Each square is assigned a prior probability of containing the lost vessel, based on last known position, heading, time missing, currents, etc. Additionally, each square is assigned a conditional probability of finding the vessel if it's actually in that square, based on things like water depth. These distributions are combined to prioritize map squares that have the highest likelihood of producing a positive result - it's not necessarily the most likely place for the ship to be, but the most likely place of actually finding the ship.
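The combination step, and the Bayes update after a square is searched without success, fit in a few lines. This is a toy sketch in Python; the four-square map, the priors, and the detection probabilities are invented purely for illustration:

```python
# Hypothetical 4-square map.
prior = [0.4, 0.3, 0.2, 0.1]    # p_i: prior P(vessel is in square i)
detect = [0.3, 0.8, 0.9, 0.5]   # q_i: P(we find it | it is in square i)

# Search the square with the highest probability of a positive result,
# p_i * q_i -- not simply the square with the highest p_i.
scores = [p * q for p, q in zip(prior, detect)]
best = scores.index(max(scores))  # square 1: 0.3 * 0.8 beats 0.4 * 0.3

# If searching `best` turns up nothing, Bayes' rule redistributes the
# probability mass: the searched square is discounted by its miss rate,
# and everything is renormalized by P(no detection).
evidence = 1 - prior[best] * detect[best]
posterior = [p * (1 - detect[i]) / evidence if i == best else p / evidence
             for i, p in enumerate(prior)]
```

Iterating this search-and-update loop is the core of Bayesian search theory: the posterior after each unsuccessful search becomes the prior for choosing the next square.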
Simple real world examples for teaching Bayesian statistics?
I think estimating production or population size from serial numbers is an interesting, if traditional, explanatory example. Here you are trying to estimate the maximum of a discrete uniform distribution. Depending on your choice of prior, the maximum likelihood and Bayesian estimates will differ in a pretty transparent way. Perhaps the most famous example is estimating the production rate of German tanks during the Second World War from tank serial numbers and manufacturer codes, done in the frequentist setting by Ruggles and Brodie (1947). An alternative analysis from a Bayesian point of view with informative priors has been done by Downey (2013), and with an improper uninformative prior by Höhle and Held (2006). The work by Höhle and Held (2006) also contains many more references to previous treatments in the literature, and there is more discussion of this problem on this site.
Sources:
Chapter 3 of Downey, Allen. Think Bayes: Bayesian Statistics in Python. O'Reilly Media, Inc., 2013.
Wikipedia.
Ruggles, R.; Brodie, H. (1947). "An Empirical Approach to Economic Intelligence in World War II". Journal of the American Statistical Association. 42 (237): 72.
Höhle, Michael, and Leonhard Held. Bayesian estimation of the size of a population. Discussion Paper No. 499, Sonderforschungsbereich 386, Ludwig-Maximilians-Universität München, 2006.
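For what it's worth, a bare-bones version of the Bayesian calculation fits in a few lines of Python. The serial numbers and the cap on the flat prior are made up for illustration; the likelihood $1/\binom{N}{k}$ is the probability of the observed sample of $k$ distinct serials under sampling without replacement from $\{1, \ldots, N\}$:

```python
from math import comb

serials = [19, 40, 42, 60]   # hypothetical captured serial numbers
k, m = len(serials), max(serials)

# Flat prior on the total count N up to an assumed cap; the likelihood of
# any particular sample of k distinct serials is 1 / C(N, k) for N >= m.
N_cap = 1000
weights = {N: 1 / comb(N, k) for N in range(m, N_cap + 1)}
total = sum(weights.values())
posterior = {N: w / total for N, w in weights.items()}

post_mean = sum(N * p for N, p in posterior.items())
mle = m   # the maximum-likelihood estimate is just the sample maximum
```

The maximum-likelihood estimate sits exactly at the sample maximum, while the posterior mean lands noticeably above it, which makes the frequentist/Bayesian contrast transparent to students.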
Simple real world examples for teaching Bayesian statistics?
There is a nice story in Cressie & Wikle, Statistics for Spatio-Temporal Data, Wiley, about the (Bayesian) search for the USS Scorpion, a submarine that was lost in 1968. We tell this story to our students and have them perform a (simplified) search using a simulator. Similar examples could be constructed around the story of the lost flight MH370; you might want to look at Davey et al., Bayesian Methods in the Search for MH370, Springer-Verlag.
Simple real world examples for teaching Bayesian statistics?
Here is an example of estimating a mean, $\theta$, from Normal continuous data. Before delving directly into an example, though, I'd like to review some of the math for Normal-Normal Bayesian data models.

Consider a random sample of $n$ continuous values denoted by $y_1, \ldots, y_n$. Here the vector $y = (y_1, \ldots, y_n)^T$ represents the data gathered. The probability model for Normal data with known variance and independent and identically distributed (i.i.d.) samples is $$ y_1, \ldots, y_n \mid \theta \sim N(\theta, \sigma^2) $$ or, as more typically written by Bayesians, $$ y_1, \ldots, y_n \mid \theta \sim N(\theta, \tau) $$ where $\tau = 1 / \sigma^2$ is known as the precision. With this notation, the density for $y_i$ is then $$ f(y_i \mid \theta, \tau) = \sqrt{\frac{\tau}{2 \pi}} \exp\left( -\tau (y_i - \theta)^2 / 2 \right). $$ Classical statistics (i.e., maximum likelihood) gives us the estimate $\hat{\theta} = \bar{y}$.

From a Bayesian perspective, we augment the likelihood with prior information. A natural choice of prior for this Normal data model is another Normal distribution for $\theta$, since the Normal distribution is conjugate to itself: $$ \theta \sim N(a, 1/b), $$ where $a$ is the prior mean and $b$ the prior precision (so the prior variance is $1/b$). The posterior distribution we obtain from this Normal-Normal data model (after a lot of algebra) is another Normal distribution, written here with the second argument as a variance: $$ \theta \mid y \sim N\left(\frac{b}{b + n\tau} a + \frac{n \tau}{b + n \tau} \bar{y},\ \frac{1}{b + n\tau}\right). $$ The posterior precision is $b + n\tau$, and the posterior mean is a weighted average of $a$ and $\bar{y}$, namely $\frac{b}{b + n\tau} a + \frac{n \tau}{b + n \tau} \bar{y}$.

The usefulness of this Bayesian methodology comes from the fact that you obtain a whole distribution for $\theta \mid y$ rather than just a point estimate, since $\theta$ is viewed as a random variable rather than a fixed (unknown) value. In addition, your estimate of $\theta$ in this model is a weighted average of the empirical mean and the prior information.
That said, you can now use any Normal-data textbook example to illustrate this. I'll use the data set airquality within R. Consider the problem of estimating average wind speed (MPH).

## New York Air Quality Measurements
help("airquality")

## Estimating average wind speed
wind <- airquality$Wind
hist(wind, col = "gray", border = "white", xlab = "Wind Speed (MPH)")

n    <- length(wind)
ybar <- mean(wind)
ybar                   # [1] 9.957516  -- the "frequentist" estimate
tau  <- 1 / var(wind)  # plug-in value for the "known" precision 1/sigma^2

## Based on some research, you felt average wind speeds were closer to 12 mph
## but probably no greater than 15; a potential prior is then N(a = 12, 1/b)
## with prior precision b = 2
a <- 12
b <- 2

## The posterior is N(b/(b + n*tau) * a + n*tau/(b + n*tau) * ybar, 1/(b + n*tau))
postmean <- b / (b + n * tau) * a + n * tau / (b + n * tau) * ybar
postsd   <- sqrt(1 / (b + n * tau))

set.seed(123)
posterior_sample <- rnorm(n = 10000, mean = postmean, sd = postsd)
hist(posterior_sample, col = "gray", border = "white", xlab = "Wind Speed (MPH)")
abline(v = median(posterior_sample))
abline(v = ybar, lty = 3)

median(posterior_sample)                             # posterior point estimate
quantile(posterior_sample, probs = c(0.025, 0.975))  # 95% credible interval

In this analysis, the researcher (you) can say that, given the data plus the prior information, the estimate of average wind speed (taking the posterior median, the 50th percentile) is pulled upward from the sample average toward the prior mean of 12 mph. You also obtain a full distribution, from which you can extract a 95% credible interval using the 2.5 and 97.5 percentiles. Below I include two references; I highly recommend reading Casella's short paper. It's specifically aimed at empirical Bayes methods, but it explains the general Bayesian methodology for Normal models.

References:
Casella, G. (1985). An Introduction to Empirical Bayes Data Analysis. The American Statistician, 39(2), 83-87.
Gelman, A. (2004). Bayesian Data Analysis (2nd ed., Texts in Statistical Science). Boca Raton, FL: Chapman & Hall/CRC.
Simple real world examples for teaching Bayesian statistics?
An area of research where I believe Bayesian methods are absolutely necessary is that of optimal design. In the logistic regression setting, a researcher is trying to estimate a coefficient and is actively collecting data, sometimes one data point at a time. The researcher has the ability to choose the input values of $x$. The goal is to maximize the information learned for a given sample size (or, alternatively, minimize the sample size required to reach some level of certainty). One can show that for a given $\beta$ there is a set of $x$ values that optimize this problem.

The catch-22 here is that to choose the optimal $x$'s, you need to know $\beta$. Clearly, you don't know $\beta$, or you wouldn't need to collect data to learn about $\beta$. You could just use the MLEs to select $x$, but:

- This doesn't give you a starting point; for $n = 0$, $\hat \beta$ is undefined.
- Even after taking several samples, the Hauck-Donner effect means that $\hat \beta$ has a positive probability of being undefined (and this is very common even for samples of, say, 10 in this problem).
- Even once the MLE is finite, it's likely to be incredibly unstable, thus wasting many samples (i.e., if $\beta = 1$ but $\hat \beta = 5$, you will pick values of $x$ that would have been optimal if $\beta = 5$, but it's not, resulting in very suboptimal $x$'s). This approach also doesn't take into account the uncertainty in $\hat \beta$.

The (admittedly older) Frequentist literature deals with a lot of these issues in a very ad hoc manner and offers sub-optimal solutions: "pick regions of $x$ that you think should lead to both 0's and 1's, take samples until the MLE is defined, and then use the MLE to choose $x$". The Bayesian approach is to start with a prior, find the $x$ that is most informative about $\beta$ given the current knowledge, and repeat until convergence.
Given that this is a problem that starts with no data and requires information about $\beta$ to choose $x$, I think it's undeniable that the Bayesian method is necessary; even the Frequentist methods instruct one to use prior information. The Bayesian method just does so in a much more efficient and logically justified manner. Also, it's totally reasonable to analyze the data that comes in with a Frequentist method (i.e., ignoring the prior), but it's very hard to argue against using a Bayesian method to choose the next $x$.
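The design loop described above can be sketched concretely. Nothing here comes from a specific package; the one-parameter logistic model, the candidate grid, and the posterior draws are stand-ins chosen for illustration. For a single Bernoulli observation at design point $x$, the Fisher information about $\beta$ is $x^2 p(1-p)$ with $p = 1/(1+e^{-\beta x})$, and the Bayesian step averages this over the current posterior draws instead of plugging in a single $\hat\beta$:

```python
import math
import random

def fisher_info(x, beta):
    # Information about beta from one Bernoulli trial at design point x
    # under the logistic model p = 1 / (1 + exp(-beta * x)).
    p = 1 / (1 + math.exp(-beta * x))
    return x * x * p * (1 - p)

random.seed(0)
# Stand-in for the current posterior over beta: a bag of draws. At n = 0
# these come straight from the prior, so the procedure is well defined
# before any data exist -- unlike the MLE-based rule.
beta_draws = [random.gauss(1.0, 0.5) for _ in range(500)]

def expected_info(x):
    # Average information over the posterior draws (accounts for the
    # uncertainty in beta, not just a point estimate).
    return sum(fisher_info(x, b) for b in beta_draws) / len(beta_draws)

candidates = [0.25 * i for i in range(1, 41)]   # grid of feasible x values
x_next = max(candidates, key=expected_info)     # most informative next point
```

After observing the response at `x_next`, one would update the posterior (and hence `beta_draws`) and repeat.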
Simple real world examples for teaching Bayesian statistics?
I was thinking of this question lately, and I think I have an example where Bayesian methods make sense, through the use of a prior probability: the likelihood ratio of a clinical test. The example could be this one: the validity of the urine dipslide under daily practice conditions (Family Practice 2003;20:410-2). The idea is to see what a positive result of the urine dipslide implies for the diagnosis of urinary infection. The likelihood ratio of a positive result is $$LR(+) = \frac{P(test+ \mid H+)}{P(test+ \mid H-)} = \frac{\text{sensitivity}}{1-\text{specificity}}, $$ with $H+$ the hypothesis of a urinary infection and $H-$ no urinary infection. What Bayes tells us is $$OR(+ \mid test+) = LR(+) \times OR(+), $$ where $OR$ denotes odds: $OR(+ \mid test+)$ is the odds of having a urinary infection knowing that the test is positive, and $OR(+)$ the prior odds. The article gives $LR(+) = 12.2$ and $LR(-) = 0.29$. Here the prior knowledge is the probability of a urinary infection based on the clinical examination of the potentially sick person before the test is done. If the physician estimates this probability at $p_{+} = 2/3$ based on observation, then a positive test leads to a posterior probability of $p_{+ \mid test+} = 0.96$, and a negative test to $p_{+ \mid test-} = 0.37$. Here the test is good at confirming the infection, but not that good at ruling it out.
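The arithmetic is easy to script. Here is a small Python helper (the function name is mine) that converts the pre-test probability to odds, applies the likelihood ratio, and converts back:

```python
def post_test_probability(prior_p, lr):
    # probability -> odds, multiply by the likelihood ratio (Bayes' rule
    # in odds form), then odds -> probability.
    prior_odds = prior_p / (1 - prior_p)
    post_odds = lr * prior_odds
    return post_odds / (1 + post_odds)

p_prior = 2 / 3                               # clinician's pre-test estimate
p_pos = post_test_probability(p_prior, 12.2)  # positive dipslide, LR(+)
p_neg = post_test_probability(p_prior, 0.29)  # negative dipslide, LR(-)
print(round(p_pos, 2), round(p_neg, 2))       # 0.96 0.37
```

This reproduces the article's figures: a positive test raises the probability from 0.67 to 0.96, while a negative test only lowers it to 0.37.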
Why cannot I obtain a valid SVD of X via eigenvalue decomposition of XX' and X'X?
Analysis of the Problem

The SVD of a matrix is never unique. Let matrix $A$ have dimensions $n\times k$ and let its SVD be $$A = U D V^\prime$$ for an $n\times p$ matrix $U$ with orthonormal columns, a diagonal $p\times p$ matrix $D$ with non-negative entries, and a $k\times p$ matrix $V$ with orthonormal columns. Now choose, arbitrarily, any diagonal $p\times p$ matrix $S$ having $\pm 1$s on the diagonal, so that $S^2 = I_p$ is the $p\times p$ identity. Then $$A = U D V^\prime = U I D I V^\prime = U (S^2) D (S^2) V^\prime = (US) (SDS) (VS)^\prime$$ is also an SVD of $A$, because $$(US)^\prime(US) = S^\prime U^\prime U S = S^\prime I_p S = S^\prime S = S^2 = I_p$$ demonstrates that $US$ has orthonormal columns, and a similar calculation demonstrates that $VS$ has orthonormal columns. Moreover, since $S$ and $D$ are diagonal, they commute, whence $$S D S = DS^2 = D$$ shows $D$ still has non-negative entries.

The method implemented in the code to find an SVD finds a $U$ that diagonalizes $$AA^\prime = (UDV^\prime)(UDV^\prime)^\prime = UDV^\prime V D^\prime U^\prime = UD^2 U^\prime$$ and, similarly, a $V$ that diagonalizes $$A^\prime A = VD^2V^\prime.$$ It proceeds to compute $D$ in terms of the eigenvalues found in $D^2$. The problem is that this does not assure a consistent matching of the columns of $U$ with the columns of $V$.

A Solution

Instead, after finding such a $U$ and such a $V$, use them to compute $$U^\prime A V = U^\prime (U D V^\prime) V = (U^\prime U) D (V^\prime V) = D$$ directly and efficiently. The diagonal values of this $D$ are not necessarily positive. (That is because there is nothing about the process of diagonalizing either $A^\prime A$ or $AA^\prime$ that will guarantee that, since those two processes were carried out separately.) Make them positive by choosing the entries along the diagonal of $S$ to equal the signs of the entries of $D$, so that $SD$ has all positive values.
Compensate for this by right-multiplying $U$ by $S$: $$A = U D V^\prime = (US) (SD) V^\prime.$$ That is an SVD.

Example

Let $n=p=k=1$ with $A=(-2)$. An SVD is $$(-2) = (1)(2)(-1)$$ with $U=(1)$, $D=(2)$, and $V=(-1)$. If you diagonalize $AA^\prime = (4)$ you would naturally choose $U=(1)$ and $D=(\sqrt{4})=(2)$. Likewise, if you diagonalize $A^\prime A=(4)$ you would choose $V=(1)$. Unfortunately, $$UDV^\prime = (1)(2)(1) = (2) \ne A.$$ Instead, compute $$D=U^\prime A V = (1)^\prime (-2) (1) = (-2).$$ Because this is negative, set $S=(-1)$. This adjusts $U$ to $US = (1)(-1)=(-1)$ and $D$ to $SD = (-1)(-2)=(2)$. You have obtained $$A = (-1)(2)(1),$$ which is one of the two possible SVDs (but not the same as the original!).

Code

Here is the modified code. Its output confirms that the method recreates m correctly, and that $U$ and $V$ really are still orthonormal; but the result is not the same SVD as returned by svd. (Both are equally valid.)

m <- matrix(c(1,0,1,2,1,1,1,0,0), byrow = TRUE, nrow = 3)
U <- eigen(tcrossprod(m))$vectors
V <- eigen(crossprod(m))$vectors
D <- diag(zapsmall(diag(t(U) %*% m %*% V)))
s <- diag(sign(diag(D)))  # Find the signs of the eigenvalues
U <- U %*% s              # Adjust the columns of U
D <- s %*% D              # Fix up D. (D <- abs(D) would be more efficient.)

n  <- nrow(m)
U1 <- svd(m)$u
V1 <- svd(m)$v
D1 <- diag(svd(m)$d, n, n)

zapsmall(U1 %*% D1 %*% t(V1)) # SVD
zapsmall(U %*% D %*% t(V))    # Hand-rolled SVD
zapsmall(crossprod(U))        # Check that U is orthonormal
zapsmall(tcrossprod(V))       # Check that V is orthonormal
Why cannot I obtain a valid SVD of X via eigenvalue decomposition of XX' and X'X?
As I outlined in a comment to @whuber's answer, this method to compute the SVD doesn't work for every matrix. The issue is not limited to signs. The problem is that there may be repeated eigenvalues, and in this case the eigendecompositions of $A'A$ and $AA'$ are not unique, and not all choices of $U$ and $V$ can be used to retrieve the diagonal factor of the SVD. For instance, if you take any non-diagonal orthogonal matrix (say, $A=\begin{bmatrix}3/5&4/5\\-4/5&3/5\end{bmatrix}$), then $AA'=A'A=I$. Among all possible choices for the eigenvector matrix of $I$, eigen will return $U=V=I$, so in this case $U'AV=A$ is not diagonal. Intuitively, this is another manifestation of the same problem that @whuber outlines: there has to be a "matching" between the columns of $U$ and $V$, and computing two eigendecompositions separately does not ensure it. If all the singular values of $A$ are distinct, then the eigendecomposition is unique (up to scaling/signs) and the method works.

Remark: it is still not a good idea to use it in production code on a computer with floating-point arithmetic, because when you form the products $A'A$ and $AA'$ the computed result may be perturbed by a quantity of the order of $\|A\|^2u$, where $u \approx 2\times 10^{-16}$ is the machine precision. If the magnitudes of the singular values differ greatly (by a factor of more than about $10^{8}$), this is detrimental to the numerical accuracy of the smallest ones. Computing the SVD from the two eigendecompositions is a great learning example, but in real-life applications always use R's svd function to compute the singular value decomposition.
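The orthogonal-matrix failure case is easy to check by hand. A minimal pure-Python sketch (Python rather than R, purely for illustration; the 2x2 matrices are written as nested lists):

```python
# Failure case: A is orthogonal, so A'A = AA' = I, and the identity
# matrix is a perfectly valid eigenvector matrix for both products.
# Then U' A V = A, which is not diagonal, so no singular values
# can be read off from it.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(row) for row in zip(*X)]

A = [[3/5, 4/5], [-4/5, 3/5]]      # orthogonal, non-diagonal
AtA = matmul(transpose(A), A)

# A'A is the identity (up to rounding) ...
assert all(abs(AtA[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(2) for j in range(2))

# ... so U = V = I are legitimate eigenvector matrices, but
# U' A V = A has nonzero off-diagonal entries: not a diagonal D.
U = V = [[1.0, 0.0], [0.0, 1.0]]
D_candidate = matmul(transpose(U), matmul(A, V))
assert abs(D_candidate[0][1]) > 0.5   # off-diagonal entry 4/5
```

If the singular values were distinct, the eigenvector matrices would be essentially unique and this mismatch could not happen.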
27,609
When to use Ridge regression and Lasso regression? What can be achieved by using these techniques rather than the linear regression model?
In short, ridge regression and lasso are regression techniques optimized for prediction, rather than inference.

Normal regression gives you unbiased regression coefficients (maximum-likelihood estimates "as observed in the data set"). Ridge and lasso regression allow you to regularize ("shrink") coefficients. This means that the estimated coefficients are pushed towards 0, to make them work better on new data sets ("optimized for prediction"). This allows you to use complex models and avoid over-fitting at the same time.

For both ridge and lasso you have to set a so-called "meta-parameter" that defines how aggressively regularization is performed. Meta-parameters are usually chosen by cross-validation. For ridge regression the meta-parameter is often called "alpha" or "L2"; it simply defines regularization strength. For LASSO the meta-parameter is often called "lambda" or "L1". In contrast to ridge, the LASSO regularization will actually set less-important predictors to 0 and help you with choosing the predictors that can be left out of the model. The two methods are combined in "Elastic Net" regularization. Here, both parameters can be set, with "L2" defining regularization strength and "L1" the desired sparseness of results.

Here you can find a nice intro to the topic: http://scikit-learn.org/stable/modules/linear_model.html
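For a single predictor with no intercept, both estimators have closed forms, which makes the effect of the regularization strength easy to see: ridge shrinks the coefficient continuously, while a large enough lasso penalty sets it exactly to 0. A pure-Python sketch (the data values are made up for illustration; the closed forms assume one feature and the objective stated in the comments, not any particular library's parameterization):

```python
# One-feature closed forms (objective: 0.5*sum((y - x*b)^2) + penalty):
#   OLS:   b = Sxy / Sxx
#   Ridge: b = Sxy / (Sxx + alpha)       (penalty 0.5*alpha*b^2)
#   Lasso: b = soft(Sxy, lam) / Sxx      (penalty lam*|b|)
# where soft(z, t) = sign(z) * max(|z| - t, 0) is soft-thresholding.

def fit(x, y, alpha=0.0, lam=0.0):
    Sxx = sum(xi * xi for xi in x)
    Sxy = sum(xi * yi for xi, yi in zip(x, y))
    if lam > 0:  # lasso: soft-thresholding can give exactly 0
        mag = max(abs(Sxy) - lam, 0.0)
        return (1 if Sxy >= 0 else -1) * mag / Sxx
    return Sxy / (Sxx + alpha)  # alpha = 0 reduces to plain OLS

x = [-2.0, -1.0, 0.0, 1.0, 2.0]
y = [-3.9, -2.1, 0.1, 1.8, 4.1]

b_ols = fit(x, y)                  # unregularized estimate
b_ridge = fit(x, y, alpha=10.0)    # shrunk toward 0, never exactly 0
b_lasso = fit(x, y, lam=50.0)      # penalty large enough -> exactly 0

assert abs(b_ridge) < abs(b_ols)   # ridge shrinks
assert b_ridge != 0.0
assert b_lasso == 0.0              # lasso "selects the feature out"
```

In practice the meta-parameters alpha and lam would be chosen by cross-validation, as described above.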
27,610
When to use Ridge regression and Lasso regression? What can be achieved by using these techniques rather than the linear regression model?
Even though the linear model may be optimal for the data used to create it, it is not necessarily guaranteed to be the best model for predictions on unseen data. If our underlying data follow a relatively simple model, and the model we use is too complex for the task, what we are essentially doing is putting too much weight on any possible change or variance in the data. Our model is overreacting and overcompensating for even the slightest change in our data. People in the field of statistics and machine learning call this phenomenon overfitting. When you have features in your dataset that are highly linearly correlated with other features, it turns out that linear models are likely to overfit. Ridge regression avoids overfitting by adding a penalty to models that have too-large coefficients.
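The collinearity point can be made concrete with a tiny computation: when two features are perfectly correlated, the matrix $X'X$ in the ordinary least-squares normal equations is singular (so the coefficients are not even uniquely determined), while the ridge-penalized matrix $X'X + \lambda I$ is invertible. A pure-Python sketch with a hypothetical 3x2 design matrix:

```python
# Two perfectly correlated features: the second column is 2x the first.
X = [[1.0, 2.0],
     [2.0, 4.0],
     [3.0, 6.0]]

def xtx(X):
    """Compute the 2x2 Gram matrix X'X."""
    return [[sum(row[i] * row[j] for row in X) for j in range(2)]
            for i in range(2)]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

G = xtx(X)
assert det2(G) == 0.0        # X'X singular: OLS coefficients not unique

lam = 0.1
G_ridge = [[G[i][j] + (lam if i == j else 0.0) for j in range(2)]
           for i in range(2)]
assert det2(G_ridge) > 0.0   # X'X + lam*I invertible: ridge is well posed
```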
27,611
When to use Ridge regression and Lasso regression? What can be achieved by using these techniques rather than the linear regression model?
It all depends on the kind of problem you are dealing with. At first glance, deciding on a model of choice can be tricky. I think you should first understand the dataset and how the features interact with each other, and come up with a good representation of your dataset to be used for modelling; then comes your model of choice.

Suppose you have high dimensionality and high correlation in your dataset. Then you would prefer L1 (lasso) regularization, since it penalizes less-important features more and sets them to zero, which gives you the benefit of algorithmic feature selection and may make more robust predictions than L2 (ridge) regularization. But sometimes it can remove certain signals from the model even when they carry information, so it should be used carefully.

L2 regularization handles model complexity by focusing more on the important features, which contribute more to the overall error than the less-important ones. But still, it uses information from the less-important features in the model. Different features contribute differently to the overall error, and naturally our quest is to focus more on the important features, which contribute more to the error than the less important ones; this can be handled with L2 (ridge) regularization.
27,612
Why are information criteria (not adjusted $R^2$) used to select the appropriate lag order in time series models?
I would argue that at least when discussing linear models (like AR models), adjusted $R^2$ and AIC are not that different. (This discussion is based on Hansen's Econometrics textbook, if I remember correctly.) Consider the question of whether $X_2$ should be included in $$ y=\underset{(n\times K_1)}{X_1}\beta_1+\underset{(n\times K_2)}{X_2}\beta_2+\epsilon $$ This is equivalent to comparing the models \begin{eqnarray*} \mathcal{M}_1&:&y=X_1\beta_1+u\\ \mathcal{M}_2&:&y=X_1\beta_1+X_2\beta_2+u, \end{eqnarray*} where $E(u|X_1,X_2)=0$. We say that $\mathcal{M}_2$ is the true model if $\beta_2\neq0$. Notice that $\mathcal{M}_1\subset\mathcal{M}_2$. The models are thus nested. A model selection procedure $\widehat{\mathcal{M}}$ is a data-dependent rule that selects the most plausible of several models. We say $\widehat{\mathcal{M}}$ is consistent if \begin{eqnarray*} \lim_{n\rightarrow\infty}P\bigl(\widehat{\mathcal{M}}=\mathcal{M}_1|\mathcal{M}_1\bigr)&=&1\\ \lim_{n\rightarrow\infty}P\bigl(\widehat{\mathcal{M}}=\mathcal{M}_2|\mathcal{M}_2\bigr)&=&1 \end{eqnarray*} Consider adjusted $R^2$. That is, choose $\mathcal{M}_1$ if $\bar{R}^2_1>\bar{R}^2_2$. As $\bar{R}^2$ is monotonically decreasing in $s^2$, this procedure is equivalent to minimizing $s^2$. In turn, this is equivalent to minimizing $\log(s^2)$. For sufficiently large $n$, the latter can be written as \begin{eqnarray*} \log(s^2)&=&\log\left(\widehat{\sigma}^2\frac{n}{n-K}\right) \\ &=&\log(\widehat{\sigma}^2)+\log\left(1+\frac{K}{n-K}\right) \\ &\approx&\log(\widehat{\sigma}^2)+\frac{K}{n-K} \\ &\approx&\log(\widehat{\sigma}^2)+\frac{K}{n}, \end{eqnarray*} where $\widehat{\sigma}^2$ is the ML estimator of the error variance. Model selection based on $\bar{R}^2$ is therefore asymptotically equivalent to choosing the model with the smallest $\log(\widehat{\sigma}^2)+K/n$. This procedure is inconsistent. 
Proposition: $$\lim_{n\rightarrow\infty}P\bigl(\bar{R}^2_1>\bar{R}^2_2|\mathcal{M}_1\bigr)<1$$ Proof: \begin{eqnarray*} P\bigl(\bar{R}^2_1>\bar{R}^2_2|\mathcal{M}_1\bigr)&\approx&P\bigl(\log(s^2_1)<\log(s^2_2)|\mathcal{M}_1\bigr) \\ &=&P\bigl(n\log(s^2_1)<n\log(s^2_2)|\mathcal{M}_1\bigr) \\ &\approx&P(n\log(\widehat{\sigma}^2_1)+K_1<n\log(\widehat{\sigma}^2_2)+K_1+K_2|\mathcal{M}_1) \\ &=&P(n[\log(\widehat{\sigma}^2_1)-\log(\widehat{\sigma}^2_2)]<K_2|\mathcal{M}_1) \\ &\rightarrow&P(\chi^2_{K_2}<K_2) \\ &<&1, \end{eqnarray*} where the 2nd-to-last line follows because the statistic is the LR statistic in the linear regression case that follows an asymptotic $\chi^2_{K_2}$ null distribution. QED Now consider Akaike's criterion, $$ AIC=\log(\widehat{\sigma}^2)+2\frac{K}{n} $$ Thus, the AIC also trades off the reduction of the SSR implied by additional regressors against the "penalty term," which points in the opposite direction. Thus, choose $\mathcal{M}_1$ if $AIC_1<AIC_2$, else select $\mathcal{M}_2$. It can be seen that the $AIC$ is also inconsistent by continuing the above proof in line three with $P(n\log(\widehat{\sigma}^2_1)+2K_1<n\log(\widehat{\sigma}^2_2)+2(K_1+K_2)|\mathcal{M}_1)$. The adjusted $R^2$ and the $AIC$ thus choose the "large" model $\mathcal{M}_2$ with positive probability, even if $\mathcal{M}_1$ is the true model. As the penalty for complexity in AIC is a little larger than for adjusted $R^2$, it may be less prone to overselect, though. And it has other nice properties (minimizing the KL divergence to the true model if that is not in the set of models considered) that are not addressed in my post.
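For $K_2 = 1$ the limiting probabilities above can be computed in closed form from the normal CDF, since $P(\chi^2_1 < c) = P(|Z| < \sqrt{c}) = \operatorname{erf}(\sqrt{c/2})$. A quick numeric check of how often each rule overselects asymptotically (pure Python, standard library only):

```python
import math

def chi2_1_cdf(c):
    """P(chi^2 with 1 df < c) = erf(sqrt(c/2))."""
    return math.erf(math.sqrt(c / 2.0))

# Adjusted R^2 keeps the small model iff n*[log s1^2 - log s2^2] < K2;
# AIC keeps it iff the same statistic is < 2*K2. Under M1 the statistic
# converges to chi^2_{K2}, so with K2 = 1:
p_correct_adjr2 = chi2_1_cdf(1.0)   # about 0.683
p_correct_aic = chi2_1_cdf(2.0)     # about 0.843

# Both rules pick the overly large model with positive probability,
# but AIC does so less often because its penalty is larger.
assert p_correct_aic > p_correct_adjr2
```

So adjusted $R^2$ overselects with asymptotic probability about 32%, and AIC with about 16%, matching the remark that AIC's larger penalty makes it less prone to overselect.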
27,613
Why are information criteria (not adjusted $R^2$) used to select the appropriate lag order in time series models?
The penalty in $R^2_{adj}$ does not yield the nice properties in terms of model selection possessed by the AIC or BIC. The penalty in $R^2_{adj}$ is enough to make $R^2_{adj}$ an unbiased estimator of the population $R^2$ when none of the regressors actually belongs to the model (as per Dave Giles' blog posts "In What Sense is the "Adjusted" R-Squared Unbiased?" and "More on the Properties of the "Adjusted" Coefficient of Determination"); however, $R^2_{adj}$ is not an optimal model selector. (There could be a proof by contradiction: if AIC is optimal in one sense and BIC is optimal in another, and $R^2_{adj}$ is not equivalent to either of them, then $R^2_{adj}$ is not optimal in either of these two senses.)
27,614
Structure of Recurrent Neural Network (LSTM, GRU)
A is, in fact, a full layer. The output of the layer, $h_t$, is the neuron output, which can be plugged into a softmax layer (if you want a classification for the time step $t$, for instance) or anything else, such as another LSTM layer if you want to go deeper. The input of this layer is what sets it apart from the regular feedforward network: it takes both the input $x_t$ and the full state of the network in the previous time step (both $h_{t-1}$ and the other variables from the LSTM cell). Note that $h_t$ is a vector. So, if you want to make an analogy with a regular feedforward network with 1 hidden layer, then A could be thought of as taking the place of all of the neurons in the hidden layer (plus the extra complexity of the recurring part).
27,615
Structure of Recurrent Neural Network (LSTM, GRU)
In your image, A is a single hidden layer with a single hidden neuron. From left to right is the time axis, and at the bottom you receive an input at every time step. At the top the network could be further expanded by adding layers.

If you were to unfold this network in time, as is shown visually in your picture (the time axis is unfolded from left to right), then you would obtain a feedforward network with T (the total number of time steps) hidden layers, each containing a single node (neuron), as is drawn in the middle A block.

Hope this answers your question.
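The unfolding can be sketched in a few lines. Below is a minimal, hypothetical recurrent cell with scalar weights (a plain tanh RNN rather than a full LSTM, and the weight values are made up); the point is that the same cell A, with the same parameters, is applied at every time step:

```python
import math

# One recurrent cell A, reused at every time step with shared weights:
#   h_t = tanh(w_x * x_t + w_h * h_{t-1} + b)
w_x, w_h, b = 0.5, 0.8, 0.1

def unroll(xs, h0=0.0):
    """Unfold the cell over time; returns one hidden state per step."""
    hs, h = [], h0
    for x in xs:
        h = math.tanh(w_x * x + w_h * h + b)  # same cell A each step
        hs.append(h)
    return hs

xs = [1.0, -0.5, 0.25]
hs = unroll(xs)

# Unfolded in time, this is a T-layer feedforward chain with one
# neuron per "layer": each h_t depends on x_t and on h_{t-1}.
assert len(hs) == len(xs)
```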
27,616
Structure of Recurrent Neural Network (LSTM, GRU)
I'd like to explain that simple diagram in a relatively complicated context: the attention mechanism in the decoder of the seq2seq model. In the flow diagram below, $h_0$ to $h_{k-1}$ are the time steps (the sequence has the same length as the input, with PADs for blanks). Each time a word is put into the $i$th (time-step) LSTM cell (the same as any one of the three in your image), it calculates the $i$th output according to its previous state (the $(i-1)$th output) and the $i$th input $x_i$. I illustrate your question with this example because here all the states of the time steps are saved for the attention mechanism, rather than being discarded to keep only the last one.

It is just one cell and is viewed as one layer (multiple layers can be stacked to form, for example, a bidirectional encoder in some seq2seq models, to extract more abstract information in the higher layers). It then encodes the sentence (with L words, each represented as a vector of shape embedding_dimension * 1) into a list of L tensors (each of shape num_hidden/num_units * 1). The state passed to the decoder is just the last vector, serving as the sentence embedding, with the same shape as each item in the list.

Picture source: Attention Mechanism
27,617
Vectorization of Cross Entropy Loss
No, the gradients should not be zero for the other components. If your prediction is $\hat y_{ij}$ for some $i,j$ and your observation $y_{ij}=0$, then you predicted too much by $\hat y_{ij}$.
27,618
Vectorization of Cross Entropy Loss
The following is the same content as the edit, but in (for me) slightly clearer step-by-step format:

We are trying to prove that $$\frac{\partial{CE}}{\partial{\theta}} = \hat{y} - y$$ given $$CE(\theta) = -\sum\nolimits_{i}{y_i \log({\hat{y}_{i}})} \quad \text{and} \quad \hat{y}_{i} = \frac{\exp(\theta_i)}{\sum\nolimits_{j}{\exp(\theta_j)}}.$$

We know that $y_{j} = 0$ for $j \neq k$ and $y_k = 1$, so: \begin{align*} CE(\theta) &= -\log({\hat{y}_{k}}) \\ &= -\log\left(\frac{\exp(\theta_k)}{\sum\nolimits_{j}{\exp(\theta_j)}}\right) \\ &= -\theta_k + \log\left(\sum\nolimits_{j}{\exp(\theta_j)}\right) \end{align*} and hence $$\frac{\partial{CE}}{\partial{\theta}} = -\frac{\partial{\theta_k}}{\partial{\theta}} + \frac{\partial}{\partial{\theta}} \log\left(\sum\nolimits_{j}{\exp(\theta_j)}\right).$$

Use the fact that $\frac{\partial{\theta_k}}{\partial{\theta_k}} = 1$ and $\frac{\partial{\theta_k}}{\partial{\theta_q}} = 0$ for $q \neq k$ to show that $$\frac{\partial{\theta_k}}{\partial{\theta}} = y.$$

For the second part, we write out the derivative for each individual element of $\theta$ and use the chain rule to get $$\frac{\partial}{\partial{\theta_i}} \log\left(\sum\nolimits_{j}{\exp(\theta_j)}\right) = \frac{\exp(\theta_i)}{\sum\nolimits_{j}{\exp(\theta_j)}} = \hat{y}_{i}.$$

Hence, $$\frac{\partial{CE}}{\partial{\theta}} = \frac{\partial}{\partial{\theta}} \log\left(\sum\nolimits_{j}{\exp(\theta_j)}\right) - \frac{\partial{\theta_k}}{\partial{\theta}} = \hat{y} - y.$$
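The identity $\partial CE/\partial\theta = \hat y - y$ is easy to verify numerically with central finite differences. A pure-Python sketch (the logit values are arbitrary):

```python
import math

def softmax(theta):
    m = max(theta)  # subtract the max for numerical stability
    exps = [math.exp(t - m) for t in theta]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(theta, k):
    """CE = -log(softmax(theta)[k]) for a one-hot label at index k."""
    return -math.log(softmax(theta)[k])

theta = [0.3, -1.2, 2.0]
k = 1                                  # one-hot y with y_k = 1
y = [1.0 if i == k else 0.0 for i in range(3)]

analytic = [p - yi for p, yi in zip(softmax(theta), y)]  # yhat - y

# Central differences: d/dtheta_i CE ~ (CE(+eps) - CE(-eps)) / (2 eps)
eps = 1e-6
numeric = []
for i in range(3):
    tp = list(theta); tp[i] += eps
    tm = list(theta); tm[i] -= eps
    numeric.append((cross_entropy(tp, k) - cross_entropy(tm, k)) / (2 * eps))

assert all(abs(a - n) < 1e-6 for a, n in zip(analytic, numeric))
```

Note that the analytic gradient sums to zero, since the softmax probabilities and the one-hot label both sum to one.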
27,619
How to write an AR(2) stationary process in the Wold representation
Let $X_t$ be a zero-mean covariance-stationary time series such that $$X_t = \varphi_1 X_{t-1} + \varphi_2 X_{t-2} + \varepsilon_t$$ where $\varepsilon_t$ is white noise. Using $L$ to mean the lag (backshift) operator, the above can be expressed as $$(1-\varphi_1L - \varphi_2L^2)X_t=\varepsilon_t . \tag{1}$$ Since $X_t$ is a covariance-stationary AR(2) process, the roots of its characteristic polynomial $(1-\varphi_1 z - \varphi_2 z^2) = 0$ must lie outside the unit circle. Thus, Equation (1) can be written as $$(1-\lambda_1 L)(1-\lambda_2 L)X_t=\varepsilon_t $$ where $\lvert \lambda_1 \rvert<1$ and $\lvert \lambda_2 \rvert<1$. The last two inequalities hold for a covariance-stationary AR(2) process because the roots of the characteristic polynomial, $z^*_1=1/ \lambda_1$ and $z^*_2=1/ \lambda_2$, lie outside the unit circle. Therefore, $$X_t= \frac{1}{(1-\lambda_1 L)} \frac{1}{(1- \lambda_2 L)} \varepsilon_t .$$ Expand the two fractions on the right-hand side using the geometric series, and you'll have the Wold decomposition of the AR(2).
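To make the last step concrete, here is a small Python sketch (the $\lambda$ values are arbitrary illustrations): convolving the two geometric series $\sum_i \lambda_1^i L^i$ and $\sum_i \lambda_2^i L^i$ gives the $\psi$-weights of the Wold representation, which indeed satisfy the AR(2) recursion $\psi_j = \varphi_1\psi_{j-1} + \varphi_2\psi_{j-2}$:

```python
lam1, lam2 = 0.6, -0.3  # arbitrary factors with |lam| < 1

# (1 - lam1*L)(1 - lam2*L) = 1 - (lam1+lam2)L + lam1*lam2*L^2,
# so matching 1 - phi1*L - phi2*L^2 gives:
phi1, phi2 = lam1 + lam2, -lam1 * lam2

# psi_j is the coefficient of L^j in the product of the two geometric series:
# psi_j = sum_{i=0}^{j} lam1^i * lam2^(j-i)
n = 10
psi = [sum(lam1**i * lam2**(j - i) for i in range(j + 1)) for j in range(n)]

# Check psi_0 = 1, psi_1 = phi1, and the AR(2) recursion for j >= 2
assert abs(psi[0] - 1.0) < 1e-12
assert abs(psi[1] - phi1) < 1e-12
for j in range(2, n):
    assert abs(psi[j] - (phi1 * psi[j - 1] + phi2 * psi[j - 2])) < 1e-12

print(psi[:5])
```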
27,620
How to write an AR(2) stationary process in the Wold representation
The Wold representation is an infinite weighted sum of the current and past innovations $\epsilon_t$: $$ X_t = \psi_0 \epsilon_t + \psi_1 \epsilon_{t-1} + \psi_2 \epsilon_{t-2} + ... = \sum_{i=0}^\infty \psi_i \epsilon_{t-i} $$ If you are interested in obtaining the values of the first weights, $\psi_i$, for a given ARMA process, you can proceed as follows (more details are given in Brockwell and Davis (1991) Time Series: Theory and Methods, §3.3). By definition, we have: $$ X_t = \psi(L) \epsilon_t \quad \hbox{ and } \quad \phi(L) X_t = \theta(L) \epsilon_t \rightarrow X_t = \frac{\theta(L)}{\phi(L)} \epsilon_t $$ where $\psi(L)$ is the infinite-order polynomial of the Wold representation, $\phi(L)$ is the autoregressive polynomial and $\theta(L)$ is the moving average polynomial. Your question is about an AR process, but for generality I will consider an ARMA process, which may also be of interest for this question. Thus, we can write: $$ \psi(L) \epsilon_t = X_t = \frac{\theta(L)}{\phi(L)} \epsilon_t \rightarrow \psi(L) \phi(L) \epsilon_t = \theta(L) \epsilon_t $$ The values of $\psi_i$ can be obtained by equating the coefficients of the same lags, $L^i$, on both sides of the last equation, $\psi(L) \phi(L) = \theta(L)$. Example: take the following ARMA(2,2) process: $$ X_t = 0.4 X_{t-1} + 0.2 X_{t-2} + \epsilon_t + 0.3 \epsilon_{t-1} - 0.4 \epsilon_{t-2} $$ You can check that the values $\psi_i$ are obtained recursively (normalizing $\psi_0=1$, $\phi_0=0$ and $\theta_0=0$): \begin{eqnarray} \begin{array}{l} \psi_1 = \theta_1 + \phi_1 = 0.3 + 0.4 = 0.7 \\ \psi_2 = \theta_2 + \phi_2 + \phi_1 \psi_1 = -0.4 + 0.2 + 0.7\times0.4 = 0.08 \\ \psi_3 = \phi_1 \psi_2 + \phi_2 \psi_1 = 0.4 \times 0.08 + 0.2 \times 0.7 = 0.172 \\ \psi_4 = \phi_1 \psi_3 + \phi_2 \psi_2 = 0.4 \times 0.172 + 0.2 \times 0.08 = 0.0848 \\ \psi_5 = \phi_1 \psi_4 + \phi_2 \psi_3 = 0.4 \times 0.0848 + 0.2 \times 0.172 = 0.0683 \\ \psi_6 = \phi_1 \psi_5 + \phi_2 \psi_4 = 0.0443 \\ ... \end{array} \end{eqnarray} For an AR(2) process you can simply set $\theta_1 = \theta_2 = 0$.
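The recursion is straightforward to script. A minimal Python sketch (the function name and interface are my own) reproduces the $\psi$-values of the example above:

```python
def psi_weights(phi, theta, n):
    """Wold (psi) weights from matching coefficients of L^j in
    psi(L) * phi(L) = theta(L).

    phi and theta are lists [phi_1, phi_2, ...] and [theta_1, theta_2, ...];
    psi_0 is normalized to 1.
    """
    psi = [1.0]
    for j in range(1, n + 1):
        th = theta[j - 1] if j <= len(theta) else 0.0
        ar = sum(phi[i - 1] * psi[j - i] for i in range(1, min(j, len(phi)) + 1))
        psi.append(th + ar)
    return psi

# The ARMA(2,2) example from above
psi = psi_weights([0.4, 0.2], [0.3, -0.4], 6)
print([round(p, 4) for p in psi])  # [1.0, 0.7, 0.08, 0.172, 0.0848, 0.0683, 0.0443]
```

For the AR(2) case of the question, simply pass an empty `theta` list.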
27,621
Prove expression for variance AR(1)
$Var(X_t)=Var(\delta+\phi X_{t-1}+\eta_t)=0+\phi^2 Var(X_{t-1})+\sigma^2_{\eta}$, using that $\eta_t$ is uncorrelated with $X_{t-1}$. By stationarity, $Var(X_{t-1})=Var(X_t)$, so $(1-\phi^2)Var(X_t)=\sigma^2_{\eta}$ and therefore $Var(X_t)=\frac{\sigma^2_{\eta}}{1-\phi^2}$.
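A quick simulation check of this result (pure-Python sketch; the parameter values are arbitrary illustrations):

```python
import random

random.seed(0)
phi, delta, sigma = 0.6, 1.0, 2.0  # |phi| < 1 for stationarity

# Simulate a long AR(1) path: x_t = delta + phi*x_{t-1} + eta_t
n, burn = 200_000, 1_000
x, xs = 0.0, []
for t in range(n + burn):
    x = delta + phi * x + random.gauss(0.0, sigma)
    if t >= burn:  # discard burn-in so the path is approximately stationary
        xs.append(x)

mean = sum(xs) / len(xs)
var = sum((v - mean) ** 2 for v in xs) / len(xs)

# Empirical variance vs the theoretical sigma^2/(1-phi^2) = 4/0.64 = 6.25
print(var, sigma**2 / (1 - phi**2))
```

The empirical variance should agree with $\sigma^2_\eta/(1-\phi^2)$ up to Monte Carlo error; the empirical mean similarly approximates $\delta/(1-\phi)$.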
27,622
Prove expression for variance AR(1)
\begin{align} \mathrm{Var}\left[X_t\right] &= \mathrm{E}\left[X_t\left(\delta+\phi X_{t-1}+\eta_t\right)\right]-\mathrm{E}^2\left[X_t\right]\\ &= \delta\mathrm{E}\left[X_t\right]+\phi\mathrm{E}\left[X_tX_{t-1}\right]+\mathrm{E}\left[X_t\eta_t\right]-\frac{\delta^2}{\left(1-\phi\right)^2}\\ &= \delta\mathrm{E}\left[X_t\right]+\phi\left(\mathrm{Cov}\left[X_t,X_{t-1}\right]+\mathrm{E}\left[X_t\right]\mathrm{E}\left[X_{t-1}\right]\right)+\mathrm{E}\left[X_t\eta_t\right]-\frac{\delta^2}{\left(1-\phi\right)^2}\\ &= -\frac{\phi\delta^2}{\left(1-\phi\right)^2}+\phi\left(\gamma_{1,x}+\frac{\delta^2}{\left(1-\phi\right)^2}\right)+\sigma_\eta^2\\ &= \phi\gamma_{1,x}+\sigma_\eta^2\\ &= \phi^2\sigma_x^2+\sigma_\eta^2\\ &= \phi^2\mathrm{Var}\left[X_t\right]+\sigma_\eta^2\\ &\\ \mathrm{Var}\left[X_t\right] &= \frac{\sigma_\eta^2}{1-\phi^2}\\ \end{align} (using $\mathrm{E}[X_t]=\delta/(1-\phi)$, $\mathrm{E}[X_t\eta_t]=\sigma_\eta^2$, and $\gamma_{1,x}=\mathrm{Cov}[X_t,X_{t-1}]=\phi\sigma_x^2$ for a stationary AR(1)).
27,623
auto.arima does not recognize seasonal pattern
R will not fit an ARIMA model with a seasonal period greater than 350. See http://robjhyndman.com/hyndsight/longseasonality/ for a discussion of this issue. The solution is to use Fourier terms for the seasonality, and ARMA errors for the short-term dynamics.
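For illustration, the Fourier regressors themselves are easy to construct. This is a plain-Python sketch (the period and number of harmonics K are arbitrary choices) of the sine/cosine columns you would pass as external regressors alongside an ARMA error model:

```python
import math

def fourier_terms(n, period, K):
    """K sine/cosine pairs at harmonics of the seasonal period, one row
    per time point. These serve as deterministic seasonal regressors in
    place of a seasonal ARIMA with a very long period."""
    rows = []
    for t in range(1, n + 1):
        row = []
        for k in range(1, K + 1):
            row.append(math.sin(2 * math.pi * k * t / period))
            row.append(math.cos(2 * math.pi * k * t / period))
        rows.append(row)
    return rows

# Two years of daily data with annual seasonality, 3 harmonics
X = fourier_terms(n=730, period=365.25, K=3)
print(len(X), len(X[0]))  # 730 6
```

In R, the `fourier()` function from the forecast package produces the analogous regressor matrix, which you then supply to `auto.arima` via its `xreg` argument.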
27,624
auto.arima does not recognize seasonal pattern
The solution to your problem, as Rob points out, is to combine deterministic effects (week of the year) and stochastic effects (ARIMA structure) while isolating unusual days and detecting the possible presence of one or more level shifts and/or one or more local time trends. AUTOBOX, the software used for the analysis, was in part developed by me to automatically provide robust modeling for data sets like this. I have placed your data at http://www.autobox.com/weather/weather.txt. (The ACF of the original data, the automatically selected model, its statistics, the residual plot, the forecasts for the next 60 days, and the actual/fit/forecast graph were shown as images in the original answer.) It might be interesting for others to follow Prof. Hyndman's advice and report their final model with diagnostic checks of the residuals and parameter tests of significance. I am personally uncomfortable with the suggestion to first perform a Fourier analysis (possibly/probably impacted by anomalies) and then fit an ARIMA to the residuals, as that is not a simultaneous solution leading to one equation but rather a presumptive sequence. My equation uses week-of-the-month and also includes an AR(1) term and remedies for the unusual data points. All software has limitations and it is good to know them. Again I reiterate: why doesn't somebody try to implement Rob's suggestions and show the complete results?
27,625
Why does the rank of the design matrix X equal the rank of X'X?
For any matrix $X$, $R(X'X) = R(X)$, where $R(\cdot)$ denotes the rank. You can prove this using null spaces. If $Xz=0$ for some $z$, then clearly $X'Xz = 0$. Conversely, if $X'Xz=0$, then $z'X'Xz=0$, i.e. $\lVert Xz\rVert^2=0$, and it follows that $Xz=0$. This implies $X$ and $X'X$ have the same null space; since both matrices have the same number of columns, the rank-nullity theorem then gives the result.
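A small numerical illustration (pure-Python sketch with exact rational row reduction; the example matrix is arbitrary, with its third column equal to the sum of the first two, so its rank is 2):

```python
from fractions import Fraction

def rank(A):
    # Gauss-Jordan elimination in exact rational arithmetic; the rank is
    # the number of pivot rows found.
    A = [[Fraction(v) for v in row] for row in A]
    r, rows, cols = 0, len(A), len(A[0])
    for c in range(cols):
        piv = next((i for i in range(r, rows) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(rows):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def gram(X):
    # X'X: the (a, b) entry is the inner product of columns a and b of X.
    n, p = len(X), len(X[0])
    return [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)] for a in range(p)]

# 4x3 design matrix whose third column is the sum of the first two
X = [[1, 0, 1], [0, 1, 1], [1, 1, 2], [2, 1, 3]]
print(rank(X), rank(gram(X)))  # 2 2
```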
27,626
Two seasonal periods in ARIMA using R
There are no R packages that handle multiple seasonality for ARIMA models as far as I know. You could try the forecast package which implements multiple seasonality using models based on exponential smoothing. The dshw, bats and tbats functions will all handle data with two seasonal periods.
27,627
Two seasonal periods in ARIMA using R
I found this paper: Au et al., "Automatic Forecasting of Double Seasonal Time Series with Applications on Mobility Network Traffic Prediction". It is about predicting mobile network traffic using an automatic double-seasonal ARIMA. As it is a research paper, it clearly describes the algorithm one can follow to implement multi-seasonal ARIMA prediction. So far, it has given me enough background to proceed further with my research.
27,628
Winbugs and other MCMC without information for prior distribution
Parameters in the linear predictor are t-distributed. As the number of records goes to infinity, this converges to a normal distribution. So yes, it is normally considered correct to assume a normal distribution of the parameters. Anyway, in Bayesian statistics you need not assume a parameter distribution. Normally you specify so-called uninformative priors. For each case, different uninformative priors are recommended. In this case, people often use something like (you can tweak the values, of course): dunif(-100000, 100000) or dnorm(0, 1/10^10). The second one is preferred because it is not limited to a bounded range of values. With uninformative priors, you take no risk. You can of course limit them to a particular interval, but be careful. So, you specify an uninformative prior and the parameter distribution will come out by itself! No need to make any assumptions about it.
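The practical effect of such a vague prior can be illustrated in the conjugate normal-mean case, where the posterior is available in closed form. This is a plain-Python sketch with made-up data: with prior precision $1/10^{10}$ (the dnorm(0, 1/10^10) specification above, since BUGS parameterizes dnorm by precision), the posterior mean essentially equals the sample mean, while a tight prior pulls it toward the prior mean:

```python
def posterior_normal_mean(data, sigma2, prior_mean, prior_var):
    """Posterior of a normal mean with known observation variance sigma2
    and a conjugate N(prior_mean, prior_var) prior."""
    n, xbar = len(data), sum(data) / len(data)
    post_var = 1.0 / (1.0 / prior_var + n / sigma2)
    post_mean = post_var * (prior_mean / prior_var + n * xbar / sigma2)
    return post_mean, post_var

data = [4.1, 5.3, 4.8, 5.0, 4.6]  # illustrative sample, xbar = 4.76

# Vague prior: variance 10^10, i.e. precision 1/10^10 as in dnorm(0, 1/10^10)
vague = posterior_normal_mean(data, 1.0, 0.0, 10**10)
print(round(vague[0], 4))  # ~4.76: the data dominate

# Strongly informative prior centered at 0 with variance 0.1
tight = posterior_normal_mean(data, 1.0, 0.0, 0.1)
print(round(tight[0], 4))  # pulled well below the sample mean, toward 0
```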
27,629
Winbugs and other MCMC without information for prior distribution
Unfortunately, harmless-seeming priors can be very dangerous (and have even fooled some seasoned Bayesians). This recent paper provides a nice introduction, along with plotting methods to visualize the prior and posterior (usually the marginal prior/posterior for the parameter(s) of interest): Seaman III, J. W., Seaman Jr., J. W., and Stamey, J. D. (2012), "Hidden Dangers of Specifying Noninformative Priors," The American Statistician, 66(2), 77-84. http://amstat.tandfonline.com/doi/full/10.1080/00031305.2012.695938 Such plots, in my opinion, should be obligatory in any actual Bayesian analysis, even if the analyst does not need them: what is happening in a Bayesian analysis should be made clear for most readers.
27,630
Winbugs and other MCMC without information for prior distribution
Sensitivity analysis is usually a good way to go: try different priors and see how your results change with them. If they are robust, you'll probably be able to convince many people about your results. Otherwise, you'll probably want to somehow quantify how the priors change the results.
27,631
Is it ok to fit a Bayesian model first, then begin weakening priors?
Subjective Bayesians might disagree, but from my perspective, the prior is just a part of the model, like the likelihood. Changing the prior in response to model behavior is no better or worse than changing your likelihood function (e.g. trying different error distributions or different model formulations). It can be dangerous if it lets you go on a fishing expedition, but the alternatives can be worse. For example, in the case you mentioned, where your model blows up and you get nonsensical coefficients, you don't have much choice but to try again. Also, there are steps you can take to minimize the dangers of a fishing expedition somewhat: (1) deciding in advance which prior you'll use in the final analysis; (2) being up-front, when you publish or describe your analysis, about your whole procedure; (3) doing as much as possible with simulated data and/or holding out data for the final analysis. That way, you won't contaminate your analysis too much.
27,632
Is it ok to fit a Bayesian model first, then begin weakening priors?
I think you're okay in this case, for three reasons: (1) You're not actually adjusting your priors in response to your results. If you said something like, "I use XYZ priors and, depending on the rate of convergence and my DIC results, I then modify my prior by ABC," then I'd say you were committing a no-no, but in this case it sounds like you really are not doing that. (2) In a Bayesian context, priors are explicit. So it's possible for you to tweak your priors improperly, but the resultant priors will always be visible for inspection by others, who can question why you have those particular priors. Maybe I'm naive here, since it's easy to glance at something like a prior and say, "Hmm, looks reasonable" simply because someone offered it up, but... (3) I think what you're doing is related to Gelman's (and others') advice to build up a JAGS model piece by piece, first working with synthetic data, then real data, to make sure you don't have a specification error. That's not really a factor in frequentist methodology, and it's not really an experimental methodology. Then again, I'm still learning this stuff myself. P.S. When you say you originally rig it to converge quickly with "informative priors", do you mean actually informative priors that are motivated by the problem at hand, or just priors that for arbitrary reasons strongly push/restrict the posterior to speed up "convergence" to some arbitrary point? If it's the first case, why are you then moving away from these (motivated) priors?
27,633
Is it ok to fit a Bayesian model first, then begin weakening priors?
If you experiment with priors and select one in terms of its performances on the data at hand, it is no longer a "prior". Not only does it depend on the data (as in an empirical Bayes analysis), but it also depends on what you want to see (which is worse). In the end, you do use Bayesian tools, but this cannot be called a Bayesian analysis.
27,634
Is it ok to fit a Bayesian model first, then begin weakening priors?
I would say no, you do not have to commit to specific priors. Generally, during any Bayesian data analysis you should perform an analysis of the sensitivity of the model to the prior. That would include trying various other priors to see what happens to the results. This might reveal a better or more robust prior that should be used. The two obvious "no-no's" are: playing around with the prior too much to get a better fit, resulting in overfitting, and changing the other parameters of the model to get a better fit. As an example of the first: changing an initial prior on the mean so that it is closer to the sample mean. For the second: changing your explanatory variables/features in a regression to get a better fit. This is a problem in any version of regression and basically invalidates your degrees of freedom.
27,635
Is it ok to fit a Bayesian model first, then begin weakening priors?
I think this might be a no-no independent of the Bayesian school. Jeffreys would want to use noninformative priors. Lindley might want you to use informative priors. Empirical Bayesians would ask that you let the data influence the prior. But although each school makes a different suggestion about the choice of prior, none of their approaches means that you can take the prior and keep tweaking it until you get the results you want. That would definitely be like looking at the data and continuing to collect data and test until you reach your preconceived notion of what the answer should be. Frequentist or Bayesian, it doesn't matter: I don't think anyone would want you to play tricks with (or massage) the data. Maybe this is something we all can agree on, and Peter's funny poem is really apropos.
27,636
Can I use a variable which has a non-linear relationship to the dependent variable in logistic regression?
You would want to use a flexible formulation that would capture non-linearity automatically, e.g., some version of a generalized additive model. A poor man's choice is a polynomial $x_k$, $x_k^2$, ..., $x_k^{p_k}$, but such polynomials produce terrible overswings at the ends of the range of their respective variables. A much better formulation would be to use (cubic) B-splines (see a random intro note from the first page of Google here, and a good book, here). B-splines are a sequence of local humps: http://ars.sciencedirect.com/content/image/1-s2.0-S0169743911002292-gr2.jpg The height of the humps is determined from your (linear, logistic, other GLM) regression, as the function you are fitting is simply $$ \theta = \beta_0 + \sum_{k=1}^K \beta_k B\Bigl( \frac{x-x_k}{h_k} \Bigr) $$ for the specified functional form of your hump $B(\cdot)$. By far the most popular version is a bell-shaped smooth cubic spline: $$ B(z) = \left\{ \begin{array}{ll} \frac14 (z+2)^3, & -2 \le z \le -1 \\ \frac14 (3|z|^3 - 6z^2 + 4), & -1 < z < 1 \\ \frac14 (2-z)^3, & 1 \le z \le 2 \\ 0, & \mbox{otherwise} \end{array} \right. $$ On the implementation side, all you need to do is to set up however many knots $x_k$ (3, 5, 10, whatever) are reasonable for your application and create the corresponding variables in the data set with the values of $B\Bigl( \frac{x-x_k}{h_k} \Bigr)$. Typically, a simple grid of values is chosen, with $h_k$ being twice the mesh size of the grid, so that at each point there are two overlapping B-splines, as in the above plot.
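As an illustrative sketch (not part of the original answer), the bell-shaped cubic and the resulting design columns can be coded directly; `bell_spline` and `bspline_design` are made-up names, and the design matrix they produce would then be fed to any GLM fitter:

```python
def bell_spline(z):
    """Bell-shaped cubic B-spline B(z), supported on [-2, 2] (formula above)."""
    a = abs(z)
    if a < 1:
        return 0.25 * (3 * a**3 - 6 * z * z + 4)
    if a <= 2:
        return 0.25 * (2 - a) ** 3
    return 0.0

def bspline_design(xs, knots, h):
    """One column B((x - x_k)/h) per knot x_k, one row per observation x."""
    return [[bell_spline((x - k) / h) for x_k in [k] for k in [x_k]] for x in xs for knots_ in [knots]] if False else \
           [[bell_spline((x - k) / h) for k in knots] for x in xs]
```

Note the spline is continuous at the junctions: $B(\pm 1) = \tfrac14$ from both pieces, and $B(\pm 2) = 0$, so adjacent humps blend smoothly.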
27,637
Can I use a variable which has a non-linear relationship to the dependent variable in logistic regression?
Just like linear regression, logistic regression and, more generally, generalized linear models are required to be linear in the parameters but not necessarily in the covariates. So polynomial terms, like the quadratic that Macro suggests, can be used. This is a common misunderstanding of the "linear" term in generalized linear models. Nonlinear models are models that are nonlinear in the parameters. If the model is linear in the parameters and contains additive noise terms that are IID, the model is linear even if there are covariates like $X^2$, $\log X$, or $\exp(X)$. As I now read the question it seems to have been edited. My specific answer would be yes to 1 and not necessary to 2.
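To make the "linear in the parameters" point concrete, here is a small Python sketch (the names are illustrative, not from the answer): the covariates are nonlinear transforms of $X$, yet the linear predictor is still just a dot product with the coefficient vector.

```python
import math

def design_row(x):
    # Nonlinear covariates: x, x^2, log(x) -- but the predictor below
    # is still linear in the coefficients beta, so this is a linear model.
    return [1.0, x, x * x, math.log(x)]

def linear_predictor(beta, x):
    return sum(b * f for b, f in zip(beta, design_row(x)))
```

A logistic regression would simply pass this linear predictor through the logistic link; the fitting problem remains linear in $\beta$.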
27,638
Can I use a variable which has a non-linear relationship to the dependent variable in logistic regression?
Another viable alternative that the modeling shop I work for routinely employs, is binning the continuous independent variables and substituting the 'bad rate'. This forces a linear relationship.
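A minimal sketch of that binning idea, assuming equal-count bins (`bad_rate_encode` is a hypothetical name, not the shop's actual code): each continuous value is replaced by the event ("bad") rate of the bin it falls into.

```python
def bad_rate_encode(x, y, n_bins=4):
    """Replace each x value with the event ('bad') rate of its equal-count bin."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    size = len(x) // n_bins
    rate_of = {}
    for b in range(n_bins):
        lo = b * size
        hi = len(x) if b == n_bins - 1 else (b + 1) * size  # last bin takes the remainder
        members = order[lo:hi]
        rate = sum(y[i] for i in members) / len(members)
        for i in members:
            rate_of[i] = rate
    return [rate_of[i] for i in range(len(x))]
```

Since the encoded variable is itself an event rate, its relationship to the log-odds is (by construction) close to monotone and roughly linear, which is the point of the trick.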
27,639
Understanding factor analysis
No. In factor analysis, all variables are dependent variables, and they depend on latent factors (and also contain measurement errors). While factor scores are often used in place of the original variables, which may seem like a data reduction issue, this is precisely what factor analysis is aimed at. In other words, rather than saying, "Wow, I've got a lot of data that I cannot really process and understand; can I come up with a trick to have fewer variables?", factor analysis is usually performed in the situation "I cannot measure a thing directly, so I will try different approaches to it; I know I will have a lot of data, but this would be related data of known structure, and I shall be able to exploit that structure to learn about that thing that I could not measure directly". What you described qualifies either as multivariate regression (don't confuse with multiple regression, which encompasses one dependent variable and many explanatory variables; multivariate regression has many dependent variables and the same set of explanatory variables in each individual regression), or canonical correlations (with some stretch of imagination though), or a multiple indicators and multiple causes structural equation model, may be. But no, this is not factor analysis.
27,640
Understanding factor analysis
to add to @StasK's excellent response, i will clarify further by saying that this problem does fall under the general umbrella of structural equation modeling (SEM). SEM is a technique that can be employed to model covariance structures and, while typically used with unobserved or latent variables, it can also be applied to models with only observed or manifest variables. applying SEM methodology and terminology to your problem, D and E would be considered endogenous variables while A, B, and C are exogenous variables. endogeny suggests that variance in the particular variable is explained by another variable while exogeny suggests that variance is not explained by another variable, latent or manifest. werner wothke provides some good slides introducing SEM using SAS here. also look for ed rigdon's site discussing a variety of SEM issues (too new, can't link!). getting back to basics, if your goal is to understand factor analysis, i would suggest starting with an applied text like brown's confirmatory factor analysis for applied research.
27,641
Expected number of unseen cards when drawing $2n$ cards from a deck of size $n$
Hint: On any given draw, the probability that a card is not chosen is $\frac{n-1}{n}$. And since we're drawing with replacement, I assume we can say that each draw is independent of the others. So the probability that a card is not chosen in $2n$ draws is...
27,642
Expected number of unseen cards when drawing $2n$ cards from a deck of size $n$
Thank you Mike for the hint. This is what I came up with. Let $X_i$ be a Bernoulli random variable where $X_i = 1$ if the $i^{th}$ card has never been drawn. Then $p_i = P(X_i=1) = (\frac{n-1}{n})^{2n}$, but since $p_i$ is the same for all $i$, let $p=p_i$. Now let $\displaystyle X = \sum_{i=1}^n X_i$ be the number of cards not drawn after $2n$ draws. Then $\displaystyle E[X] = E[\sum_{i=1}^n X_i] = \sum_{i=1}^n E[X_i] = \sum_{i=1}^n p = np$ And that does it I think.
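A quick numeric check of $E[X] = np$ in Python ($n = 52$ is just an illustrative choice, not from the question); note that as $n$ grows, $p = \left(\frac{n-1}{n}\right)^{2n} \to e^{-2} \approx 0.135$:

```python
import math

n = 52
p = ((n - 1) / n) ** (2 * n)   # P(a given card is never drawn in 2n draws)
expected_unseen = n * p        # E[X] = np, roughly 6.9 cards for n = 52
# p is already close to the large-n limit e^{-2}
```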
27,643
Expected number of unseen cards when drawing $2n$ cards from a deck of size $n$
Here is some R code to validate the theory.

evCards <- function(n) {
  iter <- 10000
  cards <- 1:n
  result <- 0
  for (i in 1:iter) {
    draws <- sample(cards, 2 * n, replace = TRUE)
    uniqueDraws <- unique(draws)
    noUnique <- length(uniqueDraws)
    noNotSeen <- n - noUnique
    result <- result + noNotSeen
  }
  simulAvg <- result / iter
  theoryAvg <- n * ((n - 1) / n)^(2 * n)
  output <- list(simulAvg = simulAvg, theoryAvg = theoryAvg)
  return(output)
}
27,644
Finding median survival time from survival function
Assuming your survival curve is the basic Kaplan-Meier type survival curve, this is a way to obtain the median survival time. From Machin et al. Survival Analysis: A Practical Approach: If there are no censored observations (...) the median survival time, $M$, is estimated by the middle observation of the ranked survival times $t_{(1)}, t_{(2)},\ldots,t_{(n)}$ if the number of observations, $n$, is odd, and by the average of $t_{(\frac{n}{2})}$ and $t_{(\frac{n}{2}+1)}$ if $n$ is even, that is, $$ M = \left\{\begin{array}{ll} {t_{(\frac{n + 1}{2})}} & \text{if}\ n\ \text{odd}; \\ \frac{1}{2}\left[{t_{(\frac{n}{2})}} + {t_{(\frac{n}{2} + 1)}}\right] & \text{otherwise}. \end{array}\right. $$ In the presence of censored survival times the median survival is estimated by first calculating the Kaplan-Meier survival curve, then finding the value of $M$ that satisfies the equation $S(M) = 0.5$. This can either be done, as you suggested, using a graphical technique with your curve, or using the survival function estimates used to construct said curve.
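The "find the value of $M$ that satisfies $S(M) = 0.5$" step can be sketched in Python (a toy reading of a Kaplan-Meier step function; `km_median` is a hypothetical helper, and since the estimate is a step function, the convention used here is the first event time at which it drops to 0.5 or below):

```python
def km_median(times, surv):
    """First event time where the KM estimate S(t) is <= 0.5.

    times: sorted event times; surv: S(t) just after each event time.
    Returns None when the curve never reaches 0.5 (median not estimable).
    """
    for t, s in zip(times, surv):
        if s <= 0.5:
            return t
    return None
```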
27,645
Finding median survival time from survival function
In case you wanted a hands-on example on how to get the median survival in R:

library(survival)
data(aml)

# Get the survival curve by x groups
leukemia.surv <- survfit(Surv(time, status) ~ x, data = aml)

# Get the median time
print(leukemia.surv)

# Do a KM plot
col <- c("blue", "red")
plot(leukemia.surv, lwd = 2, col = col, xlim = c(0, 50),
     ylab = "Survival", xlab = "Time")

# Mark the 50 % survival
abline(a = .5, b = 0)
title("AML")
legend("topright", fill = col, inset = .1,
       legend = c("Nonmaintained", "Maintained"))

This gives you this plot: and the print(leukemia.surv) gives the exact median survival:

> print(leukemia.surv)
Call: survfit(formula = Surv(time, status) ~ x, data = aml)

                records n.max n.start events median 0.95LCL 0.95UCL
x=Maintained         11    11      11      7     31      18      NA
x=Nonmaintained      12    12      12     11     23       8      NA
27,646
Finding median survival time from survival function
Here is some extra: In SAS 9.1, the $p$th sample percentile of the survival time distribution is computed as $q_{p} = \frac{1}{2} \left( \inf \left\{ t: 1 - \hat{S}(t) \geq p \right\} + \sup \left\{ t: 1 - \hat{S}(t) \leq p \right\} \right)$ where the $t$'s are those from your observed survival times. For example, the first sample quartile is given by $q_{0.25} = \frac{1}{2} \left( \inf \left\{ t: 1 - \hat{S}(t) \geq 0.25 \right\} + \sup \left\{ t: 1 - \hat{S}(t) \leq 0.25 \right\} \right)$ The associated $100(1 - \alpha)\%$ confidence interval is calculated as the set $I_{p} = \left\{ t: -z_{1 - \tfrac{\alpha}{2}} \leq \frac{\hat{S}(t) - (1-p)}{\sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}} \right\}$ where $z_{1 - \tfrac{\alpha}{2}}$ stands for the $(1 - \tfrac{\alpha}{2})$th percentile of a standard normal distribution and where $\hat{V}(\hat{S}(t))$ is given by Greenwood's formula. Note that, for instance, if there is no $t$ such that $\frac{\hat{S}(t) - (1-p)}{\sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}}$ then the upper limit of $I_{p}$ is undetermined. You can also use the conftype= option to construct a confidence interval based on a $g$-transformed confidence interval for $S(t)$: $I'_{p} = \left\{ t: -z_{1 - \tfrac{\alpha}{2}} \leq \frac{g(\hat{S}(t)) - g((1-p))}{g'(\hat{S}(t)) \sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}} \right\}$ By default in SAS 9.1, conftype=linear for which $g(x)=x$. We obtain slightly different results when conftype=loglog for example but the prevailing tendency is unchanged. Of note, the confidence of the interval is generally less than $95\%$ and SAS extends it to the next event time (not included).
Finding median survival time from survival function
Here is some extra detail: in SAS 9.1, the $p$th sample percentile of the survival time distribution is computed as $q_{p} = \frac{1}{2} \left( \inf \left\{ t: 1 - \hat{S}(t) \geq p \right\} + \sup \left\{ t: 1 - \hat{S}(t) \leq p \right\} \right)$ where the $t$'s are the observed survival times. For example, the first sample quartile is given by $q_{0.25} = \frac{1}{2} \left( \inf \left\{ t: 1 - \hat{S}(t) \geq 0.25 \right\} + \sup \left\{ t: 1 - \hat{S}(t) \leq 0.25 \right\} \right)$. The associated $100(1 - \alpha)\%$ confidence interval is calculated as the set $I_{p} = \left\{ t: -z_{1 - \tfrac{\alpha}{2}} \leq \frac{\hat{S}(t) - (1-p)}{\sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}} \right\}$, where $z_{1 - \tfrac{\alpha}{2}}$ stands for the $(1 - \tfrac{\alpha}{2})$th percentile of a standard normal distribution and $\hat{V}(\hat{S}(t))$ is given by Greenwood's formula. Note that, for instance, if there is no $t$ such that $\frac{\hat{S}(t) - (1-p)}{\sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}}$, then the upper limit of $I_{p}$ is undetermined. You can also use the conftype= option to construct a confidence interval based on a $g$-transformed confidence interval for $S(t)$: $I'_{p} = \left\{ t: -z_{1 - \tfrac{\alpha}{2}} \leq \frac{g(\hat{S}(t)) - g(1-p)}{g'(\hat{S}(t)) \sqrt{\hat{V}(\hat{S}(t))}} \leq z_{1 - \tfrac{\alpha}{2}} \right\}$. By default in SAS 9.1, conftype=linear, for which $g(x)=x$; with conftype=loglog, for example, we obtain slightly different results, but the prevailing tendency is unchanged. Of note, the confidence level of the interval is generally less than $95\%$, and SAS extends it to the next event time (not included).
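As a sketch of how that percentile rule works, here is a small Python illustration applied to a hypothetical step-function survival estimate (the times and survival values below are made up for demonstration; this mimics the formula, not SAS itself):

```python
def sas_percentile(times, surv, p):
    """SAS 9.1-style pth percentile of the survival distribution:
    the average of inf{t: 1 - S(t) >= p} and sup{t: 1 - S(t) <= p},
    with t ranging over the observed survival times."""
    lower = [t for t, s in zip(times, surv) if 1 - s >= p]
    upper = [t for t, s in zip(times, surv) if 1 - s <= p]
    return 0.5 * (min(lower) + max(upper))

# toy Kaplan-Meier estimate (hypothetical data)
times = [2, 5, 8, 12, 20]
surv = [0.9, 0.7, 0.5, 0.3, 0.1]
median = sas_percentile(times, surv, 0.5)   # both sets meet at t = 8
```

When $1 - \hat{S}(t)$ hits $p$ exactly, the inf and sup coincide; otherwise the rule averages the two neighbouring event times.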
How to assess statistical significance of the accuracy of a classifier?
You want to define the distribution of the accuracy of just guessing. Perhaps this is like $X/n$ where $X \sim $ binomial($n$, $p$) for some known $p$ (say 50%). Then calculate the chance of observing the results you did, if this null model were true. In R, you could use binom.test or calculate it directly with pbinom. Usually you'd want to compare accuracy not to "guessing" but to some alternative method, in which case you might use McNemar's test; in R, mcnemar.test.
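The same null-of-guessing calculation can be sketched in Python using only the standard library (the 60-correct-out-of-100 figures below are hypothetical):

```python
from math import comb

def binom_sf(k, n, p):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for seeing
    at least k correct classifications if the classifier is guessing."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i) for i in range(k, n + 1))

# hypothetical result: 60 correct out of 100 against p = 0.5 guessing
p_value = binom_sf(60, 100, 0.5)
```

This is what R's pbinom tail computation does under the hood for this kind of null model.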
How to assess statistical significance of the accuracy of a classifier?
I don't see where testing against complete randomness is that helpful. A classifier that can only beat pure random guesses is not very useful. A bigger problem is your use of proportion classified correctly as your accuracy score. This is a discontinuous improper scoring rule that can be easily manipulated because it is arbitrary and insensitive. One (of many) ways to see its deficiencies is to compute the proportion classified correctly if you have a model with only an intercept. It will be high if the outcomes are not close to 0.5 in prevalence. Once you choose a more proper rule it would be valuable to compute a confidence interval for the index. Statistical significance is of little value.
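To see that deficiency concretely, here is a small hypothetical sketch: with 90% prevalence, an intercept-only model scores 90% "accuracy", while a proper scoring rule such as the Brier score reflects what the constant prediction actually says (all numbers below are made up):

```python
# hypothetical outcomes with 90% prevalence of the positive class
y = [1] * 90 + [0] * 10

# "model" with only an intercept: predicted probability = overall prevalence
p_hat = sum(y) / len(y)                                        # 0.9

# proportion classified correctly at a 0.5 cutoff: it always predicts 1
accuracy = sum((p_hat >= 0.5) == bool(v) for v in y) / len(y)  # 0.90

# Brier score (a proper scoring rule) for the same constant prediction
brier = sum((p_hat - v) ** 2 for v in y) / len(y)              # approx 0.09
```

The 90% accuracy looks impressive despite the model containing no information beyond the prevalence; the Brier score would improve only if the predictions actually discriminated.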
How to assess statistical significance of the accuracy of a classifier?
For sure you can compute a confidence interval. If $\mbox{acc}$ is your accuracy estimated on a test set of $N$ elements, it holds (approximately) that $$\frac{\mbox{acc}-p}{\sqrt{p(1-p)/N}} \sim \mathcal{N}(0,1)$$ Thus $$ P\bigg( \frac{\mbox{acc}-p}{\sqrt{p(1-p)/N}} \in [-z_{\alpha/2},+z_{\alpha/2}]\bigg) \approx 1 - \alpha$$ So you can say that $$P(p \in [l,u]) \approx 1 - \alpha$$ For example you can calculate the Wilson interval: $$l = \frac{2 \ N \ \mbox{acc} + z_{\alpha/2}^2 - z_{\alpha/2} \sqrt{z_{\alpha/2}^2+4 \ N \ \mbox{acc}-4 \ N \ \mbox{acc}^2}}{2(N+z_{\alpha/2}^2)}$$ $$u = \frac{2 \ N \ \mbox{acc} + z_{\alpha/2}^2 + z_{\alpha/2} \sqrt{z_{\alpha/2}^2+4 \ N \ \mbox{acc}-4 \ N \ \mbox{acc}^2}}{2(N+z_{\alpha/2}^2)}$$ I think you can quantify how much your performance differs from a random one by computing the gain. The accuracy of a random classifier is $$ \mbox{acc}_r = \sum_{i=1}^{c} p_i^2$$ where $p_i$ is the empirical frequency of class $i$ estimated on the test set, and $c$ is the number of different classes. On average a random classifier, which guesses class $i$ according to the prior class probabilities estimated from the test set, classifies $p_i\cdot n_i = \frac{n_i}{N} \cdot n_i$ examples of class $i$ correctly, where $n_i$ is the number of records of class $i$ in the test set. Thus $$ \mbox{acc}_r = \frac{p_1 \cdot n_1 + \dots + p_c \cdot n_c}{n_1 + \dots + n_c} = \frac{p_1\cdot n_1}{N} + \dots + \frac{p_c\cdot n_c}{N} = \sum_{i=1}^{c} p_i^2$$ You might have a look at a question of mine. The gain is $$\mbox{gain} = \frac{\mbox{acc}}{\mbox{acc}_r} $$ I actually think a statistical test could be sketched: the numerator could be seen as a Normal random variable, $\mathcal{N}(\mbox{acc},p(1-p)/N)$, but you would have to figure out what kind of random variable the denominator $\mbox{acc}_r$ is.
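The Wilson formulas above translate directly into code; here is a short sketch (the 85% accuracy on a 200-case test set is a hypothetical example):

```python
from math import sqrt

def wilson_interval(acc, n, z=1.96):
    """Wilson score interval for a proportion; matches the l and u
    formulas above with z = z_{alpha/2} (1.96 for a 95% interval)."""
    centre = 2 * n * acc + z ** 2
    half = z * sqrt(z ** 2 + 4 * n * acc - 4 * n * acc ** 2)
    denom = 2 * (n + z ** 2)
    return (centre - half) / denom, (centre + half) / denom

# hypothetical example: 85% accuracy measured on a 200-case test set
low, high = wilson_interval(0.85, 200)
```

Unlike the simple Wald interval, the Wilson interval stays inside $[0, 1]$ even for accuracies near 0 or 1.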
How to assess statistical significance of the accuracy of a classifier?
You may be interested in the following papers: Eric W. Noreen, Computer-intensive Methods for Testing Hypotheses: An Introduction, John Wiley & Sons, New York, NY, USA, 1989. Alexander Yeh, More accurate tests for the statistical significance of result differences, in: Proceedings of the 18th International Conference on Computational Linguistics, Volume 2, pages 947-953, 2000. I think they cover what Dimitrios Athanasakis talks about. I implemented one of Yeh's options, as I understand it: http://www.clips.uantwerpen.be/~vincent/software#art
How to assess statistical significance of the accuracy of a classifier?
I think one thing you could try is a permutation test. Simply put, randomly permute the pairing between inputs and desired outputs that you feed to your classifier a number of times. If none of, say, 100 different permutations reproduces a result at the observed level, then the result is significant at the 99% level, and so on. This is basically the same process used to obtain permutation p-values elsewhere (which correspond, for example, to the probability of obtaining a linear correlation of the same magnitude after randomly permuting the data).
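The procedure above can be sketched in a few lines of Python; the labels and predictions below are hypothetical, and the "+1" smoothing is one common convention for permutation p-values:

```python
import random

def permutation_p_value(y_true, y_pred, n_perm=1000, seed=0):
    """Approximate permutation p-value for classifier accuracy: the fraction
    of label shufflings whose accuracy matches or beats the observed one."""
    rng = random.Random(seed)
    n = len(y_true)
    observed = sum(t == p for t, p in zip(y_true, y_pred)) / n
    hits = 0
    for _ in range(n_perm):
        shuffled = y_true[:]
        rng.shuffle(shuffled)
        if sum(t == p for t, p in zip(shuffled, y_pred)) / n >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)   # add-one smoothing avoids p = 0

# hypothetical: a classifier that gets 18 of 20 balanced binary labels right
y_true = [0, 1] * 10
y_pred = y_true[:18] + [1 - v for v in y_true[18:]]
p_val = permutation_p_value(y_true, y_pred)
```

The test makes no distributional assumptions: the null distribution is generated by the shuffling itself.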
Bayesian AB testing
I'm working my way through the same questions. There are now a couple of helpful articles that weren't available when you posed this question. "Bayesian A/B testing with theory and code" by Antti Rasinen is the logical conclusion of an unfinished series of articles, "Exact Bayesian Inference for A/B testing" by Evan Haas (partially rescued here: part1 and part2). The conjugate prior for the binomial distribution is the beta distribution; therefore the distribution of the conversion rate for one variant is a beta distribution, and you can solve $Pr(A > B)$ numerically or exactly. The author refers to an essay written by Bayes himself, "An Essay towards solving a Problem in the Doctrine of Chances". In "Proportionate A/B Testing", Ian Clarke explains that the beta distribution is the key to understanding how to apply a Bayesian solution to A/B testing; he also discusses the use of Thompson sampling for determining prior values for $\alpha$ and $\beta$. "Chapter 2: A little more on PyMC" from the book "Bayesian Methods for Hackers" by Cam Davidson-Pilon is part of a book written as IPython notebooks explaining Bayesian methods in a number of applications. About half way through Chapter 2 (the section titled Example: Bayesian A/B testing), the author gives a detailed explanation of how to calculate the probability that A is better than B (or vice versa) using the pymc library. Full Python code is given, including plotting the results. There are also a number of Bayesian significance calculators online: by onlinedialogue, by Lyst, and by Yanir Seroussi.
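For the "numerically" route, $Pr(B > A)$ under beta posteriors needs only the standard library; here is a hedged Monte-Carlo sketch (the uniform Beta(1, 1) priors and the conversion counts are hypothetical choices for illustration):

```python
import random

def prob_b_beats_a(conv_a, n_a, conv_b, n_b, draws=20000, seed=1):
    """Monte-Carlo estimate of Pr(rate_B > rate_A) under uniform
    Beta(1, 1) priors: each posterior is Beta(conv + 1, n - conv + 1)."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        pa = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        pb = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        wins += pb > pa
    return wins / draws

# hypothetical A/B data: 120/1000 conversions for A, 150/1000 for B
p_b_better = prob_b_beats_a(120, 1000, 150, 1000)
```

An exact closed form exists for this probability (it is what Evan Haas's articles derive), but the simulation version generalizes trivially to other priors and metrics.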
Bayesian AB testing
You can perform a Monte-Carlo integration over the posterior beta distributions of each group to calculate the probability that the true unknown parameter of one group is better than that of the other. I've done something similar in this question: How does a frequentist calculate the chance that group A beats group B regarding binary response, where trials = visitors and successful trials = conversions. BUT: beware that Bayes will give you only subjective probabilities depending on the data collected so far, not the objective "truth". This is rooted in the difference in philosophy between frequentists (who use statistical tests, p-values, etc.) and Bayesians. Hence you cannot expect to detect a significant difference using Bayes when the frequentist procedures fail to do so. To understand why this matters, it might help to first learn the difference between the confidence interval and the credible interval, since the above-mentioned MC integration "only" compares two independent credible intervals with each other. For further details on this topic see e.g. these questions: Bayesian and Frequentist reasoning in Plain English; What, precisely, is a confidence interval?; What's the difference between a confidence interval and a credible interval?
Bayesian AB testing
There are several approaches to Bayesian A/B testing. First, you should decide whether you want an analytic approach (using conjugate distributions, as Lenwood mentions) or an MCMC approach. For simple A/B experiments, particularly on conversion rate, which is your case, there is really no need for MCMC: just use a beta distribution as the prior and your posterior distribution will also be a beta distribution. Then you need to decide which decision rule to apply. Here there seem to be two main approaches to decision making. The first is based on a paper by John Kruschke of Indiana University (K. Kruschke, Bayesian Estimation Supersedes the t Test, Journal of Experimental Psychology: General, 142, 573 (2013)); the decision rule used in this paper is based on the concept of a Region Of Practical Equivalence (ROPE). Another possibility is to use the concept of an expected loss, proposed by Chris Stucchio (C. Stucchio, Bayesian A/B Testing at VWO). In principle, you could use a different decision rule. You can find this and much more in this blog post: Bayesian A/B Testing: a step-by-step guide. It also includes some Python code snippets and uses a Python project that is hosted on GitHub.
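As an illustration of the expected-loss idea, here is a hedged Monte-Carlo sketch (Beta(1, 1) priors and the counts are hypothetical; Stucchio's actual procedure stops the test when this quantity for the chosen variant falls below a "threshold of caring"):

```python
import random

def expected_loss_picking_a(conv_a, n_a, conv_b, n_b, draws=20000, seed=2):
    """Monte-Carlo estimate of E[max(p_B - p_A, 0)]: the expected loss in
    conversion rate if we stop now and pick A, under Beta(1, 1) priors."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(draws):
        pa = rng.betavariate(conv_a + 1, n_a - conv_a + 1)
        pb = rng.betavariate(conv_b + 1, n_b - conv_b + 1)
        total += max(pb - pa, 0.0)
    return total / draws

# hypothetical data: 120/1000 for A vs 150/1000 for B
loss_if_a = expected_loss_picking_a(120, 1000, 150, 1000)  # loss of picking A
loss_if_b = expected_loss_picking_a(150, 1000, 120, 1000)  # loss of picking B
```

Unlike a bare $Pr(B > A)$, the expected loss weights how much worse the wrong choice would be, which is what makes it usable as a stopping rule.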
Getting started with biclustering
I never used it directly, so I can only share some papers I have and some general thoughts about the technique (which mainly address your questions 1 and 3). My general understanding of biclustering mainly comes from genetic studies (2-6) where we seek to account for clusters of genes together with groupings of individuals: in short, we look for groups of samples sharing a similar profile of gene expression (which might be related to disease state, for instance) together with the genes that contribute to this pattern of gene profiling. A survey of the state of the art for biological "massive" datasets is available in Pardalos's slides, Biclustering. Note that there is an R package, biclust, with applications to microarray data. In fact, my initial idea was to apply this methodology to clinical diagnosis, because it allows features or variables to be put in more than one cluster, which is interesting from a semeiological perspective: symptoms that cluster together allow a syndrome to be defined, but some symptoms can overlap across different diseases. A good discussion may be found in Cramer et al., Comorbidity: A network perspective (Behavioral and Brain Sciences, 2010, 33, 137-193). A somewhat related technique is collaborative filtering; a good review was made available by Su and Khoshgoftaar (A Survey of Collaborative Filtering Techniques, Advances in Artificial Intelligence, 2009). Other references are listed at the end. Maybe analysis of frequent itemsets, as exemplified in the market-basket problem, is also linked to it, but I never investigated this. Another example of co-clustering is when we want to simultaneously cluster words and documents, as in text mining, e.g. Dhillon (2001), Co-clustering documents and words using bipartite spectral graph partitioning, Proc. KDD, pp. 269-274. As for general references, here is a not very exhaustive list that I hope you may find useful:
Jain, A.K. (2010). Data clustering: 50 years beyond K-means. Pattern Recognition Letters, 31, 651-666.
Carmona-Saez et al. (2006). Biclustering of gene expression data by non-smooth non-negative matrix factorization. BMC Bioinformatics, 7, 78.
Prelic et al. (2006). A systematic comparison and evaluation of biclustering methods for gene expression data. Bioinformatics, 22(9), 1122-1129. www.tik.ee.ethz.ch/sop/bimax
DiMaggio et al. (2008). Biclustering via optimal re-ordering of data matrices in systems biology: rigorous methods and comparative studies. BMC Bioinformatics, 9, 458.
Santamaria et al. (2008). BicOverlapper: A tool for bicluster visualization. Bioinformatics, 24(9), 1212-1213.
Madeira, S.C. and Oliveira, A.L. (2004). Bicluster algorithms for biological data analysis: a survey. IEEE Trans. Comput. Biol. Bioinform., 1, 24-45.
Badea, L. (2009). Generalized Clustergrams for Overlapping Biclusters. IJCAI.
Symeonidis, P. (2006). Nearest-Biclusters Collaborative Filtering. WEBKDD.
Getting started with biclustering
Here's a good survey/review: Stanislav Busygin, Oleg Prokopyev, and Panos M. Pardalos. Biclustering in data mining. Computers & Operations Research, 35(9):2964–2987, September 2008.
Knot selection for cubic regression splines [duplicate]
This is a tricky problem and most people just select the knots by trial and error. One approach which is growing in popularity is to use penalized regression splines instead. Then knot selection has little effect provided you have lots of knots. The coefficients are constrained to avoid any coefficient being too large. It turns out that this is equivalent to a mixed effects model where the spline coefficients are random. Then the whole problem can be solved using REML without worrying about knot selection or a smoothing parameter. Since you use R, you can fit such a model using the spm() function in the SemiPar package.
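The penalized-spline idea can be sketched outside R as well. Here is a hypothetical NumPy version using a truncated-line basis with many knots and a ridge penalty on the knot coefficients (the smoothing parameter is fixed by hand rather than chosen by REML, so this only illustrates the principle, not spm()):

```python
import numpy as np

def penalized_spline_fit(x, y, n_knots=20, lam=1.0):
    """Penalized regression spline: many knots plus a ridge penalty on the
    knot coefficients, so exact knot placement matters little. A linear
    truncated-power basis is used for brevity; cubic works the same way."""
    knots = np.quantile(x, np.linspace(0, 1, n_knots + 2)[1:-1])
    X = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.diag([0.0, 0.0] + [1.0] * n_knots)  # intercept/slope unpenalized
    beta = np.linalg.solve(X.T @ X + lam * D, X.T @ y)
    return X @ beta

# hypothetical noisy sine data
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 1, 200))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.3, size=200)
fitted = penalized_spline_fit(x, y)
```

Because only the knot coefficients are shrunk, increasing lam pulls the fit toward a straight line, which is exactly the smoothing/roughness trade-off the mixed-model (REML) formulation estimates automatically.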
Knot selection for cubic regression splines [duplicate]
It depends what you mean by "not too wiggly", but you might like to take a look at fractional polynomials for a simpler approach to fitting smooth curves that are not linear but not 'wiggly'. See Royston & Altman 1994 and the mfp package in R or the fracpoly command in Stata.
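A first-degree fractional polynomial just chooses one power transform of $x$ from a small fixed set. Here is a hypothetical sketch of that selection step using a plain residual-sum-of-squares comparison (the real mfp/fracpoly procedures compare deviances and also consider second-degree models):

```python
import numpy as np

# conventional power set for fractional polynomials (power 0 means log x)
FP_POWERS = [-2, -1, -0.5, 0, 0.5, 1, 2, 3]

def fp1_best_power(x, y):
    """Pick the first-degree fractional-polynomial power with the
    smallest residual sum of squares from a least-squares fit."""
    best_p, best_rss = None, np.inf
    for p in FP_POWERS:
        xt = np.log(x) if p == 0 else x ** p
        X = np.column_stack([np.ones_like(x), xt])
        beta, res, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = res[0] if res.size else float(np.sum((y - X @ beta) ** 2))
        if rss < best_rss:
            best_p, best_rss = p, rss
    return best_p

# hypothetical data generated from a log curve
rng = np.random.default_rng(1)
x = rng.uniform(0.5, 5.0, 300)
y = np.log(x) + rng.normal(0, 0.05, size=300)
```

With only eight candidate transforms, the resulting curves are smooth by construction, which is the "not too wiggly" property the answer refers to.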
References on how to interpret significant but dubious results (i.e. small numbers, plus borderline p-value)
My suggestion is to complement the p-value with a report of the effect size and its confidence interval (and avoid the word "significant"). For your data, the relevant effect size is the relative risk. The relative risk is 5.0 and its 95% CI ranges from 1.3 to 20.2, i.e. from a 30% increase to a 20-fold increase. That wide range expresses the uncertainty you want to convey. There are several ways to compute (estimate) the CI; I used the Koopman asymptotic score method as implemented in GraphPad Prism.
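For readers without Prism, the same kind of interval can be approximated in a few lines. This sketch uses the simpler Katz log-scale Wald interval rather than the Koopman score interval, and the 2x2 counts below are hypothetical ones chosen to give a relative risk of 5:

```python
from math import exp, log, sqrt

def relative_risk_ci(a, n1, c, n2, z=1.96):
    """Relative risk of exposed (a/n1) vs unexposed (c/n2) with a Katz
    log-scale Wald CI: log(RR) +/- z * sqrt(1/a - 1/n1 + 1/c - 1/n2)."""
    rr = (a / n1) / (c / n2)
    se = sqrt(1 / a - 1 / n1 + 1 / c - 1 / n2)
    return rr, exp(log(rr) - z * se), exp(log(rr) + z * se)

# hypothetical 2x2 counts chosen so that the relative risk is 5.0
rr, lo, hi = relative_risk_ci(10, 50, 2, 50)
```

With small cell counts the Wald interval is rougher than the score-based ones, which is one reason tools like Prism default to methods such as Koopman's.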
27,660
References on how to interpret significant but dubious results (i.e. small numbers, plus borderline p-value)
This is a nice example of a situation where the all too conventional all-or-none description of results as 'significant' or 'not significant' is unhelpful, as you have sensibly noticed. I agree with Harvey Motulsky's answer, but would add a few considerations. First, note that a p-value of 0.04 is only ever fairly weak evidence against the null hypothesis. When the sample size is small the false positive error rate does not increase (assuming the statistical model is appropriate), but a 'significant' result will often come from a sample that severely exaggerates the true effect. Next, note that the individual binary data points carry relatively little information, and so even though the p-value falls within the conventionally 'significant' range, the likelihood function from your data will be relatively widely spread; if you were to do a Bayesian analysis (maybe you should!) you would find that the posterior probability distribution would not be moved very much from your prior. (All of that is to say that a result of p=0.04 says that you have fairly weak evidence against the null.) Finally, note that the reliability of a statistical test is only as good as the match between the statistical model and the actual data generating and sampling systems. Given your thoughtful question, you might find that my explanation of the differences between p-value evidence considerations and error rate accountings clarifies the issues. You will find it interesting in any case: A reckless guide to p-values: local evidence, global errors.
27,661
References on how to interpret significant but dubious results (i.e. small numbers, plus borderline p-value)
Great question. I think the previous answers by @HarveyMotulski and @MichaelLew are solid: you set out to investigate a between-group difference, with positive but weak results, and especially given the replication crisis need to emphasize that while by your decision criterion this counts as a group difference, those results depend on a single observation. Their answers concentrate on describing the experiment you set out to do, and the analysis you would have pre-registered. And I agree you have a duty to report that as planned, to help stave off file drawer bias and other scientific gremlins. However, the key result to me is your surprise at the few successes. This may not be Fleming's petri dish, or Rutherford's reflected alpha particles, but it's a potential discovery. It seems you had a very important but unarticulated background assumption that only became apparent when it was violated. Worth mentioning for its own sake and because that violation might invalidate the statistical test. I'm thinking here of Feynman's story about rats running mazes. Gelman frequently emphasizes "check the fit" as a step that might be outside your formal inference framework: one example here. I think this is also consilient with Mayo's severe testing approach -- but I'm no expert. That may lead you to a new model or explanation -- it would be exploratory work on this dataset and require another experiment to test, which hopefully you have time and resources for, else a good description can inspire someone else to. Best of luck!
27,662
References on how to interpret significant but dubious results (i.e. small numbers, plus borderline p-value)
I'd suggest giving the p value and CI, but complementing them with one additional number: an estimate of the false positive risk. If it's sensible to put a lump of prior probability on a point null (as it often is), then Benjamin & Berger's approach gives a maximum Bayes factor in favour of H1, $\mathrm{BF}_{\max} = (-e\,p\,\ln p)^{-1}$, and if we are willing to assume that H0 and H1 are equally probable a priori, it follows that the false positive risk is at least $\mathrm{FPR} = \Pr(H_0 \mid p) = 1/(1+\mathrm{BF}_{\max})$. If you've observed p = 0.05 and declare a discovery, the chance that it's a false positive is 29%. In your case, you've found p = 0.04, so this approach implies a false positive risk of 26%. If you had been doing a t test, my 2019 approach gives similar results. I suggested calling the FPR for prior odds of 1 the FPR_50 (on the grounds that H0 and H1 are assumed to be 50:50 a priori). For p = 0.05 the FPR_50 for a well-powered experiment is 27%, and for p = 0.04 it's 22% (easily found by the web calculator). These approaches emphasize the weakness of the evidence against the null provided by marginal p values.
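The two formulas above are easy to compute directly; here is a minimal sketch (function names are mine):

```python
import math

def max_bayes_factor(p):
    # Benjamin & Berger's upper bound on the Bayes factor in favour of H1
    return 1.0 / (-math.e * p * math.log(p))

def false_positive_risk(p):
    # minimum false positive risk, assuming H0 and H1 are
    # equally probable a priori (prior odds of 1)
    return 1.0 / (1.0 + max_bayes_factor(p))
```

This reproduces the figures quoted above: about 29% for p = 0.05 and about 26% for p = 0.04.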
27,663
Studies with small sample sizes
A comment made by "Josh" under this blog post by Andrew Gelman: https://statmodeling.stat.columbia.edu/2022/01/24/how-large-a-sample-size-does-he-actually-need-he-got-statistical-significance-twice-isnt-that-enough/ helped me (finally!) understand this issue. I copied the comment here; hope that's OK: "Let’s say we take 100 samples under the null distribution and 100 samples under the alternative distribution. We expect ~5 samples under the null to be significant. If we have power of 80 percent, we’d expect ~80 significant results under the alternative. Here only 5/85 of our results are false positives. However, what if we have statistical power of only 10 percent. Then we expect ~10 samples under the alternative to be significant. Here 5/15 of our significant results would be false positives. Decreasing power increases the likelihood that we are in situation one (significant result with the null) rather than in situation two (significant result with the alternative)."
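The arithmetic in the quoted comment can be written out as a tiny function (the name is mine):

```python
def false_discovery_fraction(power, alpha=0.05, n_null=100, n_alt=100):
    # expected significant results: n_null * alpha under the null,
    # n_alt * power under the alternative, as in the quoted comment
    false_pos = n_null * alpha
    true_pos = n_alt * power
    return false_pos / (false_pos + true_pos)
```

With 80% power, 5/85 ≈ 6% of significant results are false positives; at 10% power the fraction jumps to 5/15 ≈ 33%.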
27,664
Studies with small sample sizes
Let's use reductio ad absurdum to see how the "if you have the sample size to detect an effect, then the sample size was large enough in that sense" logic doesn't hold. Imagine that you had a sample of size $N=1$. In such a sample, you can either observe a result that confirms or disagrees with your hypothesis. Unless you can make a strong hypothesis like "it should never happen" (but then, you don't need statistics, just logical reasoning), there will always be some chance to observe either of the results, even if unlikely. In such a case, your single sample does not "prove" anything, as it could be pure luck. However, if you repeated the experiment many times and saw the same result repeated, it gets less likely that it was luck.
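As a toy illustration of the "pure luck" point: if under the null either outcome is equally likely on each repeat, the chance that all repeats agree by luck alone shrinks geometrically (the 50/50 setup is my simplifying assumption):

```python
def prob_all_same_by_luck(n_repeats, p=0.5):
    # probability that n independent repeats all show the "confirming"
    # result when each repeat has probability p of doing so under the null
    return p ** n_repeats
```

A single observation agrees by luck half the time; five consistent repeats happen by luck only about 3% of the time.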
27,665
Studies with small sample sizes
In frequentist statistics this is a matter of $\alpha$ (significance level) and $1-\beta$ (power) of a hypothesis test. Usually you would fix the power $1-\beta$ (for ex. $80\%$) and then determine the number of samples required to obtain such power for a desired $\alpha$ (for ex. $0.05$). A specific example: power analysis for a simple t-test given $d = 1$, $1-\beta=0.8$, and $\alpha=0.05$ returns a required sample size of $n \approx 17$. Increasing the power to $1-\beta=0.9$ yields $n \approx 22$. The significance level has stayed the same in both cases, i.e. the CIs obtained from such tests will cover the true values in the same proportion; what has changed is the power: the second test has more power than the first (and narrower CIs).
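These numbers can be reproduced approximately with the standard normal-approximation formula $n \approx 2\left(\frac{z_{1-\alpha/2}+z_{1-\beta}}{d}\right)^2$ per group; an exact t-based calculation (e.g. statsmodels' TTestIndPower) typically returns about one more observation per group:

```python
from math import ceil
from statistics import NormalDist

def approx_n_per_group(d, power=0.8, alpha=0.05):
    # normal-approximation sample size per group for a two-sided
    # two-sample t-test; exact t-based answers run roughly 1 higher
    z = NormalDist().inv_cdf
    return ceil(2 * ((z(1 - alpha / 2) + z(power)) / d) ** 2)
```

For $d = 1$ this gives 16 and 22 per group at 80% and 90% power, in line with the $n \approx 17$ and $n \approx 22$ quoted above.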
27,666
Averaging SVM and GLM results: sensible or stupid?
This is (potentially) fine. You are describing a simple version of model averaging. Of course, whether it is actually better in your case is an empirical question. For what it's worth, "frequentist" doesn't really contrast with "machine learning". In statistics, frequentist would contrast with Bayesian, but that distinction is orthogonal to this question. You could contrast statistics with machine learning (many people do), but I think the distinction is somewhat forced and artificial, and at any rate, it's orthogonal to this question.
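A minimal sketch of the idea, averaging the two models' predicted probabilities per observation (the equal weights and function name are my own choices; scikit-learn's soft-voting ensemble does essentially the same thing):

```python
def average_probabilities(p_glm, p_svm, w_glm=0.5):
    # simple model average: weighted mean of two models' predicted
    # probabilities for the positive class, one value per observation
    return [w_glm * a + (1 - w_glm) * b for a, b in zip(p_glm, p_svm)]

avg = average_probabilities([0.9, 0.2, 0.6], [0.7, 0.4, 0.6])
```

Whether the 50/50 weight actually helps is the empirical question the answer mentions; the weight can be tuned on held-out data.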
27,667
Relevance of assumption of normality, ways to check and reading recommendations for non-statisticians
You are right to be confused, as this is a confusing issue indeed. I'm afraid it will be hard to find everything good to know in a single reference, but some may give that to you - I will not look around but rather tell you what I think. (This answer is more about the background, how to think about model assumptions in general; I have written another answer with some more practical hints.)

1. Models are idealisations and they never hold precisely in practice, so we will routinely apply methods to data for which the methods' model assumptions are more or less obviously violated.
2. Having a model assumption for a certain method does not mean that the assumption has to be fulfilled for the method to make sense. It only means that if the model assumption is fulfilled, there is a theoretical guarantee that the method does what it's supposed to do (these guarantees are not always the same and can be stronger or weaker). The tricky issue here is that obviously there is then no theoretical guarantee in case the model assumption is violated; and in fact it is both possible that our analysis is still fine, or that it is misleading, and it is hard to tell these two possibilities apart.
3. Many statistics are asymptotically normally distributed even if the underlying data are not normal, due to the Central Limit Theorem (CLT; which itself has assumptions that may be violated, but see above). This means that if your sample size is large, many deviations from normality are not problematic, as results from assuming normality will hold approximately.
4. There is a catch with item 3, which is that it depends on the unknown true underlying distribution how large a sample size is actually required so that results are satisfactorily normal (also it depends on what exactly you do, because the CLT doesn't apply to everything; it does apply to the arithmetic mean, though, on which many statistics are based).
5. The key issue is not whether data are normal or not, and not even whether data are approximately normal, but rather whether normality is violated in ways that will mislead conclusions. In fact, some tiny violations of normality (a single gross outlier) may be harmful, whereas a distribution such as the uniform that looks very obviously non-normal will usually not cause problems for inference based on a normality assumption. Depending on what exactly you do, in most cases the following things are most problematic: extreme outliers (either observations where something is wrong or essentially different from the others, or generally heavy distributional tails) and strong skewness. On the other hand, if there are no heavy tails, normality-based inference is very often harmless, for example for discrete 5-point (or other) Likert scaled data (data that can only take values -2, -1, 0, 1, 2 and are therefore pretty much guaranteed not to have outliers).
6. In many situations other violations of model assumptions, such as dependence, are more critical than non-normality. In particular, the CLT requires independently identically distributed data or some alternative assumptions that are not much weaker. I have read once (I think in the Hampel et al. book on Robust Statistics) that many high-quality astronomical data sets are heavy tailed, and some of them look more normal than they should due to long-range dependence - and for the sake of analysis one would be better off with less normality and less dependence.
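The Central Limit Theorem point above is easy to see in a small simulation: means of samples from a clearly non-normal (uniform) distribution concentrate around the true mean with exactly the spread the CLT predicts (sample size and simulation count below are my own arbitrary choices):

```python
import random
from statistics import mean, stdev

rng = random.Random(0)
n, sims = 30, 5000

# means of n uniform(0,1) draws; each individual draw is far from normal
sample_means = [mean(rng.uniform(0, 1) for _ in range(n)) for _ in range(sims)]

# CLT prediction: centred at 0.5 with sd = sqrt(1/12) / sqrt(n) ≈ 0.053
predicted_sd = (1 / 12) ** 0.5 / n ** 0.5
```

Plotting a histogram of `sample_means` would show a near-perfect bell curve despite the flat parent distribution.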
27,668
Relevance of assumption of normality, ways to check and reading recommendations for non-statisticians
I will write a second more practically oriented answer. The first one is rather on a more abstract level about understanding the relevance (or limited relevance) of the model assumptions. So here are some hints. Use some a priori knowledge about your data: Is the value range limited, and would you normally expect that data regularly occur at or close to both extremes? In this case methods backed up by normal assumption theory will usually be fine. (This would for example be the case for most Likert scale data or data that has a natural range between 0 and 1 unless there are reasons to expect that almost all data is on one side.) What are potential reasons for outliers in the data? Can you somehow track outliers down and check whether they are in fact erroneous? In this case the advice would be to remove such observations. (I do not recommend to generally remove outliers as these may be relevant and meaningful, but if they are in fact erroneous, out with them!) Generally, improving data quality is always worthwhile where possible, even though robust methods exist that can deal with some issues. Have a look at your data and see whether there are outliers or extreme skewness. Important: Do not be too picky! Technically, deciding what test (or other method) to use conditionally on the data themselves will invalidate the theory behind the methods. I had written in the first answer that model assumptions are never perfectly fulfilled anyway, and for this reason one can argue that making decisions conditionally on how the data look like can be tolerated to some extent (better to invalidate theory "a bit" than doing something grossly inappropriate), however there is a tendency to overdo this and I'd generally say the less of this you do, the better (I mean less data dependent decision making, not less looking at the data!). 
So by all means look at your data and get a proper idea of what is going on, however have an analysis plan before you do that and be determined to go through with it unless there is a really strong indication in the data that this will go wrong. You can get a better feel for these things simulating, say, 100 normal datasets of the size of interest, and looking at them. This will show you how much variation there can be if the normal assumption really holds. If your data look quite non-normal, you can also (although this is more sophisticated) simulate many datasets from a skew distribution that looks to some extent like your data, or normal data with added outliers, or a uniform or whatever, compute the test you want to perform, and check whether its performance (type I and type II error) is still fine, or look at the distribution of the test statistic. Such things cause some work but give you a much better feel for what happens in such situations. If you have outliers that can't be pinned down to be erroneous, and the data look otherwise fine, standard robust methods should do fine. There's a school of thought that says you should really always use robust methods as they do little harm even if model assumptions are fulfilled. I don't fully agree with this, as although they will not lose much in case the data are really normal, their quality loss may be more substantial in cases in which normality is not fulfilled but the CLT approximation works well such as uniform data or discrete data (particularly if there is a large percentage of observations on a single value, which can be detrimental for robust methods). As you have already indicated yourself, transformation is often a good tool for skew data, however if you compare groups and different groups have different kinds of skewness, it may not help. One important thing in such situations is that you need to be clear about what exactly you want to compare. t-tests and standard ANOVA compare means. 
Their performance may be OKish is such situations despite skewness (if the skewness is not too extreme), but a more tricky issue is whether the group mean represents in a proper way the location of the group. In a normal distribution, mean, median and mode are the same, for a skew distribution this is not the case. The mean may be too dependent on extreme observations. Robust ANOVA may help but there are some subtleties. Particularly if you compare two groups that have opposite skew, a robust method may downweight large observations in one group and small observations in the other, which would not be fair (it depends on which exact method you use). Using rank sums, as the Kruskal-Wallis test does, may be more appropriate, however its theory also assumes the same distributional shape in all groups if not normal (it may still perform fairly well otherwise; as long as the rank sums are a good summary of the relative locations of the groups to be compared, I'd think this is fine). In case your distributions are differently distributed in a way that on some areas one group tends to score higher, and another one in other areas (the simplest case is if one group has a slightly smaller mean and a much smaller variance, meaning that the other group has the highest as well as the lowest observations), it may be more appropriate anyway to give a differentiated interpretation of the situation rather than breaking things down to a single statistic/p-value. Whatever your result is, look at the data again afterwards and try to understand how the data led to the specific result, also understanding the statistics that were involved. 
If the result differs from your intuition about what the result should be (from how the data look - I don't mean subject matter knowledge here, as hopefully you don't want to bias your results in favour of your subject matter expectations), either the result or your intuition is misleading, and you can learn something (and maybe use a different method in case your plot shows that the method has done something inappropriate). A comment hinted at methods that explicitly require other assumptions than normality. Obviously this is fine where data are of that kind, however a similar discussion applies to their assumptions. Don't neglect other model assumptions by thinking about normality too much. Dependence is often a bigger problem; use knowledge about how the data were obtained to ask yourself whether there may be issues with dependence (and think about experimental design or conditions of data collection if this is a problem; one obvious issue could be several observations that stem from the same person). Plot residuals against observation order if meaningful; also against other conditions that might induce dependence (geographical location etc.). I'm not usually much concerned by moderately different variances, however if variances are strongly different, it may be worthwhile to not use a method that assumes them to be the same but rather visualise the data and give a more detailed description, see above.
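The simulation approach described in the answer (simulate datasets, run the intended test, and check whether its error rates survive a violated assumption) can be sketched in a few lines. The sample size, number of replications and the lognormal as the "skew" example are arbitrary illustrative choices, not part of the original answer:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, n_sim, alpha = 30, 2000, 0.05

def type1_error(sampler):
    """Share of two-sample t-tests rejecting at level alpha when both
    groups come from the same distribution (i.e. H0 is true)."""
    rejections = 0
    for _ in range(n_sim):
        _, p = stats.ttest_ind(sampler(n), sampler(n))
        rejections += p < alpha
    return rejections / n_sim

# Nominal level is 0.05; compare normal data with a markedly skew case.
r_normal = type1_error(lambda m: rng.normal(size=m))
r_lognormal = type1_error(lambda m: rng.lognormal(size=m))
print(f"normal: {r_normal:.3f}, lognormal: {r_lognormal:.3f}")
```

Swapping in other samplers (uniform, normal with injected outliers, discrete) or recording the test statistic itself instead of the rejection indicator gives the "feel" for the method's behaviour that the answer recommends.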
Relevance of assumption of normality, ways to check and reading recommendations for non-statistician
I will write a second more practically oriented answer. The first one is rather on a more abstract level about understanding the relevance (or limited relevance) of the model assumptions. So here are
Relevance of assumption of normality, ways to check and reading recommendations for non-statisticians I will write a second more practically oriented answer. The first one is rather on a more abstract level about understanding the relevance (or limited relevance) of the model assumptions. So here are some hints. Use some a priori knowledge about your data: Is the value range limited, and would you normally expect that data regularly occur at or close to both extremes? In this case methods backed up by normal assumption theory will usually be fine. (This would for example be the case for most Likert scale data or data that has a natural range between 0 and 1, unless there are reasons to expect that almost all data is on one side.) What are potential reasons for outliers in the data? Can you somehow track outliers down and check whether they are in fact erroneous? In this case the advice would be to remove such observations. (I do not recommend generally removing outliers, as these may be relevant and meaningful, but if they are in fact erroneous, out with them!) Generally, improving data quality is always worthwhile where possible, even though robust methods exist that can deal with some issues. Have a look at your data and see whether there are outliers or extreme skewness. Important: Do not be too picky! Technically, deciding what test (or other method) to use conditionally on the data themselves will invalidate the theory behind the methods. I had written in the first answer that model assumptions are never perfectly fulfilled anyway, and for this reason one can argue that making decisions conditionally on what the data look like can be tolerated to some extent (better to invalidate theory "a bit" than doing something grossly inappropriate), however there is a tendency to overdo this and I'd generally say the less of this you do, the better (I mean less data dependent decision making, not less looking at the data!). 
So by all means look at your data and get a proper idea of what is going on, however have an analysis plan before you do that and be determined to go through with it unless there is a really strong indication in the data that this will go wrong. You can get a better feel for these things by simulating, say, 100 normal datasets of the size of interest, and looking at them. This will show you how much variation there can be if the normal assumption really holds. If your data look quite non-normal, you can also (although this is more sophisticated) simulate many datasets from a skew distribution that looks to some extent like your data, or normal data with added outliers, or a uniform or whatever, compute the test you want to perform, and check whether its performance (type I and type II error) is still fine, or look at the distribution of the test statistic. Such things take some work but give you a much better feel for what happens in such situations. If you have outliers that can't be pinned down to be erroneous, and the data look otherwise fine, standard robust methods should do fine. There's a school of thought that says you should really always use robust methods as they do little harm even if model assumptions are fulfilled. I don't fully agree with this: although they will not lose much in case the data are really normal, their quality loss may be more substantial in cases in which normality is not fulfilled but the CLT approximation works well, such as uniform or discrete data (particularly if there is a large percentage of observations on a single value, which can be detrimental for robust methods). As you have already indicated yourself, transformation is often a good tool for skew data, however if you compare groups and different groups have different kinds of skewness, it may not help. One important thing in such situations is that you need to be clear about what exactly you want to compare. t-tests and standard ANOVA compare means. 
Their performance may be OKish in such situations despite skewness (if the skewness is not too extreme), but a more tricky issue is whether the group mean represents the location of the group in a proper way. In a normal distribution, mean, median and mode are the same; for a skew distribution this is not the case. The mean may be too dependent on extreme observations. Robust ANOVA may help but there are some subtleties. Particularly if you compare two groups that have opposite skew, a robust method may downweight large observations in one group and small observations in the other, which would not be fair (it depends on which exact method you use). Using rank sums, as the Kruskal-Wallis test does, may be more appropriate, however its theory also assumes the same distributional shape in all groups if not normal (it may still perform fairly well otherwise; as long as the rank sums are a good summary of the relative locations of the groups to be compared, I'd think this is fine). In case your groups are differently distributed in a way that in some areas one group tends to score higher, and another one in other areas (the simplest case is if one group has a slightly smaller mean and a much smaller variance, meaning that the other group has the highest as well as the lowest observations), it may be more appropriate anyway to give a differentiated interpretation of the situation rather than breaking things down to a single statistic/p-value. Whatever your result is, look at the data again afterwards and try to understand how the data led to the specific result, also understanding the statistics that were involved. 
If the result differs from your intuition about what the result should be (from how the data look - I don't mean subject matter knowledge here, as hopefully you don't want to bias your results in favour of your subject matter expectations), either the result or your intuition is misleading, and you can learn something (and maybe use a different method in case your plot shows that the method has done something inappropriate). A comment hinted at methods that explicitly require other assumptions than normality. Obviously this is fine where data are of that kind, however a similar discussion applies to their assumptions. Don't neglect other model assumptions by thinking about normality too much. Dependence is often a bigger problem; use knowledge about how the data were obtained to ask yourself whether there may be issues with dependence (and think about experimental design or conditions of data collection if this is a problem; one obvious issue could be several observations that stem from the same person). Plot residuals against observation order if meaningful; also against other conditions that might induce dependence (geographical location etc.). I'm not usually much concerned by moderately different variances, however if variances are strongly different, it may be worthwhile to not use a method that assumes them to be the same but rather visualise the data and give a more detailed description, see above.
Relevance of assumption of normality, ways to check and reading recommendations for non-statistician I will write a second more practically oriented answer. The first one is rather on a more abstract level about understanding the relevance (or limited relevance) of the model assumptions. So here are
27,669
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion?
The rate of expansion is useful to know, but the advantage of $R_t$ is that - although more difficult to measure - it provides a more mechanistic description of the transmission process, and hence it is more useful from the point of view of disease control. $R_t$ can be formulated as $R_t=cp\tau S$, where $c$ is the rate at which a typical person makes contacts with others, $p$ is the probability of transmission to a contacted person if that person is susceptible, $\tau$ is the mean infectious period, and $S$ is the proportion of the population susceptible. So, if $R_t$ is currently $2$, say, then to achieve $R_t<1$ we could either reduce $cp$ (social distancing), $\tau$ (isolate infectious individuals), or $S$ (vaccinate); e.g. vaccinating more than $50\%$ of the currently susceptible population would be sufficient to achieve control.
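A toy calculation with the decomposition $R_t = cp\tau S$ shows how each control lever enters; all parameter values below are hypothetical, chosen only so that $R_t = 2$ as in the answer's example:

```python
# Hypothetical values: c contacts/day, transmission probability p,
# mean infectious period tau (days), susceptible proportion S.
c, p, tau, S = 10, 0.05, 4.0, 1.0

R_t = c * p * tau * S  # here: 2.0

# Vaccination reduces S; R_t < 1 requires S < 1/(c*p*tau).
S_crit = 1 / (c * p * tau)                 # 0.5
vaccination_coverage_needed = 1 - S_crit   # > 50% of the susceptibles
print(R_t, S_crit, vaccination_coverage_needed)
```

Halving $c$ or $p$ (distancing) or $\tau$ (isolation) instead of $S$ would achieve $R_t < 1$ through the same multiplicative structure.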
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expans
The rate of expansion is useful to know, but the advantage of $R_t$ is that - although more difficult to measure - it provides a more mechanistic description of the transmission process, and hence it
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion? The rate of expansion is useful to know, but the advantage of $R_t$ is that - although more difficult to measure - it provides a more mechanistic description of the transmission process, and hence it is more useful from the point of view of disease control. $R_t$ can be formulated as $R_t=cp\tau S$, where $c$ is the rate at which a typical person makes contacts with others, $p$ is the probability of transmission to a contacted person if that person is susceptible, $\tau$ is the mean infectious period, and $S$ is the proportion of the population susceptible. So, if $R_t$ is currently $2$, say, then to achieve $R_t<1$ we could either reduce $cp$ (social distancing), $\tau$ (isolate infectious individuals), or $S$ (vaccinate); e.g. vaccinating more than $50\%$ of the currently susceptible population would be sufficient to achieve control.
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expans The rate of expansion is useful to know, but the advantage of $R_t$ is that - although more difficult to measure - it provides a more mechanistic description of the transmission process, and hence it
27,670
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion?
As per the suggestion of the OP, here is my comment as an answer: I would guess that in times when $R\approx1$ (e.g. here in Germany at the time of writing, Nov 2020), the doubling time is essentially infinite, as the situation is stable. Of course, when $R$ is slightly above 1, that is no longer true, but then very small changes in $R$ imply very large changes in the doubling time, which may not be a very effective way to communicate changes in the pandemic situation. Indeed, during the first wave of the pandemic reporting often happened through doubling time, which was however discarded when the first wave came under control. Here is a source commenting on this step (in German).
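The blow-up of the doubling time near $R=1$ is easy to see under the common approximation that cases grow by a factor $R$ per generation interval $\tau$, so that the doubling time is $\tau \ln 2 / \ln R$. The value $\tau = 5$ days is an assumed illustrative figure:

```python
import math

tau = 5.0  # assumed generation interval in days

def doubling_time(R):
    """Days until case counts double, assuming growth by a factor R
    per generation interval tau (only meaningful for R > 1)."""
    return tau * math.log(2) / math.log(R)

for R in (2.0, 1.3, 1.1, 1.01):
    print(f"R = {R:>4}: doubling time ~ {doubling_time(R):6.1f} days")
# Near R = 1, a tiny change in R moves the doubling time enormously,
# which is why doubling time becomes awkward to report there.
```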
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expans
As per the suggestion of the OP, here is my comment as an answer: I would guess that in times when $R\approx1$ (e.g. here in Germany at the time of writing, Nov 2020), the doubling time is about infin
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion? As per the suggestion of the OP, here is my comment as an answer: I would guess that in times when $R\approx1$ (e.g. here in Germany at the time of writing, Nov 2020), the doubling time is essentially infinite, as the situation is stable. Of course, when $R$ is slightly above 1, that is no longer true, but then very small changes in $R$ imply very large changes in the doubling time, which may not be a very effective way to communicate changes in the pandemic situation. Indeed, during the first wave of the pandemic reporting often happened through doubling time, which was however discarded when the first wave came under control. Here is a source commenting on this step (in German).
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expans As per the suggestion of the OP, here is my comment as an answer: I would guess that in times when $R\approx1$ (e.g. here in Germany at the time of writing, Nov 2020), the doubling time is about infin
27,671
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion?
Relationship between reproduction number $R(t)$ and growth-rate $C(t)$ The growth rate $C(t)$ and the $R(t)$ are very much related. $C(t)$ is the growth rate per unit of time. It indicates how fast the infections multiply. (Doubling time is related to the growth rate as its inverse, see: How to calculate the doubling rate for infections?) $R(t)$ is the factor by which each generation multiplies. It indicates how many new infections occur for each infected person. The reproduction rate $R$ tells by which factor the infections multiply at each step. But it is not a rate of growth with just different units (per generation instead of per time), because simultaneously infected people will heal or die and the net growth needs to account for those two effects together. So $R(t)$ tells a slightly different story than the growth rate. In terms of the SIR compartmental model you can consider two rates: $\beta$, the rate by which new infections occur, and $\gamma$, the rate by which infections disappear (due to healing or death). The (net) growth rate is the difference of these two, $C= \beta -\gamma$. The reproduction rate is the ratio of these two, $R = \beta/\gamma$. In the image below you see a schematic for the growth. The nodes represent infected people and from each node, we draw lines to people that will be infected next. In this example, every infected person will infect two new people. The reproduction rate $R(t)$ is the number of links for each node. It represents a multiplication factor in a chain reaction. The growth rate $C(t)$ (how fast this curve rises in time) will depend on the time in between each generation. If you know $C(t)$, then you do not yet know the underlying structure of the chain reaction. This structure of the chain reaction (summarized/simplified by $R(t)$) may be essential in understanding the mechanism and dynamics of the spread (and how it responds to environmental changes like vaccination/immunity or social distancing). 
If you know $\tau$, the time between infections (which can be viewed differently, e.g. serial interval or generation interval), then you can relate the reproduction number with the growth rate $$ C(t) = \frac{R(t)-1}{ \tau }$$ or $$ R(t) = 1 + C(t) \tau$$ There are other relationships possible for more complicated models. The point is mainly that you do not get a relationship that is a simple scale factor, $C(t) = \frac{R(t)}{ \tau }$. The $-1$ term occurs because you do not only have growth/reproduction but also decrease due to people becoming better or dying. A standard work explaining how to relate the growth rate and the reproduction number is "How generation intervals shape the relationship between growth rates and reproductive numbers" from Wallinga and Lipsitch (Proc Biol Sci, 22-02-2007, Vol 274:1609). They relate the reproduction number and the growth rate using the generation interval distribution and the moment generating function $M(s)$ of this distribution to end up with $$R(t) = \frac{1}{M(-C(t))}$$ And for instance, with a gamma distribution you get $M(s) = (1-s \frac{\mu_{\tau}}{k} )^{-k}$ and $$R(t) = \left( 1+ C(t) \frac{\mu_{\tau}}{k}\right)^{k} = 1 + C(t) \mu_{\tau} + \sum_{n=2}^\infty {k\choose n} \left(C(t) \frac{\mu_{\tau}}{k} \right)^n$$ which equals $ R(t) = 1 + C(t) \tau$ if $k = 1$ and will be approximately equal (to first order) when $|C(t)|\frac{\mu_{\tau}}{k} \ll 1$. (In the graph the generation interval is actually a degenerate distribution and you end up with $R = e^{\tau C(t)}$ instead of $ R(t) = 1 + C(t) \tau$.) Why $R(t)$ is useful $R(t)$ relates to the chain reaction The $R(t)$ value is important because it is closer to the underlying multiplicative mechanism of growth in terms of a chain reaction. This chain reaction amplifies if each event causes multiple new events in a ratio above 1, i.e. if more infections are being created than infections being dissolved (creating a snowball effect). 
The reaction reduces when each event causes less than one new event. In terms of $\beta$ (the rate of new infected people) and $\gamma$ (the rate of infected people healing or dying), you get growth when $\beta > \gamma$. The reproduction rate $R = \beta / \gamma$ relates directly to changes in $\beta$ (which may change due to immunisation/vaccination or social distancing). If $\beta$ changes by a certain factor then $R$ changes by the same factor. For the growth rate $C= \beta-\gamma$ it is less directly clear what the effect will be when $\beta$ changes. For instance, if $\beta$ reduces by half then this could represent a change of the growth rate $C=\beta-\gamma$ from $3 = 4-1$ to $1 = 2-1$, or it could just as well represent a change of growth rate from $3=9-6$ to $-1.5=4.5-6$. The growth rate on its own does not allow you to make the direct connection between relative changes in $\beta$ and how this influences the growth rate. On the other hand, the reproduction rate $R =\beta/\gamma$ changes in those situations from $4$ to $2$ or from $1.5$ to $0.75$ and expresses more clearly what will happen to the chain reaction (growth versus decrease) if the dynamics of the spread changes (which is more directly governed by $\beta$). With the reproduction rate, which you can see as the multiplication factor in the chain reaction, we know better how the amplification changes in terms of changes in the virus dynamics. For instance, if the reproduction rate is 2 and due to measures (or due to more people becoming immune) the rate is reduced by half, then the reproduction will be 1 and the chain reaction will become neutralized. The growth rate $C$ does not tell you by which factor you need to reduce the spread (the multiplication factor) in order to change the growth from increasing to decreasing. This is because the growth rate does not contain information about the multiplication factor in the underlying chain reaction. 
The reproduction rate is therefore a more natural descriptor that explains how the virus spreads. It is an indication of how the infections multiply in each generation. Computation of herd immunity A direct application is for instance in computing the level of immunity that is necessary to reach herd immunity by means of random immunization (vaccination). Future development of epidemiological curve Another useful effect is that the reproduction rate is a better indicator than the growth rate in determining how many people will become infected before the spread reduces. In the graph you see the virus reproduces with a factor 2 each generation, but this will slow down because other people get immune (and there will be fewer people to pass on the virus, so the multiplication will decrease). This is illustrated in the image below from this question, which tried to fit the growth curves in order to find $R(0)$ but had trouble finding a good fit. One reason for the problem in the fitting is that you can have the same growth rate for different values of $R(0)$. But in the image you see also that further in time the $R(0)$ value has a strong impact on the epidemiological curve. The slow down occurs earlier when the $R(t)$ is closer to 1 (when it is closer to 1 then it needs to drop relatively less in order to get equal to 1 or below). The growth rate is no indication of how close the reproduction/multiplication rate is to 1. Alternatively you can see it in this way: because the growth rate is related as $C(t) \propto R(t) -1$, reducing the reproduction rate $R(t)$ by some factor will reduce the growth rate by a different factor. Alternative measurements In addition, the $R(t)$ value may be computed either based on other epidemiological parameters (contact rates and such things), or measured 'in the field' by data on contact tracing. Why $R(t)$ is not so useful The $R(t)$ value is a highly simplified measure. 
In most models, it represents an average reproduction, but the reality is that there is inhomogeneity, and this may have a big influence on conclusions made in relation to $R(t)$ (the same arguments apply to $C(t)$). For instance, consider a population as a mixture of locally different $R(t)$ values. For this case, bringing down the rate of spread by a factor of two will not bring down the average $R(t)$ from 2 to 1. There will be some buffering effect of regions with relatively higher local reproduction rates where the spread will keep going on. So, the measures that we take seem to get stuck at $R(t) \approx 1$. (Also related is the effect discussed here.) Another effect is that the computations for herd immunity are not correct, because inhomogeneities mean that immunity will have different effects in different places (and lucky for us, it is exactly those places where the spread is stronger and where immunization happens faster that the immunization will have the strongest effect). In addition, computations of $R_0$ may be wrong. Often they are based on the assumption that in the beginning $C(0) = (R_0-1)/\tau$. Then $R_0$ is determined based on measurements of $C(0)$ (the initial growth rate of the epidemiological curves) and $\tau$ (by determining the mean of the distribution of the serial interval). But this falsely assumes that all people are equally susceptible from the start.
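The Wallinga-Lipsitch relation quoted in the answer, $R(t) = (1 + C(t)\mu_\tau/k)^k$ for a gamma-distributed generation interval, is easy to evaluate numerically. The mean interval $\mu_\tau = 5$ days and growth rate $C = 0.05$/day below are illustrative values, not taken from the answer:

```python
import math

def R_from_growth(C, mu, k):
    """R = 1/M(-C) for a gamma generation interval with mean mu and
    shape k, i.e. R = (1 + C*mu/k)**k (Wallinga & Lipsitch 2007)."""
    return (1 + C * mu / k) ** k

C, mu = 0.05, 5.0  # growth rate per day, mean generation interval in days
R_exp = R_from_growth(C, mu, k=1)    # exponential interval: 1 + C*mu = 1.25
R_fix = R_from_growth(C, mu, k=1e9)  # nearly fixed interval: -> exp(C*mu)
print(R_exp, R_fix)
# Same observed growth rate C, different generation-interval shapes,
# different implied reproduction numbers R.
```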
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expans
Relationship between reproduction number $R(t)$ and growth-rate $C(t)$ The growth rate $C(t)$ and the $R(t)$ are very much related. $C(t)$ is the growth rate per unit of time. It indicates how fast t
Why is $R_t$ (or $R_0$) and not doubling rate or time the go-to metric for measuring Covid-19 expansion? Relationship between reproduction number $R(t)$ and growth-rate $C(t)$ The growth rate $C(t)$ and the $R(t)$ are very much related. $C(t)$ is the growth rate per unit of time. It indicates how fast the infections multiply. (Doubling time is related to the growth rate as its inverse, see: How to calculate the doubling rate for infections?) $R(t)$ is the factor by which each generation multiplies. It indicates how many new infections occur for each infected person. The reproduction rate $R$ tells by which factor the infections multiply at each step. But it is not a rate of growth with just different units (per generation instead of per time), because simultaneously infected people will heal or die and the net growth needs to account for those two effects together. So $R(t)$ tells a slightly different story than the growth rate. In terms of the SIR compartmental model you can consider two rates: $\beta$, the rate by which new infections occur, and $\gamma$, the rate by which infections disappear (due to healing or death). The (net) growth rate is the difference of these two, $C= \beta -\gamma$. The reproduction rate is the ratio of these two, $R = \beta/\gamma$. In the image below you see a schematic for the growth. The nodes represent infected people and from each node, we draw lines to people that will be infected next. In this example, every infected person will infect two new people. The reproduction rate $R(t)$ is the number of links for each node. It represents a multiplication factor in a chain reaction. The growth rate $C(t)$ (how fast this curve rises in time) will depend on the time in between each generation. If you know $C(t)$, then you do not yet know the underlying structure of the chain reaction. 
This structure of the chain reaction (summarized/simplified by $R(t)$) may be essential in understanding the mechanism and dynamics of the spread (and how it responds to environmental changes like vaccination/immunity or social distancing). If you know $\tau$, the time between infections (which can be viewed differently, e.g. serial interval or generation interval), then you can relate the reproduction number with the growth rate $$ C(t) = \frac{R(t)-1}{ \tau }$$ or $$ R(t) = 1 + C(t) \tau$$ There are other relationships possible for more complicated models. The point is mainly that you do not get a relationship that is a simple scale factor, $C(t) = \frac{R(t)}{ \tau }$. The $-1$ term occurs because you do not only have growth/reproduction but also decrease due to people becoming better or dying. A standard work explaining how to relate the growth rate and the reproduction number is "How generation intervals shape the relationship between growth rates and reproductive numbers" from Wallinga and Lipsitch (Proc Biol Sci, 22-02-2007, Vol 274:1609). They relate the reproduction number and the growth rate using the generation interval distribution and the moment generating function $M(s)$ of this distribution to end up with $$R(t) = \frac{1}{M(-C(t))}$$ And for instance, with a gamma distribution you get $M(s) = (1-s \frac{\mu_{\tau}}{k} )^{-k}$ and $$R(t) = \left( 1+ C(t) \frac{\mu_{\tau}}{k}\right)^{k} = 1 + C(t) \mu_{\tau} + \sum_{n=2}^\infty {k\choose n} \left(C(t) \frac{\mu_{\tau}}{k} \right)^n$$ which equals $ R(t) = 1 + C(t) \tau$ if $k = 1$ and will be approximately equal (to first order) when $|C(t)|\frac{\mu_{\tau}}{k} \ll 1$. (In the graph the generation interval is actually a degenerate distribution and you end up with $R = e^{\tau C(t)}$ instead of $ R(t) = 1 + C(t) \tau$.) Why $R(t)$ is useful $R(t)$ relates to the chain reaction The $R(t)$ value is important because it is closer to the underlying multiplicative mechanism of growth in terms of a chain reaction. 
This chain reaction amplifies if each event causes multiple new events in a ratio above 1, i.e. if more infections are being created than infections being dissolved (creating a snowball effect). The reaction reduces when each event causes less than one new event. In terms of $\beta$ (the rate of new infected people) and $\gamma$ (the rate of infected people healing or dying), you get growth when $\beta > \gamma$. The reproduction rate $R = \beta / \gamma$ relates directly to changes in $\beta$ (which may change due to immunisation/vaccination or social distancing). If $\beta$ changes by a certain factor then $R$ changes by the same factor. For the growth rate $C= \beta-\gamma$ it is less directly clear what the effect will be when $\beta$ changes. For instance, if $\beta$ reduces by half then this could represent a change of the growth rate $C=\beta-\gamma$ from $3 = 4-1$ to $1 = 2-1$, or it could just as well represent a change of growth rate from $3=9-6$ to $-1.5=4.5-6$. The growth rate on its own does not allow you to make the direct connection between relative changes in $\beta$ and how this influences the growth rate. On the other hand, the reproduction rate $R =\beta/\gamma$ changes in those situations from $4$ to $2$ or from $1.5$ to $0.75$ and expresses more clearly what will happen to the chain reaction (growth versus decrease) if the dynamics of the spread changes (which is more directly governed by $\beta$). With the reproduction rate, which you can see as the multiplication factor in the chain reaction, we know better how the amplification changes in terms of changes in the virus dynamics. For instance, if the reproduction rate is 2 and due to measures (or due to more people becoming immune) the rate is reduced by half, then the reproduction will be 1 and the chain reaction will become neutralized. The growth rate $C$ does not tell you by which factor you need to reduce the spread (the multiplication factor) in order to change the growth from increasing to decreasing. 
This is because the growth rate does not contain information about the multiplication factor in the underlying chain reaction. The reproduction rate is therefore a more natural descriptor that explains how the virus spreads. It is an indication of how the infections multiply in each generation. Computation of herd immunity A direct application is for instance in computing the level of immunity that is necessary to reach herd immunity by means of random immunization (vaccination). Future development of epidemiological curve Another useful effect is that the reproduction rate is a better indicator than the growth rate in determining how many people will become infected before the spread reduces. In the graph you see the virus reproduces with a factor 2 each generation, but this will slow down because other people get immune (and there will be fewer people to pass on the virus, so the multiplication will decrease). This is illustrated in the image below from this question, which tried to fit the growth curves in order to find $R(0)$ but had trouble finding a good fit. One reason for the problem in the fitting is that you can have the same growth rate for different values of $R(0)$. But in the image you see also that further in time the $R(0)$ value has a strong impact on the epidemiological curve. The slow down occurs earlier when the $R(t)$ is closer to 1 (when it is closer to 1 then it needs to drop relatively less in order to get equal to 1 or below). The growth rate is no indication of how close the reproduction/multiplication rate is to 1. Alternatively you can see it in this way: because the growth rate is related as $C(t) \propto R(t) -1$, reducing the reproduction rate $R(t)$ by some factor will reduce the growth rate by a different factor. Alternative measurements In addition, the $R(t)$ value may be computed either based on other epidemiological parameters (contact rates and such things), or measured 'in the field' by data on contact tracing. 
Why $R(t)$ is not so useful The $R(t)$ value is a highly simplified measure. In most models, it represents an average reproduction, but the reality is that there is inhomogeneity, and this may have a big influence on conclusions made in relation to $R(t)$ (the same arguments apply to $C(t)$). For instance, consider a population as a mixture of locally different $R(t)$ values. For this case, bringing down the rate of spread by a factor of two will not bring down the average $R(t)$ from 2 to 1. There will be some buffering effect of regions with relatively higher local reproduction rates where the spread will keep going on. So, the measures that we take seem to get stuck at $R(t) \approx 1$. (Also related is the effect discussed here.) Another effect is that the computations for herd immunity are not correct, because inhomogeneities mean that immunity will have different effects in different places (and lucky for us, it is exactly those places where the spread is stronger and where immunization happens faster that the immunization will have the strongest effect). In addition, computations of $R_0$ may be wrong. Often they are based on the assumption that in the beginning $C(0) = (R_0-1)/\tau$. Then $R_0$ is determined based on measurements of $C(0)$ (the initial growth rate of the epidemiological curves) and $\tau$ (by determining the mean of the distribution of the serial interval). But this falsely assumes that all people are equally susceptible from the start.
27,672
Is it in general helpful to add "external" datasets to the training dataset? [closed]
I think the examples you bring are mostly from computer vision/image recognition, and in that case external datasets are very likely to include similar signals/dynamics to the data at hand. A "car" is a "car" irrespective of its surroundings. A "good customer" or "abnormal shopping activity" is different in Luxembourg than it is in Moldova. Unless we actively account for "covariate shift" (input distribution changes) and/or "concept drift" (i.e., the correct output for a given input changes over time/space/etc.), "more data is helpful" only if we are lucky. We should note that this includes computer vision too; for example, if our additional data is biased in a way we are unaware of and/or unable to control (e.g. the photos are always taken at night-time or are over-exposed), that will not necessarily help the generalisability of our model.
27,673
Is it in general helpful to add "external" datasets to the training dataset? [closed]
At some point, adding more data will result in overfitting and worse out-of-sample prediction performance. Always. That papers report improved accuracy by leveraging additional data is not surprising at all. After all, people (both in academia and in industry) are heavily incentivized to report precisely this. Here is the relevant algorithm:

1. Pick an external dataset D.
2. Can you tell a story about how D *might* improve accuracy? If no: GOTO 1
3. Fit your model using D. Does it improve accuracy? If no: GOTO 1
4. Publish your accuracy improvement using D. Bonus points if you can get a press release.

Note how a publication only happens if accuracy improves. You don't see all the loops where accuracy didn't improve. This is called a "file drawer effect" (everything that is not successful ends up in a file drawer). The end result is a strong publication bias. Note also that step 2 is crucial. An ability to tell a story about how the accuracy improvement might have come about is indispensable, because if you don't have such a story, it's too blatant that you went on a wild goose chase. So: in order to know whether your external data actually did improve matters, you always need to keep from "overfitting on the test set", as the algorithm above does. If you follow this algorithm, don't be surprised if the "winner" does not perform as well in production as after this selection process (which in itself is an example of regression to the mean).
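A quick simulation (toy numbers, invented for illustration) shows why the selected "winner" regresses to the mean: even when every candidate dataset is completely useless, picking the best of many noisy evaluations inflates the reported accuracy, and a fresh evaluation falls back to the true level.

```python
import random

random.seed(0)

TRUE_ACC = 0.80   # every external dataset is actually useless
NOISE = 0.02      # accuracy differences are pure test-set noise

def measure():
    """One noisy accuracy evaluation on a finite test set."""
    return TRUE_ACC + random.gauss(0, NOISE)

at_selection, in_production = [], []
for _ in range(2000):
    # Steps 1-3 of the algorithm: try 20 external datasets,
    # keep the one with the apparently best accuracy.
    scores = [measure() for _ in range(20)]
    at_selection.append(max(scores))
    # Step 4 aftermath: the published "winner", re-evaluated later.
    in_production.append(measure())

sel = sum(at_selection) / len(at_selection)
prod = sum(in_production) / len(in_production)
print(f"winner's accuracy at selection:  {sel:.3f}")
print(f"winner's accuracy in production: {prod:.3f}")
```

The selection-time accuracy is systematically higher than the production accuracy even though nothing real improved.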
27,674
Is it in general helpful to add "external" datasets to the training dataset? [closed]
It depends. One way to think about this problem is as follows. The data in your training and test/out-of-sample sets can be modeled as h(x) + noise. Here, the noise is the variability in your data that is not explained by some common (theoretically optimal) model h(x). The important thing here is that if your training and test data are sampled from entirely different/unrelated distributions, then ALL of your training data is noise, even if on their own, both training and test set data are very well structured. What that means is that the more different the external dataset is from your test data, the greater the amount of noise in it. The greater the amount of noise, the easier it is to overfit (i.e. fit your model to noise - as defined above). For your car example, that would mean that a complex model might fit to the specifics of US number plates, which are not part of h(x) when it comes to detecting cars in Japan. Having said that, if your goal is to make your model more robust (i.e. you want your car-in-Japan model to still work if the number-plate design is changed, or the distribution of your OOS data changes in some other way), then introducing the US dataset might help - in this case, the Japanese idiosyncrasies also become a part of 'noise' and in, e.g., cross-validation, you will be forced to come up with perhaps simpler models that pick up features that work both in the US and in Japan, making your model more general and therefore more robust. So the answer is that it really depends on your data, on what the external data is, and on what you are trying to achieve.
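A toy illustration of the first point (all numbers invented): fit a straight line for a "target domain" relationship, then add plentiful external data that follows a different relationship. From the target domain's point of view the external data is pure noise, and out-of-sample error gets worse, not better.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_slope(x, y):
    """Least-squares slope for a line through the origin."""
    return np.sum(x * y) / np.sum(x * x)

# Target domain: y = 2x.  External domain: y = 0.5x (different h(x)).
x_jp = rng.normal(size=100);  y_jp = 2.0 * x_jp + rng.normal(scale=0.1, size=100)
x_us = rng.normal(size=900);  y_us = 0.5 * x_us + rng.normal(scale=0.1, size=900)
x_te = rng.normal(size=1000); y_te = 2.0 * x_te   # fresh target-domain test data

mse = {}
for name, (x, y) in {
    "target only":       (x_jp, y_jp),
    "target + external": (np.concatenate([x_jp, x_us]),
                          np.concatenate([y_jp, y_us])),
}.items():
    b = fit_slope(x, y)
    mse[name] = float(np.mean((y_te - b * x_te) ** 2))
    print(f"{name:18s} slope = {b:5.2f}   test MSE = {mse[name]:.3f}")
```

The nine-fold larger external sample drags the fitted slope toward its own relationship and inflates the test error on the target domain.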
27,675
What is the covariance/correlation between $X$ and $X^2$ (without assuming normality)?
The general form of the covariance depends on the first three moments of the distribution. To facilitate our analysis, we suppose that $X$ has mean $\mu$, variance $\sigma^2$ and skewness $\gamma$. The covariance of interest exists if $\gamma < \infty$ (i.e., the third moment is finite) and does not exist otherwise. Using the relationship between the raw moments and the cumulants, you have the general expression: $$\begin{equation} \begin{aligned} \mathbb{C}(X,X^2) &= \mathbb{E}(X^3) - \mathbb{E}(X) \mathbb{E}(X^2) \\[6pt] &= ( \mu^3 + 3 \mu \sigma^2 + \gamma \sigma^3 ) - \mu ( \mu^2 + \sigma^2 ) \\[6pt] &= 2 \mu \sigma^2 + \gamma \sigma^3. \\[6pt] \end{aligned} \end{equation}$$ The special case of an unskewed distribution with zero mean (e.g., the centred normal distribution) occurs when $\mu = 0$ and $\gamma = 0$, which gives zero covariance. Note that zero covariance occurs for any unskewed centred distribution, not just the normal; zero covariance does not, however, imply that $X$ and $X^2$ are independent. Extension to correlation: If we further assume that $X$ has finite kurtosis $\kappa$ then using this variance result it can be shown that: $$\mathbb{V}(X) = \sigma^2 \quad \quad \quad \quad \quad \mathbb{V}(X^2) = 4 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + (\kappa-1) \sigma^4.$$ It then follows that: $$\begin{align} \mathbb{Corr}(X,X^2) &= \frac{\mathbb{Cov}(X,X^2)}{\sqrt{\mathbb{V}(X) \mathbb{V}(X^2)}} \\[6pt] &= \frac{2 \mu \sigma^2 + \gamma \sigma^3}{\sqrt{\sigma^2 \cdot (4 \mu^2 \sigma^2 + 4 \mu \gamma \sigma^3 + (\kappa-1) \sigma^4)}} \\[6pt] &= \frac{2 \mu + \gamma \sigma}{\sqrt{4 \mu^2 + 4 \mu \gamma \sigma + (\kappa-1) \sigma^2}}. \\[6pt] \end{align}$$ For the special case of a random variable with zero mean we have $\mu=0$ which then gives: $$\mathbb{Corr}(X,X^2) = \frac{\gamma}{\sqrt{\kappa-1}},$$ which is the scale-adjusted skewness parameter.
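As a Monte-Carlo sanity check of these formulas (a simulation sketch, not part of the derivation): the standard exponential distribution has $\mu = \sigma = 1$, $\gamma = 2$ and $\kappa = 9$, so the formulas predict $\mathbb{C}(X,X^2) = 2\mu\sigma^2 + \gamma\sigma^3 = 4$ and $\mathbb{Corr}(X,X^2) = 4/\sqrt{20} \approx 0.894$.

```python
import math
import random

random.seed(1)

# Simulate X ~ Exponential(1): mu = sigma = 1, gamma = 2, kappa = 9,
# so the closed-form answers are Cov = 4 and Corr = 4 / sqrt(20).
n = 200_000
x = [random.expovariate(1.0) for _ in range(n)]
x2 = [v * v for v in x]

def mean(v):
    return sum(v) / len(v)

mx, mx2 = mean(x), mean(x2)
cov = mean([a * b for a, b in zip(x, x2)]) - mx * mx2
corr = cov / math.sqrt(mean([(a - mx) ** 2 for a in x])
                       * mean([(a - mx2) ** 2 for a in x2]))
print(f"cov  ~ {cov:.2f}")
print(f"corr ~ {corr:.3f}")
```

Both estimates land close to the theoretical values.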
27,676
Why is Elastic Net called Elastic Net?
Zou and Hastie in their paper proposing the method give the following explanation: In this paper we propose a new regularization technique which we call the elastic net. Similar to the lasso, the elastic net simultaneously does automatic variable selection and continuous shrinkage, and it can select groups of correlated variables. It is like a stretchable fishing net that retains ‘all the big fish’. Simulation studies and real data examples show that the elastic net often outperforms the lasso in terms of prediction accuracy. Regularization and variable selection via the elastic net (2005)
27,677
Model selection for GAM in R
If you want to select from among a group of covariates, then a principled way of doing this is to put some additional shrinkage on each of the smoothers in the model so that they can be penalised out of the model entirely if needed. In the typical setting the wiggliness penalty is based on the curvature (the second derivative) of the estimated function. This penalty affects the wiggly basis functions as they have a non-constant second derivative. The basis expansion that is performed on each covariate results in basis functions that live either in the null space or the range space of the penalty. Those in the range space are the wiggly functions that can be penalised and shrunk to ~zero effect if we don't need to fit such a wiggly function. The basis functions in the null space are a flat function (which is removed via an identifiability constraint as it is confounded with the model intercept) and a linear function, which have zero curvature. As such the penalty doesn't affect them. This is why you can estimate a linear effect in a GAM fitted via mgcv but you can't get rid of the linear part: it is totally unaffected by the penalty as it has no wiggliness. Giampiero Marra and Simon Wood (2011) showed that through an additional penalty targeted specifically at the penalty null space components, effective model selection could be performed in a GAM. The extra penalty only affects the perfectly smooth terms, but it has the effect of shrinking linear effects to zero and thus entirely out of the model if that is justified. There are two options in mgcv for this: shrinkage smoothers, and the double penalty approach. Shrinkage smoothers are special versions of the ordinary basis types, but they are subject to an eigen decomposition during formation of the penalty matrix in which those basis functions which are perfectly smooth return zero eigenvalues.
The shrinkage smoother just adds a very small value to terms with zero eigenvalue, which results in those terms now being affected by the usual wiggliness penalty used to select the smoothness parameters. This approach says that the wiggly functions should be shrunk more than the functions in the null space, as the small addition to the zero-eigenvalue terms means those terms are less affected by the wiggliness penalty than the functions in the range space. Shrinkage smoothers can be selected for some or all smooths by changing the basis type to one of the following: bs = 'ts' — for the shrinkage version of the thin plate regression spline basis, bs = 'cs' — for the shrinkage version of the cubic regression spline basis. This argument is added to whichever s() functions you want to shrink in the formula for the model. The double penalty approach simply adds a second penalty that only affects the functions in the null space. Now there are two penalties in effect: the usual wiggliness penalty that affects functions in the range space, and the shrinkage penalty that affects functions in the penalty null space. The second penalty allows the linear term to be shrunk as well, and together both penalties can result in a smooth function being entirely removed from the model. The advantage of the double penalty approach is that the null space and the range space functions are treated the same way from the point of view of shrinkage. In the shrinkage smoother approach, we are a priori expecting the wiggly terms to be shrunk more than the smooth terms. In the double penalty approach, we do not make that assumption and just let all the functions be shrunk. The disadvantage of the double penalty approach is that each smooth now requires two "smoothness" parameters to be estimated: the usual smoothness parameter associated with the wiggliness penalty, and the smoothness parameter that controls the shrinkage that applies to the functions in the null space.
This option is activated in mgcv via the select = TRUE argument to gam(), which means it is turned on for all smooths in the model formula. Marra and Wood's (2011) results suggested that the double penalty approach worked slightly better than the shrinkage smoother approach. Marra, G., and S. N. Wood. 2011. Practical variable selection for generalized additive models. Comput. Stat. Data Anal. 55: 2372–2387. doi:10.1016/j.csda.2011.02.004
27,678
How to Compare the Data Distribution of 2 datasets?
You can compare the distributions of the two columns using the two-sample Kolmogorov-Smirnov test; it is included in scipy.stats: https://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.ks_2samp.html From the stackoverflow topic:

from scipy.stats import ks_2samp
import numpy as np

np.random.seed(123456)
x = np.random.normal(0, 1, 1000)
y = np.random.normal(0, 1, 1000)
z = np.random.normal(1.1, 0.9, 1000)

>>> ks_2samp(x, y)
Ks_2sampResult(statistic=0.022999999999999909, pvalue=0.95189016804849647)
>>> ks_2samp(x, z)
Ks_2sampResult(statistic=0.41800000000000004, pvalue=3.7081494119242173e-77)

Under the null hypothesis the two distributions are identical. If the K-S statistic is small or the p-value is high (greater than the significance level, say 5%), then we cannot reject the hypothesis that the distributions of the two samples are the same. Conversely, we can reject the null hypothesis if the p-value is low.
27,679
How to Compare the Data Distribution of 2 datasets?
To compare all columns to all columns, you can create a response label column with "1" for rows from dataset 1 and "0" for rows from dataset 2, then build a classification task over this response label using all columns in the combined dataset. If you get a good AUC score, then the data is separable, and dataset 1 and dataset 2 are probably from two different distributions.
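A minimal sketch of this classifier two-sample test on simulated data (the datasets, the shift of 1.0, and the choice of logistic regression are all illustrative assumptions, not prescriptions):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Two toy datasets with the same first column but a shifted second column.
d1 = rng.normal(size=(1000, 2))
d2 = rng.normal(size=(1000, 2))
d2[:, 1] += 1.0                            # the distributions differ in one feature

X = np.vstack([d1, d2])
y = np.r_[np.ones(1000), np.zeros(1000)]   # label: which dataset each row came from

auc = cross_val_score(LogisticRegression(), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")   # clearly above 0.5 for shifted data
```

If the two datasets came from the same distribution, the cross-validated AUC would hover around 0.5 instead.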
27,680
Lasso penalty only applied to subset of regressors
Let $H_2$ be an orthogonal projector onto the column space of $X_2$. We have that \begin{align*} & \min_{\beta_1, \beta_2} \left\{ \|y - X_1\beta_1 - X_2\beta_2\|_2^2 + \lambda \|\beta_1\|_1 \right\} \\ = & \, \min_{\beta_1, \beta_2} \left\{ \|H_2\left(y - X_1\beta_1 \right) - X_2 \beta_2\|_2^2 + \|\left(I-H_2\right)\left(y - X_1\beta_1 \right) \|_2^2 + \lambda \|\beta_1 \|_1 \right\} \\ = & \, \min_{\beta_1} \min_{\beta_2} \left\{ \|H_2\left(y - X_1\beta_1 \right) - X_2 \beta_2\|_2^2 + \|\left(I-H_2\right)\left(y - X_1\beta_1 \right) \|_2^2 + \lambda \|\beta_1 \|_1 \right\}, \end{align*} where \begin{align*} \hat\beta_2 & = \arg\min_{\beta_2} \left\{ \|H_2\left(y - X_1\beta_1 \right) - X_2 \beta_2\|_2^2 + \|\left(I-H_2\right)\left(y - X_1\beta_1 \right) \|_2^2 + \lambda \|\beta_1 \|_1 \right\} \\ & = \arg\min_{\beta_2} \left\{ \|H_2\left(y - X_1\beta_1 \right) - X_2 \beta_2\|_2^2 \right\} \end{align*} satisfies $X_2 \hat\beta_2 = H_2 (y - X_1 \beta_1)$ for all $\beta_1$, since $H_2 (y - X_1 \beta_1) \in \mathrm{col}(X_2)$ for all $\beta_1$. Considering now the case where $X_2$ has full column rank, we further have that $$\hat\beta_2 = (X_2^T X_2)^{-1} X_2^T (y - X_1 \beta_1),$$ since $H_2 = X_2 (X_2^T X_2)^{-1} X_2^T$ in this case. Plugging this into the first optimization problem, we see that \begin{align*} \hat\beta_1 & = \arg\min_{\beta_1} \left\{ 0 + \|\left(I-H_2\right)\left(y - X_1\beta_1 \right) \|_2^2 + \lambda \|\beta_1 \|_1 \right\} \\ & =\arg\min_{\beta_1} \left\{ \|\left(I-H_2\right)y - \left(I-H_2\right)X_1\beta_1 \|_2^2 + \lambda \|\beta_1 \|_1 \right\}, \tag{*} \end{align*} which can be evaluated through the usual lasso computational tools. As whuber suggests in his comment, this result is intuitive since the unrestricted coefficients $\beta_2$ can cover the span of $X_2$, so that only the part of the space orthogonal to the span of $X_2$ is of concern when evaluating $\hat\beta_1$. 
Despite the notation being slightly more general, nearly anyone who has ever used lasso is familiar with this result. To see this, suppose that $X_2 = \mathbf{1}$ is the (length $n$) vector of ones, representing the intercept. Then the projection matrix $H_2 = \mathbf{1} \left( \mathbf{1}^T \mathbf{1} \right)^{-1} \mathbf{1}^T = \frac{1}{n} \mathbf{1} \mathbf{1}^T$, and, for any vector $v$, the orthogonal projection $\left( I - H_2 \right) v = v - \bar{v} \mathbf{1}$ just demeans the vector. Considering equation $(*)$, this is exactly what people do when they compute the lasso coefficients! They demean the data so that the intercept doesn't have to be considered.
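A small numeric sketch of the two-step recipe on simulated data (the design, coefficients, and penalty level are all invented; note that scikit-learn's Lasso uses the objective $\frac{1}{2n}\|r\|_2^2 + \alpha\|\beta\|_1$, a rescaling of $\lambda$ that does not affect the projection argument):

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(0)
n = 200
X1 = rng.normal(size=(n, 5))          # penalized block
X2 = rng.normal(size=(n, 2))          # unpenalized block
beta1 = np.array([1.5, 0.0, 0.0, -2.0, 0.0])
beta2 = np.array([3.0, -1.0])
y = X1 @ beta1 + X2 @ beta2 + rng.normal(scale=0.5, size=n)

# Orthogonal projector onto col(X2), and its complement
H2 = X2 @ np.linalg.solve(X2.T @ X2, X2.T)
M2 = np.eye(n) - H2

# Step 1: solve the residualized lasso problem (*) for beta1_hat
b1 = Lasso(alpha=0.1, fit_intercept=False).fit(M2 @ X1, M2 @ y).coef_

# Step 2: beta2_hat by plain least squares on what remains
b2 = np.linalg.solve(X2.T @ X2, X2.T @ (y - X1 @ b1))

print("beta1_hat:", np.round(b1, 2))   # true zeros stay (near) zero, rest shrunk
print("beta2_hat:", np.round(b2, 2))   # close to (3, -1), no shrinkage applied
```

The penalized block is sparse and shrunk toward zero, while the unpenalized block is recovered essentially at its least-squares values.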
Lasso penalty only applied to subset of regressors
27,681
Lasso penalty only applied to subset of regressors
Don't know that you need much "theory" behind such an approach. Penalized regression approaches (LASSO, ridge, or their hybrid elastic net regression) are tools for making bias-variance tradeoffs to improve model generalizability and performance. You certainly may choose to keep some variables unpenalized, as you propose for $\beta_2$, while others are penalized. For example, this paper examined the effectiveness of a vaccine by keeping the vaccination status unpenalized while incorporating other covariates with a ridge-regression L2 penalty. This approach avoided overfitting on covariates while allowing direct evaluation of the main predictor of interest. Questions about implementations in specific programming environments are off-topic on this site. One general way to approach this issue, as in the glmnet package in R, is to include a predictor-specific penalty factor that multiplies the overall choice of $\lambda$ before evaluating the objective function. Predictors have default penalty factors of 1, but a predictor with a specified penalty factor of 0 would not be penalized at all and one with an infinite penalty factor would always be excluded. Intermediate values of penalty factors differing among predictors can provide any desired differential penalization among the predictors. I suspect that this approach can be incorporated somehow into the tools provided by sklearn.
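The penalty-factor idea can also be emulated in any plain-lasso routine by rescaling columns: minimizing $\|y - Xb\|^2 + \lambda \sum_j w_j |b_j|$ is a plain lasso on the columns $X_j / w_j$, followed by unscaling. Here is a numpy-only sketch; the toy ISTA solver, data, and weights are mine, not glmnet's, and a zero penalty factor cannot be reached by finite rescaling.

```python
import numpy as np

def lasso_ista(X, y, lam, thresh, iters=30_000):
    """Toy proximal-gradient solver for ||y - X b||^2 + sum_j thresh_j |b_j|."""
    b = np.zeros(X.shape[1])
    t = 1.0 / (2 * np.linalg.norm(X, 2) ** 2)
    for _ in range(iters):
        b = b + 2 * t * X.T @ (y - X @ b)
        b = np.sign(b) * np.maximum(np.abs(b) - t * lam * thresh, 0.0)
    return b

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 4))
y = X @ np.array([2.0, 0.0, -1.5, 0.5]) + rng.normal(size=80)
lam = 10.0

# penalty factors: 1 = default, >1 penalized harder, <1 more gently
w = np.array([1.0, 1.0, 0.25, 4.0])

# rescaling trick: plain lasso on X_j / w_j, then unscale the coefficients
c = lasso_ista(X / w, y, lam, np.ones(4))
beta = c / w

# direct solver for the weighted problem, for comparison
beta_direct = lasso_ista(X, y, lam, w)
```

The two fits agree, confirming that differential penalization reduces to a column rescaling.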
27,682
Lasso penalty only applied to subset of regressors
I just wrote a module for this on python. Hope it helps. https://github.com/3zhang/Python-Lasso-ElasticNet-Ridge-Regression-with-Customized-Penalties
27,683
Feature extracted by max pooling vs mean pooling
I wouldn't say that either of them extracts features. Instead, it is the convolutional layers that construct/extract features, and the pooling layers compress them to a lower fidelity. The difference is in how the compression happens and what type of fidelity is retained:

A max-pool layer compresses by taking the maximum activation in a block. If you have a block with mostly small activations but one large activation, you will lose the information about the low activations. I think of this as saying "this type of feature was detected somewhere in this general area".

A mean-pool layer compresses by taking the mean activation in a block. If large positive activations are balanced by negative activations, the overall compressed activation will look like no activation at all. On the other hand, you retain some information about the low activations from the previous example.
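The contrast is easy to see on a single toy block (the activation values below are made up for illustration):

```python
import numpy as np

# a 2x2 block of activations: one strong detection among small/negative values
block = np.array([[ 5.0, -1.0],
                  [-2.0, -2.0]])

max_pool = block.max()    # keeps the strong detection
mean_pool = block.mean()  # cancellation: the detection disappears
```

Max pooling reports the detection (5.0), while mean pooling averages it away to exactly 0 here.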
27,684
Feature extracted by max pooling vs mean pooling
My opinion is that max and mean pooling have nothing to do with the type of features, but with translation invariance.

Imagine learning to recognise an 'A' vs a 'B' (no variation in the A's and B's pixels), first in a fixed position in the image. This can be done by a logistic regression (1 neuron): the weights end up being a template of the difference A - B.

Now what happens if you train to recognise at different locations in the image? You cannot do this with logistic regression sweeping over the image (i.e. approximating a convolutional layer with one filter) and labelling all sweeps of the image A or B as appropriate, because learning from the different positions interferes: effectively you try to learn the average of A - B as A/B are passed across your filter, but this is just a blur.

With max pooling, learning is only performed at the location of max activation (which is hopefully centred on the letter). I am not so sure about mean pooling, but I would imagine that more learning (i.e. weight adjustment) is done at the max-activation location, and that avoids the blurring.

I would encourage you to just implement such a simple network with 2 classes and 1 filter for the convolutional layer, then max/mean pooling and 1 output node, and inspect the weights/performance.
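The "blur" claim can be illustrated without training anything: averaging a sharp template over all positions smears it out. The template and image sizes below are toy numbers of my own.

```python
import numpy as np

letter = np.array([[0., 1., 0.],
                   [1., 0., 1.],
                   [1., 1., 1.]])   # a crude 3x3 'A'-like template

# embed the template at every possible offset of a 6x6 image
imgs = []
for r in range(4):
    for c in range(4):
        im = np.zeros((6, 6))
        im[r:r + 3, c:c + 3] = letter
        imgs.append(im)

# what a single filter effectively "learns" from all shifted positions:
# the average image, which is a smeared-out version of the template
blur = np.mean(imgs, axis=0)
```

No pixel in `blur` reaches the template's full intensity of 1: the sharp pattern has been averaged into a blur.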
27,685
Linear regression: *Why* can you partition sums of squares?
It seems to be that if you have a situation where $A = B + C$, then $A^2 = B^2 + 2BC + C^2$, not $A^2 = B^2 + C^2$. Why isn't that the case here?

Conceptually, the idea is that $BC = 0$ because $B$ and $C$ are orthogonal (i.e. perpendicular). In the context of linear regression here, the residuals $\epsilon_i = y_i - \hat{y}_i$ are orthogonal to the demeaned forecast $\hat{y}_i - \bar{y}$. The forecast from linear regression creates an orthogonal decomposition of $\mathbf{y}$ in a similar sense as $(3,4) = (3,0) + (0,4)$ is an orthogonal decomposition.

Linear algebra version:

Let: $$\mathbf{z} = \begin{bmatrix} y_1 - \bar{y} \\ y_2 - \bar{y}\\ \ldots \\ y_n - \bar{y} \end{bmatrix} \quad \quad \mathbf{\hat{z}} = \begin{bmatrix} \hat{y}_1 - \bar{y} \\ \hat{y}_2 - \bar{y} \\ \ldots \\ \hat{y}_n - \bar{y} \end{bmatrix} \quad \quad \boldsymbol{\epsilon} = \begin{bmatrix} y_1 - \hat{y}_1 \\ y_2 - \hat{y}_2 \\ \ldots \\ y_n - \hat{y}_n \end{bmatrix} = \mathbf{z} - \hat{\mathbf{z}}$$

Linear regression (with a constant included) decomposes $\mathbf{z}$ into the sum of two vectors: a forecast $\hat{\mathbf{z}}$ and a residual $\boldsymbol{\epsilon}$:
$$ \mathbf{z} = \hat{\mathbf{z}} + \boldsymbol{\epsilon} $$

Let $\langle .,. \rangle$ denote the dot product. (More generally, $\langle X,Y \rangle$ can be the inner product $E[XY]$.) Then
\begin{align*} \langle \mathbf{z} , \mathbf{z} \rangle &= \langle \hat{\mathbf{z}} + \boldsymbol{\epsilon}, \hat{\mathbf{z}} + \boldsymbol{\epsilon} \rangle \\ &= \langle \hat{\mathbf{z}}, \hat{\mathbf{z}} \rangle + 2 \langle \hat{\mathbf{z}},\boldsymbol{\epsilon} \rangle + \langle \boldsymbol{\epsilon},\boldsymbol{\epsilon} \rangle \\ &= \langle \hat{\mathbf{z}}, \hat{\mathbf{z}} \rangle + \langle \boldsymbol{\epsilon},\boldsymbol{\epsilon} \rangle \end{align*}
where the last line follows from the fact that $\langle \hat{\mathbf{z}},\boldsymbol{\epsilon} \rangle = 0$ (i.e. that $\hat{\mathbf{z}}$ and $\boldsymbol{\epsilon} = \mathbf{z}- \hat{\mathbf{z}}$ are orthogonal).

You can prove $\hat{\mathbf{z}}$ and $\boldsymbol{\epsilon}$ are orthogonal based upon how ordinary least squares constructs $\hat{\mathbf{z}}$: $\hat{\mathbf{z}}$ is the linear projection of $\mathbf{z}$ onto the subspace defined by the linear span of the regressors $\mathbf{x}_1$, $\mathbf{x}_2$, etc. The residual $\boldsymbol{\epsilon}$ is orthogonal to that entire subspace, hence $\hat{\mathbf{z}}$ (which lies in the span of $\mathbf{x}_1$, $\mathbf{x}_2$, etc.) is orthogonal to $\boldsymbol{\epsilon}$.

Note that as I defined $\langle .,.\rangle$ as the dot product, $\langle \mathbf{z} , \mathbf{z} \rangle = \langle \hat{\mathbf{z}}, \hat{\mathbf{z}} \rangle + \langle \boldsymbol{\epsilon},\boldsymbol{\epsilon} \rangle$ is simply another way of writing $\sum_i (y_i - \bar{y})^2 = \sum_i (\hat{y}_i - \bar{y})^2 + \sum_i (y_i - \hat{y}_i)^2$ (i.e. SSTO = SSR + SSE).
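The decomposition is easy to verify numerically on made-up data (toy design and coefficients below are my own): the cross term vanishes to machine precision, so SSTO = SSR + SSE.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
X = np.column_stack([np.ones(n), rng.normal(size=(n, 2))])  # constant included
y = X @ np.array([1.0, 2.0, -1.0]) + rng.normal(size=n)

beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
yhat = X @ beta

ssto = np.sum((y - y.mean()) ** 2)
ssr = np.sum((yhat - y.mean()) ** 2)
sse = np.sum((y - yhat) ** 2)

# cross term: zero because residuals are orthogonal to col(X),
# which contains both yhat and the constant vector (hence y-bar * 1)
cross = (yhat - y.mean()) @ (y - yhat)
```

Note that the constant column is what makes the demeaned forecast stay inside col(X); without an intercept the partition need not hold.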
27,686
Linear regression: *Why* can you partition sums of squares?
The whole point is showing that certain vectors are orthogonal and then using the Pythagorean theorem.

Let us consider multivariate linear regression $Y = X\beta + \epsilon$. We know that the OLS estimator is $\hat{\beta} = (X^tX)^{-1}X^tY$. Now consider the estimate $\hat{Y} = X\hat{\beta} = X(X^tX)^{-1}X^tY = HY$ (the matrix $H$ is also called the "hat" matrix), where $H$ is an orthogonal projection matrix onto $S(X)$. Now we have $Y - \hat{Y} = Y - HY = (I - H)Y$, where $(I-H)$ is a projection matrix onto the orthogonal complement of $S(X)$, which is $S^{\bot}(X)$. Thus we know that $Y-\hat{Y}$ and $\hat{Y}$ are orthogonal.

Now consider a submodel $Y = X_0\beta_0 + \epsilon$, where $X = [X_0 | X_1 ]$, and similarly we have the OLS estimator and estimate $\hat{\beta_0}$ and $\hat{Y_0}$ with projection matrix $H_0$ onto $S(X_0)$. Similarly, $Y - \hat{Y_0}$ and $\hat{Y_0}$ are orthogonal. And now $\hat{Y} - \hat{Y_0} = HY - H_0Y = HY - H_0HY = (I - H_0)HY$, where again $(I-H_0)$ is an orthogonal projection matrix onto the complement of $S(X_0)$, which is $S^{\bot}(X_0)$. Thus we have orthogonality of $\hat{Y} - \hat{Y_0}$ and $\hat{Y_0}$.

So in the end we have $||Y - \hat{Y}||^2 = ||Y||^2 - ||\hat{Y}||^2 = ||Y - \hat{Y_0}||^2 + ||\hat{Y_0}||^2 - ||\hat{Y} - \hat{Y_0}||^2 - ||\hat{Y_0}||^2$, and finally $||Y - \hat{Y_0}||^2 = ||Y - \hat{Y}||^2 + ||\hat{Y} - \hat{Y_0}||^2$.

Lastly, the mean $\bar{Y}$ is simply $\hat{Y_0}$ when considering the null model $Y = \beta_0 + e$.
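The final identity $||Y - \hat{Y_0}||^2 = ||Y - \hat{Y}||^2 + ||\hat{Y} - \hat{Y_0}||^2$ can be checked directly with hat matrices on made-up data (the designs below are my own toy example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 40
X0 = np.column_stack([np.ones(n), rng.normal(size=n)])  # submodel design
X1 = rng.normal(size=(n, 2))                            # extra regressors
X = np.hstack([X0, X1])                                 # full design

def hat(A):
    """Orthogonal projection matrix onto col(A)."""
    return A @ np.linalg.solve(A.T @ A, A.T)

y = rng.normal(size=n)
yh = hat(X) @ y     # full-model fit  (Y-hat)
yh0 = hat(X0) @ y   # submodel fit    (Y_0-hat)

lhs = np.sum((y - yh0) ** 2)
rhs = np.sum((y - yh) ** 2) + np.sum((yh - yh0) ** 2)
```

The identity holds because $\hat{Y} - \hat{Y_0}$ lies in $S(X)$ while $Y - \hat{Y}$ is orthogonal to all of $S(X)$.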
27,687
Neural networks output probability estimates?
If your activation function is, for example, the logistic function, then it will output a continuous value between 0 and 1; you can use softmax in the case of multiple outcome classes.
27,688
Neural networks output probability estimates?
Just for people who are still interested in this question: the softmax of state-of-the-art deep learning models is more of a score than a probability estimate. Most deep networks nowadays are overconfident; check the following paper if you are interested: https://arxiv.org/pdf/1706.04599.pdf
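The simplest fix studied in that paper, temperature scaling, just divides the logits by a constant $T > 1$ fitted on held-out data. A toy sketch of the effect (the logits and the value $T=3$ below are made up, not fitted):

```python
import numpy as np

def softmax(z, T=1.0):
    z = np.asarray(z, dtype=float) / T   # temperature T > 1 softens the scores
    e = np.exp(z - z.max())              # shift for numerical stability
    return e / e.sum()

logits = np.array([8.0, 2.0, 1.0])        # a typically overconfident output
conf_raw = softmax(logits).max()          # close to 1
conf_scaled = softmax(logits, T=3.0).max()  # noticeably less confident
```

Scaling leaves the argmax (the predicted class) unchanged while pulling the reported confidence toward better calibration.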
27,689
Neural networks output probability estimates?
In your NN, if you use a softmax output layer, you'll actually end up with an output vector of probabilities. This is actually the most common output layer to use for multi-class classification problems. To fetch the class label, you can perform an argmax() on the output vector to retrieve the index of the max probability across all labels.
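A minimal numpy sketch of this last step (the logit values are made up):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - np.max(z))   # shift for numerical stability
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])  # raw scores from the output layer
probs = softmax(logits)             # a probability vector: sums to 1
label = int(np.argmax(probs))       # index of the most probable class
```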
27,690
Average Marginal Effects interpretation
The average marginal effect gives you an effect on the probability, i.e. a number between 0 and 1. It is the average change in probability when x increases by one unit. Since a probit is a non-linear model, that effect will differ from individual to individual. What the average marginal effect does is compute it for each individual and then compute the average. To get the effect in percentage points you need to multiply by 100, so the chance of winning decreases by 41 percentage points.
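A hand-rolled sketch of this computation for a probit with one continuous regressor (the coefficients and data below are made up): the per-observation marginal effect is $\phi(\beta_0 + \beta_1 x_i)\,\beta_1$, and the AME is its sample average.

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.normal(size=1000)
b0, b1 = 0.2, -1.1   # pretend these are fitted probit coefficients

def phi(t):
    """Standard normal pdf."""
    return np.exp(-t ** 2 / 2) / np.sqrt(2 * np.pi)

me = phi(b0 + b1 * x) * b1   # marginal effect, one value per individual
ame = me.mean()              # average marginal effect: change in probability
```

Since the normal pdf caps at about 0.4, the AME is necessarily smaller in magnitude than the raw probit coefficient.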
27,691
Average Marginal Effects interpretation
These two links can be checked out for a detailed explanation: page 8 in https://cran.r-project.org/web/packages/margins/vignettes/TechnicalDetails.pdf and Appendix A in https://www3.nd.edu/~rwilliam/stats3/Margins02.pdf. Briefly, the average marginal effect of a variable is the average of the predicted changes in fitted values for a one-unit change in X (if it is continuous), taken across all X values, i.e., across all observations.
27,692
Average Marginal Effects interpretation
dydx means the change in the dependent variable (or regressand) Y for a change in the explanatory variable X (regressor). This is to be interpreted like a regression coefficient in a linear regression (in which the marginal effect equals the coefficient, unlike in regressions of binary dependent variables). A score of .41 means that for a 1-unit increase in X, Y (in a probit, this is your probability) will increase by .41, or 41 percentage points. eyex would return elasticities. Correct me if I'm wrong.
27,693
High autocorrelation when taking the L-th order of difference of a sequence of independent random numbers
Theory If the autocorrelation is going to have any meaning, we must suppose the original random variables $X_0, X_1, \ldots, X_N$ have the same variance, which--by a suitable choice of units of measure--we may set to unity. From the formula for the $L^\text{th}$ finite difference $$X^{(L)}_i=(\Delta^L(X))_i = \sum_{k=0}^L (-1)^{L-k}\binom{L}{k} X_{i+k}$$ for $0 \le i \le N-L$ and the independence of the $X_i$ we readily compute $$\operatorname{Var}(X^{(L)}_i) = \sum_{k=0}^L \binom{L}{k}^2 = \binom{2L}{L}\tag{1}$$ and for $0 \lt j \lt L$ and $i \le N-L-j$, $$\operatorname{Cov}(X^{(L)}_i, X^{(L)}_{i+j}) = (-1)^{j}\sum_{k=0}^{L-j} \binom{L}{k}\binom{L}{k+j} = (-1)^{j}\frac{4^L \binom{L}{j} j!\Gamma(L+1/2)}{\sqrt{\pi}(L+j)!}.\tag{2}$$ Dividing $(2)$ by $(1)$ gives the lag-$j$ serial correlation $\rho_j$. It is negative for odd $j$ and positive for even $j$. Stirling's Formula gives a readily interpretable approximation $$\log(|\rho_j|) \approx -\left(\frac{j^2}{L} - \frac{j^2}{2 L^2} + \frac{j^2 \left(j^2+1\right)}{6L^3}-\frac{j^4}{4 L^4} + O(L^{-5})O(j^6)\right)$$ As a function of $j$ its magnitude is roughly a Gaussian ("bell-shaped") curve, as we would expect of any diffusion-based procedure like successive differences. Here is a plot of $|\rho_1|$ through $|\rho_5|$ as a function of $L$, showing how rapidly the serial correlation approaches $1$. In order from top to bottom the dots represent $|\rho_1|$ through $|\rho_5|$. Conclusions Because these are purely mathematical relationships, they reveal little about the $X_i$. In particular, because all finite differences are linear combinations of the original variables, they provide no additional information that could be used to predict $X_{N+1}$ from $X_0, X_1, \ldots, X_N$. Practical observations As $L$ grows, the coefficients in the linear combinations grow exponentially. 
Notice that each $X^{(L)}_i$ is an alternating sum: specifically, in the middle of that sum appear relatively large coefficients close to $\binom{L}{L/2}$. Consider actual data subject to a little bit of random noise. This noise is multiplied by these large binomial coefficients and then those large results are nearly canceled by the alternating addition and subtraction. As a result, computing such finite differences for large $L$ tends to wipe out all information in the data and merely reflects tiny amounts of noise, including measurement error and floating point roundoff error. The apparent patterns in the differences shown in the question for $L=100$ and $L=168$ almost surely provide no meaningful information. (The binomial coefficients for $L=100$ get as large as $10^{29}$ and as small as $1$, implying double-precision floating point error is going to dominate the calculation.)
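For lag $1$, the ratio of $(2)$ to $(1)$ simplifies (via the Vandermonde identity) to $\rho_1 = -L/(L+1)$, which is easy to check numerically. Here is a minimal NumPy sketch, not part of the original answer; the sample size and seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(200_000)  # independent, unit-variance variables

def lag1_autocorr(v):
    """Sample lag-1 serial correlation of a 1-D array."""
    v = v - v.mean()
    return np.dot(v[:-1], v[1:]) / np.dot(v, v)

for L in (1, 2, 5, 10):
    dL = np.diff(x, n=L)          # L-th finite difference
    rho1_theory = -L / (L + 1)    # ratio of (2) to (1) at j = 1
    print(L, round(lag1_autocorr(dL), 3), round(rho1_theory, 3))
```

The empirical correlations track the theoretical values closely, and their magnitude approaches $1$ as $L$ grows, matching the plot described above.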
27,694
High autocorrelation when taking the L-th order of difference of a sequence of independent random numbers
This is more a comment or, at best, maybe a further clue to solve your question, but my reputation doesn't allow me to post comments. I replicated your experiment in Stata using draws from a standard Normal with the following code:

    clear all
    set obs 100000
    gen t = _n
    tsset t
    drawnorm x, n(100000)
    forvalues i = 1(1)100 {
        generate D`i' = D`i'.x
    }

Looking at the correlograms of the differenced variables, I was wondering why the confidence bands are so tiny. I have never seen such small confidence bands in a Stata correlogram. Any ideas? I was thinking this could be a clue because, with confidence bands so small, even the tiny autocorrelations from the furthest lags are being counted in your absolute autocorrelation, if I'm interpreting "absolute" correctly. Here is the correlogram for my dX_10... ...and here it is again, zoomed in on the first 10 lags...
27,695
High autocorrelation when taking the L-th order of difference of a sequence of independent random numbers
This is expected because the differences are not independent of each other. For example, $dX_1(1) \equiv X(2) - X(1)$ increases with $X(2)$, while $dX_1(2) \equiv X(3) - X(2)$ decreases with $X(2)$. Because consecutive elements of $dX_1$ share elements of $X$ with opposite signs, we expect them to be negatively correlated with each other. In fact, as we go to higher-order differences $dX_i$, consecutive values share a higher and higher fraction of the elements of $X$ that go into their definition, and their anticorrelation increases. However, if we did not know the shared element ($X(2)$ in my example), we would not be able to calculate any differences that include it. We therefore cannot use the anticorrelations in the differences to predict unknown elements of $X$ if they are generated independently of the known elements.
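The shared-element mechanism can be made concrete: since $X(2)$ enters the two consecutive first differences with opposite signs, $\operatorname{Cov}(dX_1(1), dX_1(2)) = -\operatorname{Var}(X(2)) = -1$ and the correlation is $-1/2$. A small NumPy sketch (not part of the original answer; sample size and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((100_000, 3))  # many independent triples X(1), X(2), X(3)

d1 = X[:, 1] - X[:, 0]  # dX_1(1) = X(2) - X(1): goes up when X(2) goes up
d2 = X[:, 2] - X[:, 1]  # dX_1(2) = X(3) - X(2): goes down when X(2) goes up

# Shared term X(2) enters with opposite signs, so Corr(d1, d2) should be -1/2.
print(np.corrcoef(d1, d2)[0, 1])
```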
27,696
Indicator variable for binary data: {-1,1} vs {0,1}
The interpretation of both the estimator of the indicator variable and the intercept differ. Let's start with $\{1,0\}$: Say you have the following model $$y_i = \beta_0 + treatment\cdot\beta_1$$ where $$treatment = \begin{cases} 0 & \text{if placebo} \\ 1 & \text{if drug} \end{cases}$$ In that case you end up with the following formulas for $y_i$: $$y_i = \begin{cases} \beta_0 + 0\cdot\beta_1 = \beta_0 & \text{if placebo} \\ \beta_0 + 1\cdot\beta_1 = \beta_0 + \beta_1 & \text{if drug} \end{cases}$$ So the interpretation of $\beta_0$ is the effect of the placebo and the interpretation of $\beta_1$ is the difference between the effect of the placebo and the effect of the drug. In effect, you can interpret $\beta_1$ as the improvement that the drug offers. Now let's look at $\{-1,1\}$: You then have the following model (again): $$y_i = \beta_0 + treatment\cdot\beta_1$$ but where $$treatment = \begin{cases} -1 & \text{if placebo} \\ 1 & \text{if drug} \end{cases}$$ In that case you end up with the following formulae for $y_i$: $$y_i = \begin{cases} \beta_0 + -1\cdot\beta_1 = \beta_0 - \beta_1& \text{if placebo} \\ \beta_0 + 1\cdot\beta_1 = \beta_0 + \beta_1 & \text{if drug} \end{cases}$$ The interpretation here is that $\beta_0$ is the mean of the placebo's effect and the drug's effect, and $\beta_1$ is the difference of the two treatments to that mean. So which do you use? The interpretation of $\beta_0$ in $\{0,1\}$ is basically a baseline. You set some standard treatment and all the other treatments (there can be multiple) are compared with that standard/baseline. Especially when you start adding in other covariates this remains easy to interpret with regards to the standard medical question: how do these drugs compare with a placebo or the established drug? But in the end it is all a matter of interpretation, which I explained above. So you should evaluate your hypotheses and check which interpretation makes the drawing of conclusions the most straightforward.
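Both interpretations can be confirmed with a least-squares fit. In the sketch below (the simulated effect sizes, sample size, and seed are illustrative choices, not from the answer), the $\{0,1\}$ intercept recovers the placebo group mean and its slope the drug-minus-placebo gap, while the $\{-1,1\}$ intercept recovers the midpoint of the two group means and its slope half the gap:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000
drug = rng.integers(0, 2, n)                    # 1 = drug, 0 = placebo
y = 2.0 + 1.5 * drug + rng.standard_normal(n)   # illustrative outcome

def ols(t):
    """Least-squares fit of y on an intercept and treatment code t."""
    Xmat = np.column_stack([np.ones(n), t])
    beta, *_ = np.linalg.lstsq(Xmat, y, rcond=None)
    return beta

b01 = ols(drug)            # {0,1} coding
bpm = ols(2 * drug - 1)    # {-1,1} coding

mean_placebo = y[drug == 0].mean()
mean_drug = y[drug == 1].mean()

# {0,1}: intercept = placebo mean, slope = drug minus placebo
assert np.allclose(b01, [mean_placebo, mean_drug - mean_placebo])
# {-1,1}: intercept = midpoint of the group means, slope = half the gap
assert np.allclose(bpm, [(mean_placebo + mean_drug) / 2,
                         (mean_drug - mean_placebo) / 2])
```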
27,697
Indicator variable for binary data: {-1,1} vs {0,1}
In the context of linear regression, $x_i \in \{0, 1\}$ is the more natural (and standard) method for coding binary variables (whether placing them on the left-hand side or the right-hand side of the regression). As @Jarko Dubbeldam explains, you can of course use the other interpretation, and the meaning of the coefficients will be different. To give an example the other way, coding output variables $y_i \in \{-1, 1\}$ is standard when programming or deriving the math underlying support vector machines. (When calling libraries, you want to pass the data in the format the library expects, which is probably the $\{0, 1\}$ formulation.) Try to use the notation that is standard for whatever you are doing/using. For any kind of linear model with an intercept term, the two methods will be equivalent in the sense that they're related by a simple linear transformation. Mathematically, it doesn't matter whether you use data matrix $X$ or data matrix $\tilde{X} = XA$ where $A$ is full rank. In generalized linear models, your estimated coefficients either way will be related by the linear transformation $A$ and the fitted values $\hat{y}$ will be the same.
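The $\tilde{X} = XA$ equivalence can be demonstrated directly for the two codings: the matrix $A$ that maps the $\{0,1\}$ design to the $\{-1,1\}$ design is full rank, so the fitted values agree and the coefficient vectors are related by $A$. A minimal sketch (simulated data and seed are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 500
x = rng.integers(0, 2, n).astype(float)
y = 1.0 + 0.8 * x + 0.1 * rng.standard_normal(n)

X = np.column_stack([np.ones(n), x])   # {0,1} coding
A = np.array([[1.0, -1.0],
              [0.0,  2.0]])            # sends the x column to 2x - 1
Xt = X @ A                             # same design in {-1,1} coding

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
beta_t, *_ = np.linalg.lstsq(Xt, y, rcond=None)

# Fitted values agree; coefficients are related by A
assert np.allclose(X @ beta, Xt @ beta_t)
assert np.allclose(beta, A @ beta_t)
```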
27,698
Indicator variable for binary data: {-1,1} vs {0,1}
This is more abstract (and perhaps useless), but I'll note that these two representations are, in a mathematical sense, actually group representations, and there is an isomorphism between them. The meaning of the indicator variable $T$, at heart a boolean, is "factor is true" or "factor is false". Given two events $T_1$ and $T_2$, you might ask "are the factors of these two events equivalent, e.g. are they either both true or both false?" In boolean logic this is $T_1 \Leftrightarrow T_2$. This defines a group structure $\mathbb{Z}_2$. Now, $\{1,0\}$ and $\{1,-1\}$ both form representations of this group, with the group operations $a \Leftrightarrow b = 1 - ((a+b) \bmod 2)$ and $a \Leftrightarrow b = ab$, respectively. The isomorphism from the first representation to the second is given by $\phi(a) = 2a-1$. This representation also extends to continuous indicator variables, i.e. probabilities. If $p$ is the probability for $T$ to be true, then (for independent $T$ and $T'$) the probability for $T \Leftrightarrow T'$ to be true is $p' \Leftrightarrow p = pp' + (1-p)(1-p')$. Under the isomorphism $t(p) = 2p-1$, this is $t \Leftrightarrow t' = tt'$. The quantity $t$ is a signed indicator between -1 and 1. So, calculations about probabilities of boolean operations are often much simpler in this basis.
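Both the boolean isomorphism and its probabilistic extension can be verified by brute force. A small sketch (the helper names `xnor01`, `xnor_pm`, and `phi` are my own labels, and the probabilities are arbitrary):

```python
import itertools

# XNOR in the {1, 0} representation (addition taken mod 2)
xnor01 = lambda a, b: 1 - ((a + b) % 2)
# XNOR in the {1, -1} representation is plain multiplication
xnor_pm = lambda a, b: a * b
# The isomorphism between the two representations
phi = lambda a: 2 * a - 1

# phi is a homomorphism: phi(a <=> b) == phi(a) <=> phi(b)
for a, b in itertools.product((0, 1), repeat=2):
    assert phi(xnor01(a, b)) == xnor_pm(phi(a), phi(b))

# The same map works for probabilities: with independent T, T',
# P(T <=> T') = p p' + (1 - p)(1 - p'), and phi turns that into a product.
p, q = 0.3, 0.8
lhs = phi(p * q + (1 - p) * (1 - q))
rhs = phi(p) * phi(q)
assert abs(lhs - rhs) < 1e-12
```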
27,699
Central Limit Theorem for Markov Chains
Alex R.'s answer is almost sufficient, but I add a few more details. In On the Markov Chain Central Limit Theorem – Galin L. Jones, if you look at theorem 9, it says, If $X$ is a Harris ergodic Markov chain with stationary distribution $\pi$, then a CLT holds for $f$ if $X$ is uniformly ergodic and $E[f^2] < \infty$. For finite state spaces, all irreducible and aperiodic Markov chains are uniformly ergodic. The proof for this involves some considerable background in Markov chain theory. A good reference would be Page 32, at the bottom of Theorem 18 here. Hence, the Markov chain CLT would hold for any function $f$ that has a finite second moment. The form the CLT takes is described as follows. Let $\bar{f}_n$ be the time averaged estimator of $E_{\pi}[f]$, then as Alex R. points out, as $n \to \infty$, $$\bar{f}_n = \frac{1}{n} \sum_{i=1}^n f(X_i) \overset{\text{a.s.}}{\to} E_\pi[f].$$ The Markov chain CLT is $$\sqrt{n} (\bar{f}_n - E_\pi[f]) \overset{d}{\to} N(0, \sigma^2), $$ where $$\sigma^2 = \underbrace{\operatorname{Var}_\pi(f(X_1))}_\text{Expected term} + \underbrace{2 \sum_{k=1}^\infty \operatorname{Cov}_\pi(f(X_1), f(X_{1+k}))}_\text{Term due to Markov chain}. $$ A derivation for the $\sigma^2$ term can be found on Page 8 and Page 9 of Charles Geyer's MCMC notes here
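For a two-state chain the covariance series in $\sigma^2$ is geometric, so the infinite sum collapses and the CLT variance can be compared against simulation. The sketch below is my own illustration, not from the answer: the chain on $\{0,1\}$ with $P(0 \to 1) = a$, $P(1 \to 0) = b$ has stationary probability $\pi_1 = a/(a+b)$ and lag-$k$ correlation $(1-a-b)^k$ for $f(x) = x$:

```python
import numpy as np

rng = np.random.default_rng(4)

a, b = 0.3, 0.2                  # illustrative transition probabilities
pi1 = a / (a + b)                # stationary probability of state 1
lam = 1.0 - a - b                # second eigenvalue of the transition matrix

# sigma^2 = Var_pi(f) * (1 + 2 * sum_k lam^k) with a geometric tail
sigma2 = pi1 * (1 - pi1) * (1 + 2 * lam / (1 - lam))

def chain_mean(n):
    """Time average of f(x) = x over one stationary run of length n."""
    x = np.empty(n, dtype=int)
    x[0] = rng.random() < pi1    # start from the stationary distribution
    for t in range(1, n):
        p_one = a if x[t - 1] == 0 else 1.0 - b
        x[t] = rng.random() < p_one
    return x.mean()

n, reps = 2000, 1000
z = np.array([np.sqrt(n) * (chain_mean(n) - pi1) for _ in range(reps)])
print(z.var(), sigma2)           # the two should be close for large n, reps
```

With these values $\sigma^2 = 0.24 \cdot 3 = 0.72$, three times the i.i.d. variance $\operatorname{Var}_\pi(f) = 0.24$, showing how positive autocorrelation inflates the Monte Carlo error.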
27,700
Central Limit Theorem for Markov Chains
The "usual" result for Markov Chains is the Birkhoff Ergodic Theorem, which says that $$\frac{1}{n}\sum_{i=1}^nf(X_i)\rightarrow E_{\pi}[f],$$ where $\pi$ is the stationary distribution, and $f$ satisfies $E|f(X_1)|<\infty$, and the convergence is almost-sure. Unfortunately the fluctuations of this convergence are generally quite difficult to characterize. This is mainly due to the extreme difficulty of figuring out total variation bounds on how quickly $X_i$ converge to the stationary distribution $\pi$. There are known cases where the fluctuations are analogous to the CLT, and you can find some conditions on the drift which make the analogy hold: On the Markov Chain Central Limit Theorem -- Galin L. Jones (See Theorem 1). There are also stupid situations, for example a chain with two states, where one is absorbing (i.e. $P(1\rightarrow 2)=1$ and $P(2\rightarrow 1)=0$). In this case there are no fluctuations, and you get convergence to a degenerate normal distribution (a constant).
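The degenerate case mentioned above is easy to see explicitly: starting the absorbing chain in state 1, the path is $1, 2, 2, 2, \ldots$, so the time average of $f(x) = x$ is $(2n-1)/n$ and the scaled fluctuation $\sqrt{n}(\bar{f}_n - 2) = -1/\sqrt{n}$ shrinks to zero instead of stabilizing. A quick sketch (my illustration, not from the answer):

```python
import numpy as np

# Absorbing two-state chain: P(1 -> 2) = 1, P(2 -> 0) unused since 2 absorbs.
# Starting from state 1, the path is deterministic: 1, 2, 2, 2, ...
for n in (10, 1000, 100_000):
    path = np.full(n, 2)
    path[0] = 1
    fluct = np.sqrt(n) * (path.mean() - 2)  # equals -1/sqrt(n), tending to 0
    print(n, fluct)
```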