13,801
What tests do I use to confirm that residuals are normally distributed?
No test will tell you your residuals are normally distributed. In fact, you can reliably bet that they are not. Hypothesis tests are not generally a good idea as checks on your assumptions.

The effect of non-normality on your inference is not generally a function of sample size*, but the result of a significance test is. A small deviation from normality will be obvious at a large sample size, even though the answer to the question of actual interest ('to what extent did this impact my inference?') may be 'hardly at all'. Correspondingly, a large deviation from normality at a small sample size may not approach significance.

* (added in edit) -- actually that's much too weak a statement. The impact of non-normality actually decreases with sample size pretty much any time the CLT and Slutsky's theorem are going to hold, while the ability to reject normality (and presumably avoid normal-theory procedures) increases with sample size ... so just when you're most able to identify non-normality tends to be when it doesn't matter$^\dagger$ anyway, and the test is no help when it actually matters, in small samples.

$\dagger$ Well, at least as far as significance level goes. Power can still be an issue, though if we are considering large samples, as here, that may be less of an issue as well.

What comes closer to measuring effect size is some diagnostic (either a display or a statistic) that measures the degree of non-normality in some way. A Q-Q plot is an obvious display, and Q-Q plots from the same population at two different sample sizes are at least both noisy estimates of the same curve, showing roughly the same 'non-normality'; such a diagnostic should at least approximately be monotonically related to the desired answer to the question of interest.
If you must use a test, Shapiro-Wilk is probably about as good as anything else (the Chen-Shapiro test is typically a bit better on alternatives of common interest, but harder to find implementations of) -- but it's answering a question you already know the answer to; every time you fail to reject, it's giving an answer you can be sure is wrong.
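The sample-size point can be seen in a quick simulation (a numpy/scipy sketch, not part of the original answer; the $t_{10}$ distribution, seed, and sample sizes are arbitrary choices). The same mild non-normality is tested at two sample sizes:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# The same mild non-normality (a t distribution with 10 df) at two sample sizes
small = rng.standard_t(df=10, size=50)
large = rng.standard_t(df=10, size=4000)

w_small, p_small = stats.shapiro(small)
w_large, p_large = stats.shapiro(large)

# The deviation from normality is identical; only the sample size differs,
# yet the large sample is overwhelmingly "significant" while the small one
# typically is not.
print(f"n=50:   W={w_small:.4f}, p={p_small:.3f}")
print(f"n=4000: W={w_large:.4f}, p={p_large:.2e}")
```

The test statistic $W$ barely moves; only the p-value collapses as $n$ grows.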
13,802
What tests do I use to confirm that residuals are normally distributed?
The Shapiro-Wilk test is one possibility. This test is implemented in almost all statistical software packages. The null hypothesis is that the residuals are normally distributed; thus a small p-value indicates you should reject the null and conclude the residuals are not normally distributed. Note that if your sample size is large you will almost always reject, so visualization of the residuals is more important.
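A minimal usage sketch (assuming numpy and scipy; the simulated regression with truly normal errors is made up for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate a simple linear regression with genuinely normal errors
x = rng.uniform(0, 10, size=200)
y = 2.0 + 3.0 * x + rng.normal(scale=1.5, size=200)

# Fit by least squares and compute residuals
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Shapiro-Wilk on the residuals: a large p-value means no evidence
# against normality (not confirmation of it)
w, p = stats.shapiro(residuals)
print(f"W = {w:.4f}, p = {p:.3f}")
```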
13,803
What tests do I use to confirm that residuals are normally distributed?
From Wikipedia: Tests of univariate normality include D'Agostino's K-squared test, the Jarque–Bera test, the Anderson–Darling test, the Cramér–von Mises criterion, the Lilliefors test for normality (itself an adaptation of the Kolmogorov–Smirnov test), the Shapiro–Wilk test, Pearson's chi-squared test, and the Shapiro–Francia test. A 2011 paper from the Journal of Statistical Modeling and Analytics [1] concludes that Shapiro-Wilk has the best power for a given significance level, followed closely by Anderson-Darling, when comparing the Shapiro-Wilk, Kolmogorov-Smirnov, Lilliefors, and Anderson-Darling tests.
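Several of the listed tests are available in scipy.stats; a sketch running a few of them on one sample (the seed and sample size are arbitrary choices):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.normal(size=500)

sw = stats.shapiro(x)                 # Shapiro-Wilk
jb = stats.jarque_bera(x)             # Jarque-Bera
ad = stats.anderson(x, dist='norm')   # Anderson-Darling (critical values, no p-value)

# A Lilliefors-style check: KS against a normal with estimated parameters.
# The standard KS p-value is only approximate here, since the parameters
# were estimated from the same data.
z = (x - x.mean()) / x.std(ddof=1)
ks = stats.kstest(z, 'norm')

print(f"Shapiro-Wilk:     W  = {sw.statistic:.4f}, p = {sw.pvalue:.3f}")
print(f"Jarque-Bera:      JB = {jb.statistic:.4f}, p = {jb.pvalue:.3f}")
print(f"Anderson-Darling: A2 = {ad.statistic:.4f}")
print(f"KS (est. params): D  = {ks.statistic:.4f}")
```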
13,804
Why does increasing the sample size of coin flips not improve the normal curve approximation?
In the second case, by increasing the number of tosses, you increase the number of bins a single trial can fall into. While the first case of experiment 2 has a maximum of only 100 bins that can be filled, the last example has 10,000 bins. You increased the "resolution" of your experiment by a factor of 100 (i.e., one bin in your first experiment is now represented by roughly 100 in your second). Of course, this means that you would expect to require a factor of 100 more data to fill your bins.
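The "resolution" argument can be checked with a short simulation (a sketch, not from the original answer; the helper `bin_occupancy` is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

def bin_occupancy(n_flips, n_reps):
    """Fraction of the n_flips + 1 possible head-counts actually observed."""
    counts = rng.binomial(n_flips, 0.5, size=n_reps)
    return len(np.unique(counts)) / (n_flips + 1)

# Same number of repetitions, 100x the "resolution"
occ_small = bin_occupancy(100, 1000)
occ_large = bin_occupancy(10_000, 1000)

print(f"  100 flips, 1000 reps: {occ_small:.2%} of bins filled")
print(f"10000 flips, 1000 reps: {occ_large:.2%} of bins filled")
```

With 100 times more bins but the same number of repetitions, only a tiny fraction of the histogram's bins get any data, so the histogram looks much rougher.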
13,805
Why does increasing the sample size of coin flips not improve the normal curve approximation?
You can think of an individual coin flip as an independent Bernoulli trial. One trial will give you either heads/tails or success/failure, respectively. If you repeat this, say, 100,000 times, and the coin is fair, the average number of heads will be very close to 0.5. Now if you instead increase the number of trials to 1,000 and keep the repetition at 1, you will get a sequence of 1,000 successes/failures and cannot say much about the probability of observing, on average, 500 heads unless you increase the number of repetitions for each of those independent trials. As the number of repetitions increases, you will get a better and better approximation to the normal distribution. For me it is easier to think of the trials not as “tosses” or “sample sizes” but instead as separate coins, and of the repetitions as the number of flips of each of those coins. Then it also makes intuitive sense that by increasing the number of coins (or trials) while keeping the total number of repetitions (or flips) constant, the approximation of the data to the normal distribution gets worse.
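One way to quantify "better and better approximation" is the Kolmogorov-Smirnov distance between the standardized head-counts and a standard normal (a sketch, not from the original answer; the counts 10/100/10,000 and the seed are arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n_flips = 1000
mu, sd = n_flips * 0.5, np.sqrt(n_flips * 0.25)  # exact binomial mean and sd

ks_by_reps = {}
for n_reps in (10, 100, 10_000):
    heads = rng.binomial(n_flips, 0.5, size=n_reps)
    # KS distance of the empirical distribution from the standard normal
    ks_by_reps[n_reps] = stats.kstest((heads - mu) / sd, 'norm').statistic
    print(f"{n_reps:>6} repetitions: KS distance = {ks_by_reps[n_reps]:.3f}")
```

The distance shrinks as the number of repetitions grows, even though each individual "coin" is unchanged.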
13,806
Why does increasing the sample size of coin flips not improve the normal curve approximation?
I think the other answers here are great, but wanted to add an answer that extends to another statistical tool. You're starting with a baseline that you think should approximate a normal curve, and then going from there to see if you can better approximate a normal curve. Try going the other direction, and see what you can do to do a worse job of approximating. Try simulations where you have 10 flips and 1000 repetitions. Compare this to simulations where you have 1000 flips and 10 repetitions. It should be clear that the former case has the better approximation. The extension that I want to make is to ANOVA (analysis of variance). You see a lot of new data scientists who have a poor grasp of this problem and design their studies so that they have a lot of flips but few repetitions. They have a lot of data, but it says less than they'd like. It's like measuring every leaf on a tree, but only having two trees. We can say quite a bit about the leaves on those two trees, but not about leaves on trees in general. You'd have been better off getting a much smaller sample of leaves, and getting a lot of trees.
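The tree/leaf analogy can be put into a quick simulation (a sketch, not from the original answer; the between-tree and within-tree standard deviations, 2.0 and 0.5, and the helper name `estimate_sd` are made-up choices):

```python
import numpy as np

rng = np.random.default_rng(2)

def estimate_sd(n_trees, leaves_per_tree, n_sims=2000):
    """Std. dev. of the estimated overall mean leaf size across simulated studies."""
    estimates = []
    for _ in range(n_sims):
        tree_means = rng.normal(10.0, 2.0, size=n_trees)       # trees differ a lot
        leaves = rng.normal(tree_means[:, None], 0.5,          # leaves on one tree
                            size=(n_trees, leaves_per_tree))   # differ only a little
        estimates.append(leaves.mean())
    return np.std(estimates)

# Same total of 2000 measurements in both designs
sd_few_trees = estimate_sd(2, 1000)
sd_many_trees = estimate_sd(200, 10)

print(f"2 trees x 1000 leaves: estimate sd = {sd_few_trees:.3f}")
print(f"200 trees x 10 leaves: estimate sd = {sd_many_trees:.3f}")
```

With the same total number of measurements, the many-trees design gives a far more stable estimate of the population mean, because the between-tree variation dominates.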
13,807
Why does increasing the sample size of coin flips not improve the normal curve approximation?
To gain some additional intuition, consider the following: imagine you do only a single repetition. In that case you can increase the number of tosses all you want, but the result is not going to resemble a normal distribution, since your histogram will only have a single peak. The normal distribution is an approximation for the probability distribution (of the binomial distribution). What you did was not create this distribution; instead, you approximated it by using a limited (and small) number of simulations. (And what you discovered is that this approximation becomes worse when you increase the number of bins in the histogram.) So you need both a high number of tosses and a high number of repetitions. When the number of tosses is high, the binomial distribution (multiple coin tosses) can be approximated by the normal distribution. When the number of repetitions/simulations is high, the histogram of these experiments approximates the density of the binomial distribution.
13,808
Out of Bag Error makes CV unnecessary in Random Forests?
Training error (as in predict(model, data=train)) is typically useless. Unless you do (non-standard) pruning of the trees, it cannot be much above 0 by design of the algorithm. Random forest uses bootstrap aggregation of decision trees, which are known to overfit badly. This is like training error for a 1-nearest-neighbour classifier. However, the algorithm offers a very elegant way of computing the out-of-bag error estimate, which is essentially an out-of-bootstrap estimate of the aggregated model's error. The out-of-bag error is the estimated error for aggregating the predictions of the $\approx \frac{1}{e}$ fraction of the trees that were trained without that particular case. The models aggregated for the out-of-bag error will only be independent if there is no dependence between the input data rows, i.e. each row = one independent case: no hierarchical data structure, no clustering, no repeated measurements. So the out-of-bag error is not exactly the same as a cross-validation error (fewer trees for aggregating, more copies of training cases), but for practical purposes it is close enough. What would make sense in order to detect overfitting is comparing the out-of-bag error with an external validation. However, unless you know about clustering in your data, a "simple" cross-validation error will be prone to the same optimistic bias as the out-of-bag error: the splitting is done according to very similar principles. You'd need to compare out-of-bag or cross-validation error with the error for a well-designed test experiment to detect this.
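The contrast between near-zero training error and the out-of-bag estimate can be seen directly in scikit-learn (a sketch on synthetic data, not from the original answer; dataset sizes and seeds are arbitrary):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification data with independent rows (no clustering)
X, y = make_classification(n_samples=500, n_features=10,
                           n_informative=5, random_state=0)

clf = RandomForestClassifier(n_estimators=200, oob_score=True, random_state=0)
clf.fit(X, y)

train_acc = clf.score(X, y)
print(f"training accuracy: {train_acc:.3f}")    # ~1.0 by design of the algorithm
print(f"OOB accuracy:      {clf.oob_score_:.3f}")  # the honest-ish estimate
```

The OOB score is only trustworthy here because each row is an independent case, as the answer stresses.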
13,809
Out of Bag Error makes CV unnecessary in Random Forests?
Out-of-bag error is useful, and may replace other performance estimation protocols (like cross-validation), but should be used with care. Like cross-validation, performance estimation using out-of-bag samples is computed using data that were not used for learning. If the data have been processed in a way that transfers information across samples, the estimate will (probably) be biased. Simple examples that come to mind are performing feature selection or missing value imputation. In both cases (and especially for feature selection) the data are transformed using information from the whole data set, biasing the estimate.
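The feature-selection leak described above can be demonstrated with scikit-learn on pure-noise data (a sketch, not from the original answer; all sizes and names are arbitrary choices):

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Pure noise: no feature predicts y, so honest accuracy should be near 0.5
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2000))
y = rng.integers(0, 2, size=100)

# WRONG: select features using ALL the data, then cross-validate
X_sel = SelectKBest(f_classif, k=20).fit_transform(X, y)
leaky = cross_val_score(LogisticRegression(max_iter=1000), X_sel, y, cv=5).mean()

# RIGHT: selection happens inside each training fold
pipe = make_pipeline(SelectKBest(f_classif, k=20),
                     LogisticRegression(max_iter=1000))
honest = cross_val_score(pipe, X, y, cv=5).mean()

print(f"leaky CV accuracy:  {leaky:.2f}")   # optimistically high
print(f"honest CV accuracy: {honest:.2f}")  # near chance
```

The same caution applies to the out-of-bag estimate: if the selection (or imputation) touched the whole data set before the forest was fit, the OOB error inherits the same optimism.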
13,810
What is the interpretation of the covariance of regression coefficients?
The most basic use of the covariance matrix is to obtain the standard errors of regression estimates. If the researcher is only interested in the standard errors of the individual regression parameters themselves, they can just take the square root of the diagonal to get the individual standard errors. However, oftentimes you may be interested in a linear combination of regression parameters. For example, if you have an indicator variable for a given group, you may be interested in the group mean, which would be $\beta_0 + \beta_{\rm grp}$. Then, to find the standard error for that group's estimated mean, you would have $\sqrt{X^\top S X}$, where $X$ is the vector of your contrasts and $S$ is the covariance matrix. In our case, if we only have the additional covariate "grp", then $X = (1,1)$ ($1$ for the intercept, $1$ for belonging to the group). Furthermore, the covariance matrix (or, moreover, the correlation matrix, which is uniquely identified from the covariance matrix but not vice versa) can be very useful for certain model diagnostics. If two variables are highly correlated, one way to think about it is that the model is having trouble figuring out which variable is responsible for an effect (because they are so closely related). This can be helpful for a whole variety of cases, such as choosing subsets of covariates to use in a predictive model; if two variables are highly correlated, you may only want to use one of the two in your predictive model.
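A numpy sketch of this computation (the simulated data are made up to match the "grp" example; true coefficients 1.0 and 0.5 are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# y = b0 + b1 * grp + noise, where grp is a 0/1 indicator
n = 200
grp = rng.integers(0, 2, size=n)
y = 1.0 + 0.5 * grp + rng.normal(scale=1.0, size=n)

# OLS fit and the estimated covariance matrix S of the coefficients
X = np.column_stack([np.ones(n), grp])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
sigma2 = resid @ resid / (n - 2)
S = sigma2 * np.linalg.inv(X.T @ X)

# Standard error of the group mean b0 + b1 via the contrast c = (1, 1)
c = np.array([1.0, 1.0])
se_group_mean = np.sqrt(c @ S @ c)
print(f"group mean = {beta.sum():.3f}, SE = {se_group_mean:.3f}")
```

Note that the off-diagonal entry of $S$ enters the sum $c^\top S c$: ignoring it and adding the two diagonal variances would give the wrong standard error.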
13,811
What is the interpretation of the covariance of regression coefficients?
There are two "kinds" of regression coefficients:

"True" regression coefficients (usually denoted $\beta$) that describe the underlying data-generating process of the data. These are fixed numbers, or "parameters." An example would be the speed of light $c$, which (we assume) is always the same everywhere in the accessible universe.

Estimated regression coefficients (usually denoted $b$ or $\hat \beta$) that are calculated from samples of the data. Samples are collections of random variables, so estimated regression coefficients are also random variables. An example would be an estimate for $c$ obtained in an experiment.

Now think about what covariance means. Take any two random variables $X$ and $Y$. If $\left| \mathrm{Cov}\left(X,Y\right) \right|$ is high, then whenever you draw a large absolute value of $X$ you can also expect to draw a large absolute value of $Y$ in the same direction. Note that "high" here is relative to the amount of variation in $X$ and $Y$, as pointed out in the comments.

The (estimated) covariance of two regression coefficients is the covariance of the estimates, $b$. If the covariance between estimated coefficients $b_1$ and $b_2$ is high, then in any sample where $b_1$ is high, you can also expect $b_2$ to be high. In a more Bayesian sense, $b_1$ contains information about $b_2$. Note again that "high" is relative. Here "$b_1$ is high" means "$b_1$ is high relative to its standard error," and their covariance being "high" means "high relative to the product of their standard errors." One way to smooth out these interpretive hiccups is to standardize each regression input by dividing by its standard deviation (or two standard deviations in some cases).

One user on this site described $\mathrm{Cov}\left(b_1,b_2\right)$ as "a bit of a fudge," but I don't entirely agree. For one thing, you could use this interpretation to come up with informative priors in Bayesian regression. As for what this is actually used for, Cliff AB's answer is a good summary.
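The sampling-distribution reading ("in any sample where $b_1$ is high, you can expect ...") can be checked by refitting the regression on many simulated samples (a sketch, not part of the original answer; the positively correlated predictors and all constants are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_sims = 100, 2000
betas = np.empty((n_sims, 2))

for i in range(n_sims):
    # Two positively correlated predictors
    x1 = rng.normal(size=n)
    x2 = 0.8 * x1 + 0.6 * rng.normal(size=n)
    y = 1.0 + 2.0 * x1 + 3.0 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    betas[i] = np.linalg.lstsq(X, y, rcond=None)[0][1:]  # keep (b1, b2)

emp_cov = np.cov(betas.T)
print(emp_cov)
```

With positively correlated predictors, the off-diagonal entry comes out negative: across samples, when $b_1$ is estimated high, $b_2$ tends to be estimated low, because the model cannot fully separate their effects.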
What is the interpretation of the covariance of regression coefficients?
There are two "kinds" of regression coefficients: "True" regression coefficients (usually denoted $\beta$) that describe the underlying data-generating process of the data. These are fixed numbers, o
What is the interpretation of the covariance of regression coefficients? There are two "kinds" of regression coefficients: "True" regression coefficients (usually denoted $\beta$) that describe the underlying data-generating process of the data. These are fixed numbers, or "parameters." An example would be the speed of light $c$, which (we assume) is always the same everywhere in the accessible universe. Estimated regression coefficients (usually denoted denoted $b$ or $\hat \beta$) that are calculated from samples of the data. Samples are collections of random variables, so estimated regression coefficients are also random variables. An example would be an estimate for $c$ obtained in an experiment. Now think about what covariance means. Take any two random variables $X$ and $Y$. If $\left| \mathrm{Cov}\left(X,Y\right) \right|$ is high, then whenever you draw a large absolute value of $X$ you can also expect to draw a large absolute value of $Y$ in the same direction. Note that "high" here is relative to the amount of variation in $X$ and $Y$, as pointed out in the comments. The (estimated) covariance of two regression coefficients is the covariance of the estimates, $b$. If the covariance between estimated coefficients $b_1$ and $b_2$ is high, then in any sample where $b_1$ is high, you can also expect $b_2$ to be high. In a more Bayesian sense, $b_1$ contains information about $b_2$. Note again that "high" is relative. Here "$b_1$ is high" means that "$b_1$ is high relative to its standard error," and their covariance being "high" mean "high relative to the product of their standard errors." One way to smooth out these interpretive hiccups is to standardize each regression input to by dividing by its standard deviation (or two standard deviations in some cases). One user on this site described $\mathrm{Cov}\left(b_1,b_2\right)$ as "a bit of a fudge," but I don't entirely agree. 
For one thing, you could use this interpretation to come up with informative priors in Bayesian regression. As for what this is actually used for, Cliff AB's answer is a good summary.
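A quick simulation makes this interpretation concrete. The following Python sketch (illustrative only; the predictors, sample size, and noise level are arbitrary choices, not anything from the answer) builds two highly correlated predictors, re-estimates the coefficients over many redraws of the noise, and compares the simulated covariance of $b_1$ and $b_2$ with the theoretical $\sigma^2 (X'X)^{-1}$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Two strongly correlated predictors: their estimated coefficients
# will have a strongly negative sampling covariance.
x1 = rng.normal(size=n)
x2 = x1 + 0.1 * rng.normal(size=n)
X = np.column_stack([np.ones(n), x1, x2])
beta = np.array([1.0, 2.0, -1.0])  # "true" fixed parameters

# Redraw the noise many times and re-estimate b, to observe the
# covariance of the estimates directly.
bs = np.empty((2000, 3))
for i in range(2000):
    y = X @ beta + rng.normal(size=n)
    bs[i], *_ = np.linalg.lstsq(X, y, rcond=None)

sim_cov = np.cov(bs[:, 1], bs[:, 2])[0, 1]
# Theory: Cov(b) = sigma^2 (X'X)^{-1}, with sigma = 1 here.
theo_cov = np.linalg.inv(X.T @ X)[1, 2]
print(round(sim_cov, 3), round(theo_cov, 3))
```

Because $x_1$ and $x_2$ carry nearly the same information, a sample in which $b_1$ comes out high is one in which $b_2$ tends to come out low: the covariance is strongly negative, both in the simulation and in the formula.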
13,812
How to sample from $c^a d^{a-1} / \Gamma(a)$?
Rejection sampling will work exceptionally well when $c d \ge \exp(5)$ and is reasonable for $c d \ge \exp(2)$. To simplify the math a little, let $k = c d$, write $x = a$, and note that

$$f(x) \propto \frac{k^x}{\Gamma(x)} dx$$

for $x \ge 1$. Setting $x = u^{3/2}$ gives

$$f(u) \propto \frac{k^{u^{3/2}}}{\Gamma(u^{3/2})} u^{1/2} du$$

for $u \ge 1$. When $k \ge \exp(5)$, this distribution is extremely close to Normal (and gets closer as $k$ gets larger). Specifically, you can:

Find the mode of $f(u)$ numerically (using, e.g., Newton-Raphson).

Expand $\log{f(u)}$ to second order about its mode.

This yields the parameters of a closely approximating Normal distribution. To high accuracy, this approximating Normal dominates $f(u)$ except in the extreme tails. (When $k \lt \exp(5)$, you may need to scale the Normal pdf up a little bit to assure domination.) Having done this preliminary work for any given value of $k$, and having estimated a constant $M \gt 1$ (as described below), obtaining a random variate is a matter of:

Draw a value $u$ from the dominating Normal distribution $g(u)$.

If $u \lt 1$ or if a new uniform variate $X$ exceeds $f(u)/(M g(u))$, return to step 1.

Set $x = u^{3/2}$.

The expected number of evaluations of $f$ due to the discrepancies between $g$ and $f$ is only slightly greater than 1. (Some additional evaluations will occur due to rejections of variates less than $1$, but even when $k$ is as low as $2$ the frequency of such occurrences is small.)

This plot shows the logarithms of $g$ and $f$ as a function of $u$ for $k=\exp(5)$. Because the graphs are so close, we need to inspect their ratio to see what's going on: this displays the log ratio $\log(\exp(0.004)g(u)/f(u))$; the factor of $M = \exp(0.004)$ was included to assure the logarithm is positive throughout the main part of the distribution; that is, to assure $Mg(u) \ge f(u)$ except possibly in regions of negligible probability. 
By making $M$ sufficiently large you can guarantee that $M \cdot g$ dominates $f$ in all but the most extreme tails (which have practically no chance of being chosen in a simulation anyway). However, the larger $M$ is, the more frequently rejections will occur. As $k$ grows large, $M$ can be chosen very close to $1$, which incurs practically no penalty.

A similar approach works even for $k \gt \exp(2)$, but fairly large values of $M$ may be needed when $\exp(2) \lt k \lt \exp(5)$, because $f(u)$ is noticeably asymmetric. For instance, with $k = \exp(2)$, to get a reasonably accurate $g$ we need to set $M=\exp(1)$: the upper red curve is the graph of $\log(\exp(1)g(u))$ while the lower blue curve is the graph of $\log(f(u))$. Rejection sampling of $f$ relative to $\exp(1)g$ will cause about 2/3 of all trial draws to be rejected, tripling the effort: still not bad. The right tail ($u \gt 10$ or $x \gt 10^{3/2} \sim 30$) will be under-represented in the rejection sampling (because $\exp(1)g$ no longer dominates $f$ there), but that tail comprises less than $\exp(-20) \sim 10^{-9}$ of the total probability.

To summarize: after an initial effort to compute the mode and evaluate the quadratic term of the power series of $f(u)$ around the mode--an effort that requires a few tens of function evaluations at most--you can use rejection sampling at an expected cost of between 1 and 3 (or so) evaluations per variate. The cost multiplier rapidly drops to 1 as $k = c d$ increases beyond 5. Even when just one draw from $f$ is needed, this method is reasonable. It comes into its own when many independent draws are needed for the same value of $k$, for then the overhead of the initial calculations is amortized over many draws.

Addendum

@Cardinal has asked, quite reasonably, for support of some of the hand-waving analysis in the foregoing. In particular, why should the transformation $x = u^{3/2}$ make the distribution approximately Normal? 
In light of the theory of Box-Cox transformations, it is natural to seek some power transformation of the form $x = u^\alpha$ (for a constant $\alpha$, hopefully not too different from unity) that will make a distribution "more" Normal. Recall that all Normal distributions are simply characterized: the logarithms of their pdfs are purely quadratic, with zero linear term and no higher order terms. Therefore we can take any pdf and compare it to a Normal distribution by expanding its logarithm as a power series around its (highest) peak. We seek a value of $\alpha$ that makes (at least) the third power vanish, at least approximately: that is the most we can reasonably hope that a single free coefficient will accomplish. Often this works well.

But how to get a handle on this particular distribution? Upon effecting the power transformation, its pdf is

$$f(u) = \frac{k^{u^{\alpha}}}{\Gamma(u^{\alpha})} u^{\alpha-1}.$$

Take its logarithm and use Stirling's asymptotic expansion of $\log(\Gamma)$:

$$\log(f(u)) \approx \log(k) u^\alpha + (\alpha - 1)\log(u) - \alpha u^\alpha \log(u) + u^\alpha - \log(2 \pi u^\alpha)/2 + c u^{-\alpha}$$

(for small values of $c$, which is not constant). This works provided $\alpha$ is positive, which we will assume to be the case (for otherwise we cannot neglect the remainder of the expansion).

Compute its third derivative (which, when divided by $3!$, will be the coefficient of the third power of $u$ in the power series) and exploit the fact that at the peak, the first derivative must be zero. This simplifies the third derivative greatly, giving (approximately, because we are ignoring the derivative of $c$)

$$-\frac{1}{2} u^{-(3+\alpha)} \alpha \left(2 \alpha(2 \alpha-3) u^{2 \alpha} + (\alpha^2 - 5\alpha +6)u^\alpha + 12 c \alpha \right).$$

When $k$ is not too small, $u$ will indeed be large at the peak. 
Because $\alpha$ is positive, the dominant term in this expression is the $2\alpha$ power, which we can set to zero by making its coefficient vanish: $$2 \alpha-3 = 0.$$ That's why $\alpha = 3/2$ works so well: with this choice, the coefficient of the cubic term around the peak behaves like $u^{-3}$, which is close to $\exp(-2 k)$. Once $k$ exceeds 10 or so, you can practically forget about it, and it's reasonably small even for $k$ down to 2. The higher powers, from the fourth on, play less and less of a role as $k$ gets large, because their coefficients grow proportionately smaller, too.

Incidentally, the same calculations (based on the second derivative of $\log(f(u))$ at its peak) show the standard deviation of this Normal approximation is slightly less than $\frac{2}{3}\exp(k/6)$, with the error proportional to $\exp(-k/2)$.
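For readers who want to try the recipe, here is a rough Python sketch of the three steps (this is not whuber's code; the grid search standing in for Newton-Raphson, the finite-difference second derivative, and the safety factor $M = \exp(0.01)$ are my own simplifications), run at $k = \exp(5)$:

```python
import math
import numpy as np

k = math.exp(5)  # k = cd, at the threshold where the Normal approximation is excellent

def logf(u):
    # log of the transformed density f(u) ∝ k^(u^(3/2)) / Γ(u^(3/2)) · u^(1/2)
    x = u ** 1.5
    return x * math.log(k) - math.lgamma(x) + 0.5 * math.log(u)

# Step 1: locate the mode (a fine grid search stands in for Newton-Raphson).
grid = np.linspace(1.0, 60.0, 60001)
u0 = grid[np.argmax([logf(u) for u in grid])]

# Step 2: second-order expansion of log f about the mode gives the
# dominating Normal g; its variance is -1 / (log f)''(u0).
h = 1e-4
d2 = (logf(u0 + h) - 2.0 * logf(u0) + logf(u0 - h)) / h ** 2
sigma = math.sqrt(-1.0 / d2)

# Step 3: rejection sampling with a small safety factor M = exp(0.01).
logM = 0.01
rng = np.random.default_rng(1)
samples, tries = [], 0
while len(samples) < 5000:
    tries += 1
    u = rng.normal(u0, sigma)
    if u < 1.0:
        continue
    # g is scaled to match f at the mode, so the log acceptance ratio is
    # log f(u) - log f(u0) + ((u - u0)/sigma)^2 / 2 - log M.
    log_ratio = logf(u) - logf(u0) + 0.5 * ((u - u0) / sigma) ** 2 - logM
    if math.log(rng.uniform()) < log_ratio:
        samples.append(u ** 1.5)  # transform back to x

accept_rate = len(samples) / tries
print(round(u0, 2), round(sigma, 2), round(accept_rate, 3))
```

With $k$ this large the Normal approximation is so good that nearly every proposal is accepted, consistent with the claimed cost of only slightly more than one evaluation of $f$ per variate.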
13,813
How to sample from $c^a d^{a-1} / \Gamma(a)$?
I like @whuber's answer very much; it's likely to be very efficient and has a beautiful analysis. But it requires some deep insight with respect to this particular distribution. For situations where you don't have that insight (so for different distributions), I also like the following approach, which works for all distributions where the PDF is twice differentiable and that second derivative has finitely many roots. It requires quite a bit of work to set up, but then afterwards you have an engine that works for most distributions you can throw at it.

Basically, the idea is to use a piecewise linear upper bound to the PDF which you adapt as you are doing rejection sampling. At the same time you have a piecewise linear lower bound for the PDF which prevents you from having to evaluate the PDF too frequently. The upper and lower bounds are given by chords and tangents to the PDF graph. The initial division into intervals is such that on each interval, the PDF is either all concave or all convex; whenever you have to reject a point $(x, y)$ you subdivide that interval at $x$. (You can also do an extra subdivision at $x$ if you had to compute the PDF because the lower bound is really bad.) This makes the subdivisions occur especially frequently where the upper (and lower) bounds are bad, so you get a really good approximation of your PDF essentially for free. The details are a little tricky to get right, but I've tried to explain most of them in this series of blog posts - especially the last one.

Those posts don't discuss what to do if the PDF is unbounded either in domain or in values; I'd recommend the somewhat obvious solution of either doing a transformation that makes them finite (which would be hard to automate) or using a cutoff. I would choose the cutoff depending on the total number of points you expect to generate, say $N$, and choose the cutoff so that the removed part has less than $1 / (10 N)$ probability. 
(This is easy enough if you have a closed form for the CDF; otherwise it might also be tricky.)

This method is implemented in Maple as the default method for user-defined continuous distributions. (Full disclosure - I work for Maplesoft.)

I did an example run, generating $10^4$ points for $c = 2$, $d = 3$, specifying $[1, 100]$ as the initial range for the values. There were 23 rejections (in red), 51 points "on probation" which were at the time in between the lower bound and the actual PDF, and 9949 points which were accepted after checking only linear inequalities. That's 74 evaluations of the PDF in total, or about one PDF evaluation per 135 points. The ratio should get better as you generate more points, since the approximation gets better and better (and conversely, if you generate only few points, the ratio is worse).
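To illustrate just the chord/tangent idea on a single concave piece, here is a toy Python sketch (not the Maple implementation; the Gaussian-shaped density and the interval are arbitrary stand-ins). It verifies the bound property and counts how often the "squeeze" settles the accept/reject decision without evaluating the PDF:

```python
import numpy as np

# A Gaussian-shaped (unnormalized) density, concave on (-1, 1).
f = lambda x: np.exp(-0.5 * x ** 2)
df = lambda x: -x * f(x)

a, b, xm = -1.0, 1.0, 0.3  # one "all concave" interval, tangent point xm

def chord(x):    # chord between the endpoints: a lower bound on a concave piece
    return f(a) + (f(b) - f(a)) * (x - a) / (b - a)

def tangent(x):  # tangent at xm: an upper bound on a concave piece
    return f(xm) + df(xm) * (x - xm)

xs = np.linspace(a, b, 201)
assert np.all(chord(xs) <= f(xs) + 1e-12)    # lower bound holds
assert np.all(f(xs) <= tangent(xs) + 1e-12)  # upper bound holds

# The "squeeze": for candidate heights under the upper bound, many
# decisions are settled by the two linear bounds alone, and f itself
# is evaluated only when the point falls in the gap between them.
rng = np.random.default_rng(2)
decided_early = evals = 0
for _ in range(10000):
    x = rng.uniform(a, b)
    y = rng.uniform(0.0, tangent(x))
    if y <= chord(x):
        decided_early += 1   # below the lower bound: accept, f untouched
    else:
        evals += 1           # only now evaluate the true density
        accepted = y <= f(x)

print(decided_early, evals)
```

Even this crude one-interval squeeze settles more than half of the decisions without touching $f$; the adaptive subdivision described above is what drives the evaluation rate down to roughly one per hundreds of points.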
13,814
How to sample from $c^a d^{a-1} / \Gamma(a)$?
You could do it by numerically executing the inversion method, which says that if you plug uniform(0,1) random variables into the inverse CDF, you get a draw from the distribution. I've included some R code below that does this, and from the few checks I've done, it is working well, but it is a bit sloppy and I'm sure you could optimize it.

If you're not familiar with R, lgamma() is the log of the gamma function; integrate() calculates a definite 1-D integral; uniroot() calculates a root of a function using 1-D bisection.

# density. using the log-gamma gives a more numerically stable return for
# the subsequent numerical integration (will not work without this trick)
f = function(x,c,d) exp( x*log(c) + (x-1)*log(d) - lgamma(x) )

# brute force calculation of the CDF, calculating the normalizing constant numerically
F = function(x,c,d)
{
    g = function(x) f(x,c,d)
    return( integrate(g,1,x)$val/integrate(g,1,Inf)$val )
}

# Using bisection to find where the CDF equals p, to give the inverse CDF. This works
# since the density given in the problem corresponds to a continuous CDF.
F_1 = function(p,c,d)
{
    Q = function(x) F(x,c,d)-p
    return( uniroot(Q, c(1+1e-10, 1e4))$root )
}

# plug uniform(0,1)'s into the inverse CDF. Testing for c=3, d=4.
G = function(x) F_1(x,3,4)
z = sapply(runif(1000), G)

# simulated mean
mean(z)
[1] 13.10915

# exact mean
g = function(x) f(x,3,4)
nc = integrate(g,1,Inf)$val
h = function(x) f(x,3,4)*x/nc
integrate(h,1,Inf)$val
[1] 13.00002

# simulated second moment
mean(z^2)
[1] 183.0266

# exact second moment
g = function(x) f(x,3,4)
nc = integrate(g,1,Inf)$val
h = function(x) f(x,3,4)*(x^2)/nc
integrate(h,1,Inf)$val
[1] 181.0003

# estimated density from the sample
plot(density(z))

# true density
s = seq(1,25,length=1000)
plot(s, f(s,3,4), type="l", lwd=3)

The main arbitrary thing I do here is assuming that $(1,10000)$ is a sufficient bracket for the bisection - I was lazy about this and there might be a more efficient way to choose this bracket. 
For very large values, the numerical calculation of the CDF (say, $> 100000$) fails, so the bracket must be below this. The CDF is effectively equal to 1 at those points (unless $c, d$ are very large), so something could probably be included that would prevent miscalculation of the CDF for very large input values.

Edit: When $cd$ is very large, a numerical problem occurs with this method. As whuber points out in the comments, once this has occurred, the distribution is essentially degenerate at its mode, making it a trivial sampling problem.
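The same numerical-inversion idea can also be written without per-draw root-finding, by tabulating the CDF once and interpolating its inverse. Here is a hedged Python port of the approach (the grid bounds and resolution are arbitrary choices of mine); for $c=3$, $d=4$ it reproduces the exact mean of about 13.0 reported above:

```python
import math
import numpy as np

c, d = 3.0, 4.0

def logpdf(x):
    # log of c^x * d^(x-1) / Gamma(x), using lgamma for numerical
    # stability (the same trick as in the R code above)
    return x * math.log(c) + (x - 1.0) * math.log(d) - math.lgamma(x)

# Tabulate the unnormalized density, accumulate a trapezoidal CDF,
# then invert the CDF by linear interpolation.
xs = np.linspace(1.0, 200.0, 20001)
pdf = np.exp(np.array([logpdf(x) for x in xs]))
masses = 0.5 * (pdf[1:] + pdf[:-1]) * np.diff(xs)
cdf = np.concatenate([[0.0], np.cumsum(masses)])
cdf /= cdf[-1]

rng = np.random.default_rng(3)
z = np.interp(rng.uniform(size=10000), cdf, xs)  # inverse-CDF draws

# Exact mean by the same quadrature; the answer reports ~13.0 for c=3, d=4.
mids = 0.5 * (xs[1:] + xs[:-1])
exact_mean = (masses * mids).sum() / masses.sum()
print(round(exact_mean, 2), round(z.mean(), 2))
```

Tabulating once trades the per-draw integrate/uniroot cost for a single upfront pass, which pays off when many draws are needed for the same $(c, d)$.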
13,815
Avoid overfitting in regression: alternatives to regularization
Two important points that are not directly related to your question:

First, even if the goal is accuracy rather than interpretation, regularization is still necessary in many cases, since it helps ensure "high accuracy" on the real testing / production data set, not just on the data used for modeling.

Second, if there are a billion rows and a million columns, it is possible that no regularization is needed. This is because the data are huge and many computational models have "limited power," i.e., it is almost impossible to overfit. This is why some deep neural networks have billions of parameters.

Now, about your question. As mentioned by Ben and Andrey, there are some options as alternatives to regularization. I would like to add more examples.

Use a simpler model. For example, reduce the number of hidden units in a neural network, use a lower-order polynomial kernel in an SVM, reduce the number of Gaussians in a mixture of Gaussians, etc.

Stop early in the optimization. For example, reduce the number of epochs in neural network training, or the number of iterations in optimization (CG, BFGS, etc.).

Average over many models. For example, random forest, etc.
13,816
Avoid overfitting in regression: alternatives to regularization
Two alternatives to regularization:

Have many, many observations

Use a simpler model

Geoff Hinton (co-inventor of backpropagation) once told a story of engineers who told him (paraphrasing heavily), "Geoff, we don't need dropout in our deep nets because we have so much data." And his response was, "Well, then you should build even deeper nets, until you are overfitting, and then use dropout." Good advice aside, you can apparently avoid regularization even with deep nets, so long as there are enough data.

With a fixed number of observations, you can also opt for a simpler model. You probably don't need regularization to estimate an intercept, a slope, and an error variance in a simple linear regression.
13,817
Avoid overfitting in regression: alternatives to regularization
Some additional possibilities to avoid overfitting:

Dimensionality reduction. You can use an algorithm such as principal components analysis (PCA) to obtain a lower dimensional feature subspace. The idea of PCA is that the variation of your $m$ dimensional feature space may be approximated well by an $l \ll m$ dimensional subspace.

Feature selection (also dimensionality reduction). You could perform a round of feature selection (e.g. using LASSO) to obtain a lower dimensional feature space. Something like feature selection using LASSO can be useful if some large but unknown subset of features is irrelevant.

Use algorithms less prone to overfitting, such as random forest. (Depending on the settings, number of features, etc., these can be more computationally expensive than ordinary least squares.) Some of the other answers have also mentioned the advantages of boosting and bagging techniques/algorithms.

Bayesian methods. Adding a prior on the coefficient vector can reduce overfitting. This is conceptually related to regularization: e.g. ridge regression is a special case of maximum a posteriori estimation.
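A minimal PCA sketch (Python, using the SVD directly; the dimensions and noise level are made up for illustration) shows the first idea: data that truly live near an $l$-dimensional subspace keep almost all of their variance after projection onto the top $l$ components:

```python
import numpy as np

rng = np.random.default_rng(6)

# n observations of m = 20 features that, by construction, lie near an
# l = 3 dimensional subspace, plus a little noise.
n, m, l = 200, 20, 3
X = rng.normal(size=(n, l)) @ rng.normal(size=(l, m)) + 0.05 * rng.normal(size=(n, m))

# PCA via the SVD of the centered data matrix.
Xc = X - X.mean(axis=0)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

explained = (s[:l] ** 2).sum() / (s ** 2).sum()  # variance captured by top l PCs
X_low = Xc @ Vt[:l].T                            # the reduced l-dimensional features
print(X_low.shape, round(explained, 4))
```

Fitting a downstream regression on the 3 reduced columns instead of all 20 features shrinks the hypothesis space, which is the sense in which dimensionality reduction combats overfitting.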
Avoid overfitting in regression: alternatives to regularization
Two thoughts:

- I second the "use a simpler model" strategy proposed by Ben Ogorek. I work on really sparse linear classification models with small integer coefficients (e.g. at most 5 variables with integer coefficients between -5 and 5). The models generalize well in terms of accuracy and trickier performance metrics (e.g. calibration). The method in this paper will scale to large sample sizes for logistic regression, and can be extended to fit other linear classifiers with convex loss functions. It will not handle cases with lots of features (unless $n/d$ is large enough, in which case the data are separable and the classification problem becomes easy).
- If you can specify additional constraints for your model (e.g. monotonicity constraints, side information), then this can also help with generalization by reducing the hypothesis space (see e.g. this paper). This needs to be done with care (e.g. you probably want to compare your model to a baseline without constraints, and design your training process in a way that ensures you aren't cherry-picking constraints).
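As a very crude stand-in for the flavour of such models (this is NOT the method of the paper mentioned above, just an illustrative hack): fit an L1-penalized logistic regression, keep the five largest coefficients, and round them to integers in [-5, 5]. Assumes scikit-learn; all data and settings are hypothetical:

```python
# Toy sparse integer-coefficient classifier via rounding an L1 fit.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(5)
X = rng.normal(size=(500, 20))
logits = 3 * X[:, 0] - 2 * X[:, 1] + X[:, 2]           # only 3 relevant features
y = (logits + rng.logistic(size=500) > 0).astype(int)  # logistic noise -> true model is logistic

# L1-penalized fit, then keep the 5 largest coefficients, rounded to integers.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(X, y)
coef = clf.coef_.ravel()
keep = np.argsort(-np.abs(coef))[:5]                   # at most 5 variables
w = np.zeros(20, dtype=int)
w[keep] = np.clip(np.round(coef[keep]), -5, 5).astype(int)

acc = ((X @ w > 0).astype(int) == y).mean()            # score with the integer model
print(w[keep], acc)
```

Note that naive rounding can badly degrade a model in general; the paper's approach optimizes over the integer coefficients directly.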
Avoid overfitting in regression: alternatives to regularization
If you use a model with a solver where you can define the number of iterations/epochs, you can track validation error and apply early stopping: stop the algorithm when validation error starts increasing.
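A framework-agnostic sketch of that loop; the `step` and `val_error` hooks are hypothetical placeholders for whatever solver you use:

```python
# Early stopping: halt once validation error stops improving for `patience` checks.
def train_with_early_stopping(step, val_error, max_iter=1000, patience=5):
    best, bad = float("inf"), 0
    for _ in range(max_iter):
        step()                          # one solver iteration/epoch
        err = val_error()               # evaluate on held-out data
        if err < best:
            best, bad = err, 0          # still improving: remember the best
        else:
            bad += 1
            if bad >= patience:         # no improvement for `patience` checks
                break
    return best

# Toy check: validation error falls, then rises (overfitting begins at step 20).
errs = iter(abs(i - 20) for i in range(1000))
best = train_with_early_stopping(step=lambda: None, val_error=lambda: next(errs))
print(best)  # 0 -- training stopped shortly after the minimum
```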
Avoid overfitting in regression: alternatives to regularization
What is regularization, really?

Perhaps you are mistaking L1/L2 regularization (a.k.a. lasso/ridge regression, Tikhonov regularization...), the most ubiquitous type, for the only type of regularization 🤔 Regularization is actually anything you can do to a learning algorithm that prevents overfitting [Wikipedia]. Dropout, batch normalization, early stopping, model ensembling, feature selection, and many of the techniques others have pointed out here are all just different regularization techniques!

Bias-variance tradeoff

Perhaps thinking about this issue in terms of the bias-variance tradeoff, a fundamental machine learning concept, could greatly clarify your thoughts. If our goal is prediction accuracy, we want to reduce the expected error of a supervised learner $\hat{f}$, which can be decomposed into bias, variance, and irreducible error:

$$ {\displaystyle \operatorname {E} _{D}{\Big [}{\big (}y-{\hat {f}}(x;D){\big )}^{2}{\Big ]}={\Big (}\operatorname {Bias} _{D}{\big [}{\hat {f}}(x;D){\big ]}{\Big )}^{2}+\operatorname {Var} _{D}{\big [}{\hat {f}}(x;D){\big ]}+\sigma ^{2}} $$

Regularization penalizes complex models in order to reduce the variance of the estimator (by more than the bias is increased), ultimately reducing the expected error. Philosophically, this is akin to Occam's razor: we introduce an inductive bias for simplicity on the assumption that "simpler is better".

We usually want to regularize

From a Bayesian viewpoint, we can also show that including L1/L2 regularization means placing a prior and obtaining a MAP estimate instead of an MLE estimate (see here). Overfitting is simply when your model is unable to generalize well to your actual data of interest (the "test" or "production" dataset), usually because it has fit your training data too well. We always want to prevent this with some form of regularization.
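The ridge/MAP correspondence mentioned above can be checked numerically (toy data; numpy and scipy assumed). Minimizing the negative log posterior under a Gaussian prior lands on exactly the closed-form ridge solution with $\lambda = \sigma^2/\tau^2$:

```python
# Ridge regression coefficients equal the MAP estimate under a Gaussian prior.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.5, size=100)

sigma2, tau2 = 0.25, 0.125            # noise variance, prior variance
lam = sigma2 / tau2                   # ridge penalty implied by the prior (= 2.0)

# Ridge: closed-form argmin of ||y - Xw||^2 + lam * ||w||^2
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)

# MAP: numerically minimize the negative log posterior
# -log N(y | Xw, sigma2 I) - log N(w | 0, tau2 I)  (up to constants).
neg_log_post = lambda w: (np.sum((y - X @ w) ** 2) / (2 * sigma2)
                          + np.sum(w ** 2) / (2 * tau2))
w_map = minimize(neg_log_post, np.zeros(3)).x

print(w_ridge, w_map)   # the two estimates coincide
```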
Avoid overfitting in regression: alternatives to regularization
Other alternatives to regularization:

- Using ensemble methods.
- Oversampling and data augmentation.
- Combining variables to obtain new ones, for example with PCA.
- Adding random noise at every step of the optimization.
- Smoothing the data.
- Dropout: typically used with neural networks, but it can also be applied to the covariates.
- Standardization, which often improves results.
- Using a Bayesian prior, which is equivalent to regularization.
- Replacing categorical fixed effects (with many levels) with random effects, because this reduces the number of parameters.
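One item from the list, sketched: for linear least squares, adding random input noise at every optimization step is known to act like ridge-style shrinkage. A toy numpy illustration (all settings here are arbitrary assumptions):

```python
# SGD on squared error; the noisy-input run ends up with shrunken coefficients.
import numpy as np

rng = np.random.default_rng(6)
X = rng.normal(size=(200, 5))
y = X[:, 0] + rng.normal(scale=0.5, size=200)        # only feature 0 matters

def sgd(noise_sd, steps=5000, lr=0.01):
    w = np.zeros(5)
    for _ in range(steps):
        i = rng.integers(200)
        xi = X[i] + rng.normal(scale=noise_sd, size=5)  # perturb the input
        w -= lr * (xi @ w - y[i]) * xi                  # squared-error SGD step
    return w

w_noisy = sgd(noise_sd=1.0)   # behaves like ridge: coefficients shrunk
w_plain = sgd(noise_sd=0.0)   # plain SGD: approaches the OLS solution
print(np.linalg.norm(w_noisy), np.linalg.norm(w_plain))
```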
Bootstrap vs Monte Carlo, error estimation
As far as I understand your question, the difference between the "Monte Carlo" approach and the bootstrap approach is essentially the difference between parametric and non-parametric statistics.

In the parametric framework, one knows exactly how the data $x_1,\ldots,x_N$ are generated: given the parameters of the model ($A$, $\sigma_A$, etc. in your description), you can produce new realisations of such datasets, and from them new realisations of your statistical procedure (or "output"). It is thus possible to describe entirely and exactly the probability distribution of the output $Z$, either by mathematical derivations or by a Monte Carlo experiment returning a sample of arbitrary size from this distribution.

In the non-parametric framework, one does not wish to make such assumptions on the data and thus uses the data, and only the data, to estimate its distribution, $F$. The bootstrap is such an approach, in that the unknown distribution is estimated by the empirical distribution $\hat F$ made by setting a probability weight of $1/n$ on each point of the sample (in the simplest case, when the data are iid). Using this empirical distribution $\hat F$ as a replacement for the true distribution $F$, one can derive by Monte Carlo simulations the estimated distribution of the output $Z$.

Thus, the main difference between the two approaches is whether or not one makes this parametric assumption about the distribution of the data.
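A toy numpy sketch of the contrast, for the sampling distribution of a mean (the model, parameter values, and statistic below are illustrative assumptions, not from the question):

```python
# Parametric Monte Carlo vs non-parametric bootstrap for the distribution of x-bar.
import numpy as np

rng = np.random.default_rng(0)
N, B = 50, 5000
data = rng.normal(loc=3.0, scale=2.0, size=N)    # the one observed sample

# Parametric "Monte Carlo": assume the generating model N(A, sigma_A^2)
# and simulate B fresh datasets from it.
A, sigma_A = 3.0, 2.0
mc_means = rng.normal(A, sigma_A, size=(B, N)).mean(axis=1)

# Non-parametric bootstrap: resample the observed data with replacement,
# i.e. draw datasets from the empirical distribution F-hat.
boot_means = np.array([rng.choice(data, size=N, replace=True).mean()
                       for _ in range(B)])

# Both approximate the sampling distribution of the mean, sd = sigma_A/sqrt(N).
print(mc_means.std(), boot_means.std())
```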
Bootstrap vs Monte Carlo, error estimation
The random change in your Monte Carlo model is represented by a bell curve, and the computation probably assumes normally distributed "error" or "change". At least, your computer needs some assumption about the distribution from which to draw the "change". Bootstrapping does not necessarily make such assumptions. It takes observations as observations, and if their error is asymmetrically distributed, then it goes into the model that way.

Bootstrapping draws from the observations and thus needs a number of true observations. If you read in a book that C averages 5 with a standard deviation of 1, then you can set up a Monte Carlo model even if you have no observations to draw from. If your observations are scarce (think: astronomy), you may set up a Monte Carlo model with 6 observations and some assumptions about their distribution, but you will not bootstrap from 6 observations.

Mixed models, with some input drawn from observed data and some from simulated (say, hypothetical) data, are possible.

Edit: In the following discussion in the comments, the original poster found the following helpful: The "original program" does not care whether it gets a value that you computed from a mean and a deviation, or one that is a true realisation of a mean and a deviation in a natural process.
Bootstrap vs Monte Carlo, error estimation
If the function relating the output $Z$ to the inputs is reasonably linear (i.e. within the variation range of the inputs), the variance of $Z$ is a combination of the variances and covariances of the inputs. In that case the details of the distributions do not matter too much, so both methods should return similar results. See Supplement 1 to the GUM.
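A small numpy check of this claim, using a mildly nonlinear $f$ (all numbers hypothetical): the first-order variance from the sensitivity coefficients agrees closely with a Monte Carlo estimate.

```python
# First-order (delta-method) variance propagation vs Monte Carlo.
import numpy as np

rng = np.random.default_rng(2)
mu = np.array([10.0, 5.0])                 # input means
cov = np.array([[0.04, 0.01],
                [0.01, 0.09]])             # input variances/covariances

f = lambda a, b: 2 * a + 3 * b + 0.01 * a * b   # nearly linear over this range

# First-order propagation: var(Z) ~ g' Sigma g, with g the gradient of f at the
# mean (the "sensitivity coefficients" in GUM terminology).
g = np.array([2 + 0.01 * mu[1], 3 + 0.01 * mu[0]])
var_linear = g @ cov @ g

# Monte Carlo: push input samples through f and take the empirical variance.
samples = rng.multivariate_normal(mu, cov, size=200_000)
var_mc = f(samples[:, 0], samples[:, 1]).var()

print(var_linear, var_mc)   # agree closely because f is nearly linear here
```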
Bootstrap vs Monte Carlo, error estimation
Bootstrap means letting the data speak for themselves. With the Monte Carlo method, you sample many random draws from the imposed CDF (normal, gamma, beta...) via the uniform distribution and create an empirical PDF (provided that the CDF is continuous and differentiable). An interesting explanation of the whole Monte Carlo process is reported in: Briggs A, Sculpher M, Claxton K. Decision modelling for health economic evaluation. Oxford: Oxford University Press, 2006: 93-95.
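The uniform-draw mechanism described here is inverse-transform sampling; a short sketch with scipy (the gamma distribution and its parameters are arbitrary choices):

```python
# Inverse-transform sampling: uniform draws mapped through the inverse CDF.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
u = rng.uniform(size=100_000)               # U ~ Uniform(0, 1)
x = stats.gamma.ppf(u, a=2.0, scale=1.5)    # quantile function maps U to the target law

# x is now an i.i.d. sample from Gamma(shape=2, scale=1.5), with mean 2 * 1.5 = 3.
print(x.mean())
```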
RNNs: When to apply BPTT and/or update weights?
I'll assume we're talking about recurrent neural nets (RNNs) that produce an output at every time step (if output is only available at the end of the sequence, it only makes sense to run backprop at the end). RNNs in this setting are often trained using truncated backpropagation through time (BPTT), operating sequentially on 'chunks' of a sequence. The procedure looks like this:

1. Forward pass: step through the next $k_1$ time steps, computing the input, hidden, and output states.
2. Compute the loss, summed over the previous time steps (see below).
3. Backward pass: compute the gradient of the loss w.r.t. all parameters, accumulating over the previous $k_2$ time steps (this requires having stored all activations for these time steps). Clip gradients to avoid the exploding gradient problem (happens rarely).
4. Update parameters (this occurs once per chunk, not incrementally at each time step).
5. If processing multiple chunks of a longer sequence, store the hidden state at the last time step (it will be used to initialize the hidden state for the beginning of the next chunk). If we've reached the end of the sequence, reset the memory/hidden state and move to the beginning of the next sequence (or the beginning of the same sequence, if there's only one).
6. Repeat from step 1.

How the loss is summed depends on $k_1$ and $k_2$. For example, when $k_1 = k_2$, the loss is summed over the past $k_1 = k_2$ time steps, but the procedure is different when $k_2 > k_1$ (see Williams and Peng 1990). Gradient computation and updates are performed every $k_1$ time steps because it's computationally cheaper than updating at every time step. Updating multiple times per sequence (i.e. setting $k_1$ less than the sequence length) can accelerate training because weight updates are more frequent. Backpropagation is performed for only $k_2$ time steps because it's computationally cheaper than propagating back to the beginning of the sequence (which would require storing and repeatedly processing all time steps).

Gradients computed in this manner are an approximation to the 'true' gradient computed over all time steps. But, because of the vanishing gradient problem, gradients will tend to approach zero after some number of time steps; propagating beyond this limit wouldn't give any benefit. Setting $k_2$ too short can limit the temporal scale over which the network can learn. However, the network's memory isn't limited to $k_2$ time steps, because the hidden units can store information beyond this period (e.g. see Mikolov 2012 and this post).

Besides computational considerations, the proper settings for $k_1$ and $k_2$ depend on the statistics of the data (e.g. the temporal scale of the structures that are relevant for producing good outputs). They probably also depend on the details of the network. For example, there are a number of architectures, initialization tricks, etc. designed to mitigate the decaying gradient problem.

Your option 1 ('frame-wise backprop') corresponds to setting $k_1$ to $1$ and $k_2$ to the number of time steps from the beginning of the sentence to the current point. Option 2 ('sentence-wise backprop') corresponds to setting both $k_1$ and $k_2$ to the sentence length. Both are valid approaches (with computational/performance considerations as above; option 1 would be quite computationally intensive for longer sequences). Neither of these approaches would be called 'truncated', because backpropagation occurs over the entire sequence. Other settings of $k_1$ and $k_2$ are possible; I'll list some examples below.

References describing truncated BPTT (procedure, motivation, practical issues):

- Sutskever (2013). Training recurrent neural networks.
- Mikolov (2012). Statistical Language Models Based on Neural Networks. Using vanilla RNNs to process text data as a sequence of words, he recommends setting $k_1$ to 10-20 words and $k_2$ to 5 words. Performing multiple updates per sequence (i.e. $k_1$ less than the sequence length) works better than updating at the end of the sequence. Performing updates once per chunk is better than doing so incrementally (which can be unstable).
- Williams and Peng (1990). An efficient gradient-based algorithm for on-line training of recurrent network trajectories. The original (?) proposal of the algorithm. They discuss the choice of $k_1$ and $k_2$ (which they call $h'$ and $h$), and only consider $k_2 \ge k_1$. Note: they use the phrase "BPTT(h; h')" or 'the improved algorithm' to refer to what the other references call 'truncated BPTT', and use the phrase 'truncated BPTT' to mean the special case where $k_1 = 1$.

Other examples using truncated BPTT:

- Karpathy (2015). char-rnn. Description and code. Vanilla RNN processing text documents one character at a time, trained to predict the next character. $k_1 = k_2 = 25$ characters. The network was used to generate new text in the style of the training document, with amusing results.
- Graves (2014). Generating sequences with recurrent neural networks. See the section about generating simulated Wikipedia articles. LSTM network processing text data as a sequence of bytes, trained to predict the next byte. $k_1 = k_2 = 100$ bytes; LSTM memory reset every $10,000$ bytes.
- Sak et al. (2014). Long short-term memory based recurrent neural network architectures for large vocabulary speech recognition. Modified LSTM networks, processing sequences of acoustic features. $k_1 = k_2 = 20$.
- Ollivier et al. (2015). Training recurrent networks online without backtracking. The point of this paper was to propose a different learning algorithm, but they did compare it to truncated BPTT. They used vanilla RNNs to predict sequences of symbols; I only mention it here to say that they used $k_1 = k_2 = 15$.
- Hochreiter and Schmidhuber (1997). Long short-term memory. They describe a modified procedure for LSTMs
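The chunked procedure with $k_1 = k_2$ can be illustrated end to end on a deliberately tiny scalar "RNN" $h_t = w\,h_{t-1} + x_t$ with squared-error loss (pure Python; everything here is a toy stand-in for a real network and autograd):

```python
# Toy truncated BPTT with k1 = k2 = k on a one-parameter linear recurrence.
def run_truncated_bptt(xs, ys, k, w=0.1, lr=0.02, epochs=500):
    for _ in range(epochs):
        h = 0.0
        for start in range(0, len(xs), k):       # one chunk = k time steps
            # Forward pass over the chunk; hs[0] is the carried-over state.
            hs = [h]
            for x in xs[start:start + k]:
                hs.append(w * hs[-1] + x)
            # Backward pass over the same k steps only; the gradient is NOT
            # propagated into hs[0] -- that is the truncation.
            grad, dh = 0.0, 0.0
            for t in range(len(hs) - 1, 0, -1):
                dh += 2.0 * (hs[t] - ys[start + t - 1])  # dL/dh_t from the loss at step t
                grad += dh * hs[t - 1]                   # via h_t = w * h_{t-1} + x_t
                dh *= w                                  # pass gradient back to h_{t-1}
            w -= lr * grad                               # one parameter update per chunk
            h = hs[-1]                                   # carry hidden state ("detached")
    return w

# Data generated by a true w = 0.5; training should recover it approximately.
xs = [1.0, -0.5, 0.3, 0.8, -0.2, 0.6, 0.1, -0.4]
ys, h = [], 0.0
for x in xs:
    h = 0.5 * h + x
    ys.append(h)
w_hat = run_truncated_bptt(xs, ys, k=4)
print(w_hat)  # converges near 0.5
```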
RNNs: When to apply BPTT and/or update weights?
I'll assume we're talking about recurrent neural nets (RNNs) that produce an output at every time step (if output is only available at the end of the sequence, it only makes sense to run backprop at t
RNNs: When to apply BPTT and/or update weights? I'll assume we're talking about recurrent neural nets (RNNs) that produce an output at every time step (if output is only available at the end of the sequence, it only makes sense to run backprop at the end). RNNs in this setting are often trained using truncated backpropagation through time (BPTT), operating sequentially on 'chunks' of a sequence. The procedure looks like this: Forward pass: Step through the next $k_1$ time steps, computing the input, hidden, and output states. Compute the loss, summed over the previous time steps (see below). Backward pass: Compute the gradient of the loss w.r.t. all parameters, accumulating over the previous $k_2$ time steps (this requires having stored all activations for these time steps). Clip gradients to avoid the exploding gradient problem (happens rarely). Update parameters (this occurs once per chunk, not incrementally at each time step). If processing multiple chunks of a longer sequence, store the hidden state at the last time step (will be used to initialize hidden state for beginning of next chunk). If we've reached the end of the sequence, reset the memory/hidden state and move to the beginning of the next sequence (or beginning of the same sequence, if there's only one). Repeat from step 1. How the loss is summed depends on $k_1$ and $k_2$. For example, when $k_1 = k_2$, the loss is summed over the past $k_1 = k_2$ time steps, but the procedure is different when $k_2 > k_1$ (see Williams and Peng 1990). Gradient computation and updates are performed every $k_1$ time steps because it's computationally cheaper than updating at every time step. Updating multiple times per sequence (i.e. setting $k_1$ less than the sequence length) can accelerate training because weight updates are more frequent. 
Backpropagation is performed for only $k_2$ time steps because it's computationally cheaper than propagating back to the beginning of the sequence (which would require storing and repeatedly processing all time steps). Gradients computed in this manner are an approximation to the 'true' gradient computed over all time steps. But, because of the vanishing gradient problem, gradients will tend to approach zero after some number of time steps; propagating beyond this limit wouldn't give any benefit. Setting $k_2$ too short can limit the temporal scale over which the network can learn. However, the network's memory isn't limited to $k_2$ time steps because the hidden units can store information beyond this period (e.g. see Mikolov 2012 and this post). Besides computational considerations, the proper settings for $k_1$ and $k_2$ depend on the statistics of the data (e.g. the temporal scale of the structures that are relevant for producing good outputs). They probably also depend on the details of the network. For example, there are a number of architectures, initialization tricks, etc. designed to mitigate the decaying gradient problem. Your option 1 ('frame-wise backprop') corresponds to setting $k_1$ to $1$ and $k_2$ to the number of time steps from the beginning of the sentence to the current point. Option 2 ('sentence-wise backprop') corresponds to setting both $k_1$ and $k_2$ to the sentence length. Both are valid approaches (with computational/performance considerations as above; #1 would be quite computationally intensive for longer sequences). Neither of these approaches would be called 'truncated' because backpropagation occurs over the entire sequence. Other settings of $k_1$ and $k_2$ are possible; I'll list some examples below. References describing truncated BPTT (procedure, motivation, practical issues): Sutskever (2013). Training recurrent neural networks. Mikolov (2012). Statistical Language Models Based on Neural Networks. 
Using vanilla RNNs to process text data as a sequence of words, he recommends setting $k_1$ to 10-20 words and $k_2$ to 5 words Performing multiple updates per sequence (i.e. $k_1$ less than the sequence length) works better than updating at the end of the sequence Performing updates once per chunk is better than incrementally (which can be unstable) Williams and Peng (1990). An efficient gradient-based algorithm for on-line training of recurrent network trajectories. Original (?) proposal of the algorithm They discuss the choice of $k_1$ and $k_2$ (which they call $h'$ and $h$). They only consider $k_2 \ge k_1$. Note: They use the phrase "BPTT(h; h')" or 'the improved algorithm' to refer to what the other references call 'truncated BPTT'. They use the phrase 'truncated BPTT' to mean the special case where $k_1 = 1$. Other examples using truncated BPTT: (Karpathy 2015). char-rnn. Description and code Vanilla RNN processing text documents one character at a time. Trained to predict the next character. $k_1 = k_2 = 25$ characters. Network used to generate new text in the style of the training document, with amusing results. Graves (2014). Generating sequences with recurrent neural networks. See section about generating simulated Wikipedia articles. LSTM network processing text data as sequence of bytes. Trained to predict the next byte. $k_1 = k_2 = 100$ bytes. LSTM memory reset every $10,000$ bytes. Sak et al. (2014). Long short term memory based recurrent neural network architectures for large vocabulary speech recognition. Modified LSTM networks, processing sequences of acoustic features. $k_1 = k_2 = 20$. Ollivier et al. (2015). Training recurrent networks online without backtracking. Point of this paper was to propose a different learning algorithm, but they did compare it to truncated BPTT. Used vanilla RNNs to predict sequences of symbols. Only mentioning it here to to say that they used $k_1 = k_2 = 15$. Hochreiter and Schmidhuber (1997). 
Long short-term memory. They describe a modified procedure for LSTMs.
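To make the roles of $k_1$ and $k_2$ concrete, here is a small Python sketch (my own illustration, not taken from any of the references above) of the update schedule that truncated BPTT produces: a parameter update every $k_1$ steps, each backward pass reaching at most $k_2$ steps into the past.

```python
def truncated_bptt_schedule(T, k1, k2):
    """For a sequence of length T, return (update_step, window_start) pairs:
    a parameter update runs every k1 time steps (and at the end of the
    sequence), and each backward pass covers at most the k2 most recent steps."""
    schedule = []
    for t in range(1, T + 1):
        if t % k1 == 0 or t == T:
            schedule.append((t, max(0, t - k2)))
    return schedule

# Option 2 ('sentence-wise'): one update, backprop over the whole sequence
print(truncated_bptt_schedule(T=6, k1=6, k2=6))   # [(6, 0)]
# Truncated BPTT with k1=2, k2=3: updates at t=2,4,6, each looking back 3 steps
print(truncated_bptt_schedule(T=6, k1=2, k2=3))   # [(2, 0), (4, 1), (6, 3)]
```

Setting k1=1 and k2=T reproduces option 1 ('frame-wise'): an update at every step, each propagating back to the start of the sequence.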
RNNs: When to apply BPTT and/or update weights?
13,827
How do the number of imputations & the maximum iterations affect accuracy in multiple imputation?
Let's just go through the parameters one by one: data doesn't require explanation. m is the number of imputations; generally speaking, the more the better. Originally (following Rubin, 1987) 5 was considered to be enough (hence the default). So from an accuracy point of view, 5 may be sufficient. However, this was based on an efficiency argument only. In order to achieve better estimates of standard errors, more imputations are needed. These days there is a rule of thumb to use whatever the average percentage rate of missingness is - so if there is 30% missing data on average in a dataset, use 30 imputations - see Bodner (2008) and White et al (2011) for further details. method specifies which imputation method is to be used - this is only necessary when the default method is to be overridden. For example, continuous data are imputed by predictive mean matching by default, and this usually works very well, but Bayesian linear regression, and several others including a multilevel model for nested/clustered data, may be specified instead. Hence, expert/clinical/statistical knowledge may be of use in specifying alternatives to the default method(s). predictorMatrix is a matrix which tells the algorithm which variables predict missingness in which other variables. mice uses a default based on correlations between variables and the proportion of usable cases if this is not specified. Expert/clinical knowledge may be very useful in specifying the predictor matrix, so the default should be used with care. visitSequence specifies the order in which variables are imputed. It is not usually needed. form is used primarily to aid the specification of interaction terms to be used in imputation, and isn't normally needed. post is for post-imputation processing, for example to ensure that positive values are imputed. This isn't normally needed. defaultMethod changes the default imputation methods, and is not normally needed. maxit is the number of iterations for each imputation. 
mice uses an iterative algorithm. It is important that the imputations for all variables reach convergence, otherwise they will be inaccurate. Convergence can be assessed visually by inspecting the trace plots generated by plot(). Compared with other Gibbs sampling applications, far fewer iterations are needed - generally in the region of 20-30 or less as a rule of thumb. When the trace lines reach a value and fluctuate slightly around it, convergence has been achieved. The following is an example showing healthy convergence, taken from here: Here, 3 variables are being imputed with 5 imputations (coloured lines) for 20 iterations (x-axis on the plots); the y-axis on the plots shows the imputed values for each imputation. diagnostics produces useful diagnostic information by default. printFlag outputs the algorithm progress by default, which is useful because the estimated time to completion can easily be ascertained. seed is a random seed parameter which is useful for reproducibility. imputationMethod and defaultImputationMethod are for backwards compatibility only. Bodner, Todd E. (2008) “What improves with increased missing data imputations?” Structural Equation Modeling: A Multidisciplinary Journal 15: 651-675. https://dx.doi.org/10.1080/10705510802339072 Rubin, Donald B. (1987) Multiple Imputation for Nonresponse in Surveys. New York: Wiley. White, Ian R., Patrick Royston and Angela M. Wood (2011) “Multiple imputation using chained equations: Issues and guidance for practice.” Statistics in Medicine 30: 377-399. https://dx.doi.org/10.1002/sim.4067
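As a trivial illustration of the rule of thumb for m described above (the function name and the floor of 5 are my own choices, not part of mice):

```python
import math

def imputations_rule_of_thumb(missing_fractions):
    """Rule of thumb (cf. White et al., 2011): use roughly as many imputations
    as the average percentage of missing data across variables, keeping the
    classic default of 5 as a floor."""
    avg_pct = 100 * sum(missing_fractions) / len(missing_fractions)
    return max(5, math.ceil(avg_pct))

print(imputations_rule_of_thumb([0.30, 0.30, 0.30]))  # 30
print(imputations_rule_of_thumb([0.01, 0.03]))        # 5
```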
13,828
How do the number of imputations & the maximum iterations affect accuracy in multiple imputation?
Until 5 years ago, the most popular rule of thumb was that the number of imputations should be equal to the % of missing information, but it turns out it's not a linear relationship. It's quadratic. Plus the number of imputations depends on how much random noise you can tolerate in your estimates; more imputations --> less noise. In 2020, von Hippel (that's me) published a paper explaining how to calculate the number of imputations you need given your tolerance for noise. There's a blog post here that includes links to software implementations in R, Stata, and SAS: https://statisticalhorizons.com/how-many-imputations/
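For illustration, here is my reading of the quadratic rule as a small Python function; treat the exact formula as an assumption and check the paper/blog post for the authoritative version. The idea is roughly $m \approx 1 + \tfrac{1}{2}(\gamma/\mathrm{cv})^2$, where $\gamma$ is the fraction of missing information and cv is the coefficient of variation you will tolerate in your standard-error estimates.

```python
import math

def imputations_needed(fmi, cv):
    """Approximate quadratic rule (my reading of von Hippel, 2020):
    number of imputations so that the SE estimate has coefficient of
    variation at most cv, given fraction of missing information fmi."""
    return math.ceil(1 + 0.5 * (fmi / cv) ** 2)

print(imputations_needed(fmi=0.5, cv=0.05))  # 51
print(imputations_needed(fmi=0.3, cv=0.05))  # 19
```

Note how the requirement grows with the square of the missing-information fraction, which is why the old linear rule undershoots badly at high missingness.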
13,829
Qualitative variable coding in regression leads to "singularities"
The problem you are having (i.e., "singularities") can be thought of as an instance of multicollinearity. Multicollinearity is often defined as: One or more predictor variables are a linear combination of other predictor variables. This is, in fact, a rather strict definition; it is perfect multicollinearity, and you can easily have a problem with multicollinearity without any of your variables being perfect linear combinations of others. Moreover, perfect multicollinearity rarely occurs. However, you have stumbled across a case where it can occur. Let us see how we can perfectly predict medium quality from our knowledge of the other two categories (we'll do this with a regression model where medium quality is $Y$, and bad quality & high quality are $X_1$ & $X_2$, respectively): $$ Y = \beta_0 + \beta_1X_1 + \beta_2X_2 $$ Note that there is no error term, $\varepsilon$, specified, because we can predict this perfectly. To do so, we set $\beta_0 = 1$, $\beta_1 = -1$, and $\beta_2 = -1$. Now, when you have bad quality, then $X_1=1$, which cancels out $\beta_0$ ($1\; + \;-1\!\times\! 1$), and $X_2=0$ so that term is canceled out as well ($-1\times 0$). Thus, we are left with a predicted value of $0$ for $Y$ (medium quality), which is exactly correct. I will leave it to you to work out the other possibilities (it always works, in your case). So what then should you do? When representing a categorical variable, we typically use reference cell coding (often called 'dummy coding'). To do this, we pick one level of our categorical variable as the reference level; that level does not get its own dummy code, but is simply indicated by having all $0$'s in the dummy codes for all other levels. The other levels of your categorical variable are represented by dummy codes just as you have already done. (For some more information on this, you can see my answer here: Regression based for example on days of week.) 
If you are using R, you can use a factor and R will do this all for you--it will be done correctly, and it's much more convenient--nonetheless, it's worth understanding that this is what is happening 'behind the scenes'.
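You can also see the singularity directly in the rank of the design matrix. A quick numerical sketch (Python/numpy here rather than R, with made-up data mirroring the quality example):

```python
import numpy as np

# six observations: quality = bad, med, high, bad, med, high
bad  = np.array([1, 0, 0, 1, 0, 0])
med  = np.array([0, 1, 0, 0, 1, 0])
high = np.array([0, 0, 1, 0, 0, 1])
intercept = np.ones(6)

# all three dummies plus the intercept: 4 columns but only rank 3,
# because intercept = bad + med + high (a perfect linear combination)
X_full = np.column_stack([intercept, bad, med, high])
print(np.linalg.matrix_rank(X_full))   # 3 -> singular

# reference-cell coding: drop one dummy (here 'bad' is the reference level)
X_ref = np.column_stack([intercept, med, high])
print(np.linalg.matrix_rank(X_ref))    # 3 -> full column rank, no singularity
```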
13,830
Qualitative variable coding in regression leads to "singularities"
@gung has explained the theory clearly. Here's a practical example to illustrate: set.seed(1) pred1 <- factor(c("bad", "med", "high"), levels=c("bad", "med", "high")) df1 <- data.frame(y=20*abs(runif(6)), x=rnorm(6), q=sample(pred1, 6, replace=TRUE) ) l1 <- lm(y ~ x, data=df1) ### add variable q l2 <- lm(y ~ x + q, data=df1) ### look at dummy variables generated in creating model model.matrix(l2) This shows us that the reference level (all $0$s) is bad as seen here in row 4: (Intercept) x qmed qhigh 1 1 1.5952808 1 0 2 1 0.3295078 0 1 3 1 -0.8204684 0 1 4 1 0.4874291 0 0 5 1 0.7383247 1 0 6 1 0.5757814 0 0 Now if we code the dummy variables ourselves and try to fit a model using all of them: df1 <- within(df1, { qbad <- ifelse(q=="bad", 1, 0) qmed <- ifelse(q=="med", 1, 0) qhigh <- ifelse(q=="high", 1, 0) }) lm(y ~ x + qbad + qmed + qhigh, data=df1, singular.ok=FALSE) We get the expected error: singular fit encountered
13,831
Using lmer for prediction
Expressing factor relationships using R formulas follows from Wilkinson's notation, where '*' denotes crossing and '/' nesting, but there are some particularities in the way formulas for mixed-effects models, or more generally random effects, are handled. For example, two crossed random effects might be represented as (1|x1)+(1|x2). I have interpreted your description as a case of nesting, much like classes are nested in schools (nested in states, etc.), so a basic formula with lmer would look like (unless otherwise stated, a gaussian family is used by default): y ~ x + (1|A:B) + (1|A) where A and B correspond to your outer and inner factors, respectively. B is nested within A, and both are treated as random factors. In the older nlme package, this would correspond to something like lme(y ~ x, random=~ 1 | A/B). If A was to be considered as a fixed factor, the formula should read y ~ x + A + (1|A:B). But it is worth checking more precisely D. Bates' specifications for the lme4 package, e.g. in his forthcoming textbook, lme4: Mixed-effects Modeling with R, or the many handouts available on the same webpage. In particular, there is an example for such nesting relations in Fitting Linear Mixed-Effects Models, the lme4 Package in R. John Maindonald's tutorial also provides a nice overview: The Anatomy of a Mixed Model Analysis, with R’s lme4 Package. Finally, section 3 of the R vignette on lme4 implementation includes an example of the analysis of a nested structure. There is no predict() function in lme4 (this function now exists, see comment below), and you have to compute predicted individual values yourself using the estimated fixed (see ?fixef) and random (see ?ranef) effects, but see also this thread on the lack of a predict function in lme4. You can also generate a sample from the posterior distribution using the mcmcsamp() function. Sometimes, it might clash, though. See the sig-me mailing list for more up-to-date information.
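To illustrate what computing predictions yourself amounts to, here is a toy sketch with invented numbers (Python dictionaries standing in for what fixef() and ranef() return in R): the conditional prediction is just the fixed part plus the relevant group's random effect, $\hat{y} = X\hat{\beta} + Z\hat{b}$.

```python
# pretend output of fixef(): an intercept and a slope for x (invented values)
fixef = {"(Intercept)": 2.0, "x": 0.5}
# pretend output of ranef(): one random intercept per level of the grouping factor
ranef_A = {"g1": -0.3, "g2": 0.3}

def predict_by_hand(x, group):
    """Conditional prediction: fixed part plus the group's random intercept."""
    return fixef["(Intercept)"] + fixef["x"] * x + ranef_A[group]

print(predict_by_hand(1.0, "g1"))  # 2.2
print(predict_by_hand(1.0, "g2"))  # 2.8
# population-level prediction for an unseen group: drop the random effect
print(fixef["(Intercept)"] + fixef["x"] * 1.0)  # 2.5
```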
13,832
Using lmer for prediction
The ez package contains the ezPredict() function, which obtains predictions from lmer models where prediction is based on the fixed effects only. It's really just a wrapper around the approach detailed in the glmm wiki.
13,833
Using lmer for prediction
I would use the "logit.mixed" function in Zelig, which is a wrapper for lme4 and makes it very convenient to do prediction and simulation.
13,834
Using lmer for prediction
The development version of lme4 has a built-in predict function (predict.merMod). It can be found on https://github.com/lme4/lme4/. The code to install the "Nearly up-to-date development binaries from lme4 r-forge repository" can be found on above page and is: install.packages("lme4", repos=c("http://lme4.r-forge.r-project.org/repos", getOption("repos")["CRAN"]))
13,835
Using lmer for prediction
Stephen Raudenbush has a book chapter in the Handbook of Multilevel Analysis on "Many Small Groups". If you are only interested in the effects of x on y and have no interest in higher level effects, his suggestion is simply to estimate a fixed effects model (i.e. a dummy variable for all possible higher level groupings). I don't know how applicable that is towards prediction, but I would imagine some of what he writes is applicable to what you are trying to accomplish.
13,836
Post-hocs for within subjects tests?
Have a look at the multcomp-package and its vignette Simultaneous Inference in General Parametric Models. I think it should do what you want, and the vignette has very good examples and extensive references.
13,837
Post-hocs for within subjects tests?
I am currently writing a paper in which I have the pleasure of conducting both between and within subjects comparisons. After discussion with my supervisor we decided to run t-tests and use the pretty simple Holm-Bonferroni method (wikipedia) for correcting for alpha error cumulation. It controls the familywise error rate but has greater power than the ordinary Bonferroni procedure. Procedure: You run the t-tests for all comparisons you want to do. You order the p-values according to their value. You test the smallest p-value against alpha / k, the second smallest against alpha / (k - 1), and so forth, until the first test turns out non-significant in this sequence of tests. Cite Holm (1979), which can be downloaded via the link at wikipedia.
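For concreteness, the step-down procedure described above can be coded in a few lines (a sketch in Python; the function and parameter names are mine):

```python
def holm_bonferroni(pvals, alpha=0.05):
    """Holm (1979) step-down procedure. Returns a reject/retain decision
    for each p-value, in the original order."""
    k = len(pvals)
    order = sorted(range(k), key=lambda i: pvals[i])  # indices, smallest p first
    reject = [False] * k
    for step, i in enumerate(order):
        if pvals[i] <= alpha / (k - step):  # thresholds alpha/k, alpha/(k-1), ...
            reject[i] = True
        else:
            break  # first non-significant test stops the sequence
    return reject

print(holm_bonferroni([0.04, 0.01, 0.02]))        # [True, True, True]
print(holm_bonferroni([0.20, 0.01, 0.02, 0.04]))  # [False, True, False, False]
```

In the second example only the smallest p-value survives: 0.02 fails its threshold of 0.05/3, and the sequence stops there even though 0.04 would have passed a later, laxer threshold.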
13,838
Post-hocs for within subjects tests?
I recall some discussion on this in the past; I'm not aware of any implementation of Maxwell & Delaney's approach, although it shouldn't be too difficult to do. Have a look at "Repeated Measures ANOVA using R" which also shows one method of addressing the sphericity issue in Tukey's HSD. You might also find this description of Friedman's test of interest.
13,839
Post-hocs for within subjects tests?
There are TWO options for the inferential F-tests in SPSS. Multivariate does NOT assume sphericity, and so makes use of a different pairwise correlation for each pair of variables. The "tests of within subjects effects", including any post hoc tests, assume sphericity and make some corrections for using a common correlation across all tests. These procedures are a legacy of the days when computation was expensive, and are a waste of time with modern computing facilities. My recommendation is to take the omnibus MULTIVARIATE F for any repeated measures. Then follow up with post hoc pairwise t-tests, or ANOVA with only 2 levels in each repeated measure comparison if there are also between subject factors. I would make the simple Bonferroni correction of dividing the alpha level by the number of tests. Also be sure to look at the effect size [available in the option dialogue]. Large effect sizes that are 'close' to significant may be more worthy of attention [and future experiments] than small, but significant, effects. A more sophisticated approach is available in SPSS procedure MIXED, and also in less user friendly [but free] packages such as R. In summary, in SPSS, a multivariate F followed by pairwise post hocs with Bonferroni correction should be sufficient for most needs.
13,840
Post-hocs for within subjects tests?
I shall use the R function qtukey(p, nmeans, df) to make family-wise CIs. For example, qtukey(1-0.05, nmeans=4, df=16) gives the critical value $tukey_{0.05,4,16}$=4.046093. Given a between-subject design with k=4 groups and total sample size 5*k=20, i.e. (5-1)*k=16 df for $MS_{Error}$, $\begin{align} & Tuke{{y}_{k,df}}=\frac{Ma{{x}_{j=1,2,\ldots ,k}}\left\{ {{z}_{j}} \right\}-Mi{{n}_{j=1,2,\ldots ,k}}\left\{ {{z}_{j}} \right\}}{\sqrt{\chi _{df}^{2}/df}} \\ & =\frac{Rang{{e}_{j=1,2,\ldots ,k}}\left\{ \frac{{{M}_{j}}-{{\mu }_{j}}}{{{\sigma }_{M}}} \right\}}{S{{E}_{M}}/{{\sigma }_{M}}} \\ & =\frac{Rang{{e}_{j=1,2,\ldots ,k}}\left\{ {{M}_{j}}-{{\mu }_{j}} \right\}}{S{{E}_{M}}} \\ & =\frac{Ma{{x}_{1\le {{j}_{1}},{{j}_{2}}\le k}}\left\{ \left| \left( {{M}_{{{j}_{1}}}}-{{\mu }_{{{j}_{1}}}} \right)-\left( {{M}_{{{j}_{2}}}}-{{\mu }_{{{j}_{2}}}} \right) \right| \right\}}{S{{E}_{M}}} \\ & =\frac{Ma{{x}_{1\le {{j}_{1}},{{j}_{2}}\le k}}\left\{ \left| \left( {{M}_{{{j}_{1}}}}-{{M}_{{{j}_{2}}}} \right)-\left( {{\mu }_{{{j}_{1}}}}-{{\mu }_{{{j}_{2}}}} \right) \right| \right\}}{S{{E}_{M}}} \\ \end{align}$ The radius of the family-wise 1-α CIs is $S{{E}_{M}}\times tuke{{y}_{\alpha ,4,16}}=\sqrt{\frac{M{{S}_{Error}}}{5}}\times tuke{{y}_{\alpha ,4,16}}$ because-- $$ \begin{align} & \left\{ Tuke{{y}_{k,df}}\le tuke{{y}_{0.05,4,16}} \right\} \\ & =\left\{ \frac{Ma{{x}_{1\le {{j}_{1}},{{j}_{2}}\le k}}\left\{ \left| \left( {{M}_{{{j}_{1}}}}-{{M}_{{{j}_{2}}}} \right)-\left( {{\mu }_{{{j}_{1}}}}-{{\mu }_{{{j}_{2}}}} \right) \right| \right\}}{S{{E}_{M}}}\le tuke{{y}_{.05,4,16}} \right\} \\ & ={{\cap }_{1\le {{j}_{1}},{{j}_{2}}\le k}}\left\{ \left| \left( {{M}_{{{j}_{1}}}}-{{M}_{{{j}_{2}}}} \right)-\left( {{\mu }_{{{j}_{1}}}}-{{\mu }_{{{j}_{2}}}} \right) \right|\le S{{E}_{M}}\times tuke{{y}_{.05,4,16}} \right\} \\ \end{align} $$ Given a within-subject design with k=4 levels and sample size 17, i.e. 
(17-1)=16 df for $MS_{Error}$, and ${{X}_{i,j}}=\left( {{\mu }_{j}}+{{v}_{i}} \right)+{{\varepsilon }_{i,j}}={{\widetilde{X}}_{i,j}}+{{\varepsilon }_{i,j}}$, the radius of family-wise (1-α) CIs is $\sqrt{M{{S}_{Error}}/17}\times tuke{{y}_{\alpha ,4,16}}$ because-- $$\begin{align} & Tuke{{y}_{k,df}}=\frac{Ma{{x}_{j=1,2,\ldots ,k}}\left\{ {{z}_{j}} \right\}-Mi{{n}_{j=1,2,\ldots ,k}}\left\{ {{z}_{j}} \right\}}{\sqrt{\chi _{df}^{2}/df}} \\ & =\frac{Rang{{e}_{j=1,2,\ldots ,k}}\left\{ \frac{Mea{{n}_{1\le i\le n}}\left\{ {{\widetilde{X}}_{i,j}}+{{\varepsilon }_{i,j}} \right\}-Mea{{n}_{1\le i\le n}}\left\{ {{\widetilde{X}}_{i,j}} \right\}}{{{\sigma }_{Mea{{n}_{1\le i\le n}}\left\{ {{\varepsilon }_{i,j}} \right\}}}} \right\}}{{{{\hat{\sigma }}}_{Mea{{n}_{1\le i\le n}}\left\{ {{\varepsilon }_{i,j}} \right\}}}/{{\sigma }_{Mea{{n}_{1\le i\le n}}\left\{ {{\varepsilon }_{i,j}} \right\}}}} \\ & =\frac{Rang{{e}_{j=1,2,\ldots ,k}}\left\{ {{M}_{j}}-\left( {{\mu }_{j}}+Mea{{n}_{1\le i\le n}}\left\{ {{v}_{i}} \right\} \right) \right\}}{{{{\hat{\sigma }}}_{Mea{{n}_{1\le i\le n}}\left\{ {{\varepsilon }_{i,j}} \right\}}}} \\ & =\frac{Rang{{e}_{j=1,2,\ldots ,k}}\left\{ {{M}_{j}}-{{\mu }_{j}} \right\}}{\sqrt{M{{S}_{Error}}/n}} \\ & =\frac{Ma{{x}_{1\le {{j}_{1}},{{j}_{2}}\le k}}\left\{ \left| \left( {{M}_{{{j}_{1}}}}-{{M}_{{{j}_{2}}}} \right)-\left( {{\mu }_{{{j}_{1}}}}-{{\mu }_{{{j}_{2}}}} \right) \right| \right\}}{\sqrt{M{{S}_{Error}}/n}} \\ \end{align}$$
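The same critical value, and hence the CI radius, can also be computed in Python; scipy.stats.studentized_range (SciPy >= 1.7) implements the distribution behind R's qtukey. The MS_Error value below is a made-up placeholder:

```python
import numpy as np
from scipy.stats import studentized_range

# Critical value of the studentized range, matching R's qtukey(0.95, 4, 16)
q_crit = studentized_range.ppf(0.95, k=4, df=16)
print(round(q_crit, 6))  # approximately 4.046093

# Family-wise CI radius for the between-subject example:
# SE_M * q = sqrt(MS_Error / n_per_group) * q   (hypothetical MS_Error)
ms_error, n_per_group = 2.5, 5
radius = np.sqrt(ms_error / n_per_group) * q_crit
```
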
13,841
How does random forest generate the random forest
Implementations of RF differ slightly. I know that Salford Systems' proprietary implementation is supposed to be better than the vanilla one in R. A description of the algorithm is in ESL by Hastie, Tibshirani and Friedman, 2nd ed, 3rd printing. An entire chapter (the 15th) is devoted to RF, and I find it actually clearer than the original paper. The tree construction algorithm is detailed on p.588; no need for me to reproduce it here, since the book is available online.
13,842
How does random forest generate the random forest
The main idea is the bagging procedure, not making trees random. In detail, each tree is built on a sample of objects drawn with replacement from the original set; thus each tree has some objects that it hasn't seen, which is what makes the whole ensemble more heterogeneous and thus better at generalizing. Furthermore, the trees are weakened in such a way that at each split only M (or mtry) randomly selected attributes are considered; M is usually the square root of the number of attributes in the set. This ensures that the trees overfit less, since they are not pruned. You can find more details here. On the other hand, there is a variant of RF called Extremely Randomized Trees (Extra-Trees), in which the trees are made in a random way (there is no optimization of splits) -- consult, I think, this reference.
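The two ingredients described above (bootstrap sampling of objects, plus per-split attribute subsampling) can be sketched with scikit-learn on a synthetic dataset; the parameter names are scikit-learn's, not from the original RF paper:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic classification data for illustration
X, y = make_classification(n_samples=500, n_features=16, random_state=0)

rf = RandomForestClassifier(
    n_estimators=200,
    bootstrap=True,       # each tree sees a with-replacement sample (bagging)
    max_features="sqrt",  # mtry: consider sqrt(n_features) attributes per split
    oob_score=True,       # score each tree on the objects it hasn't seen
    random_state=0,
)
rf.fit(X, y)
print(f"OOB accuracy: {rf.oob_score_:.3f}")
```

The out-of-bag score is exactly the "objects the tree hasn't seen" idea turned into a free generalization estimate.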
13,843
What is predicted and controlled in reinforcement Learning?
The difference between prediction and control is to do with goals regarding the policy. The policy describes the way of acting depending on current state, and in the literature is often noted as $\pi(a|s)$, the probability of taking action $a$ when in state $s$. So, my question is for prediction, predict what? A prediction task in RL is where the policy is supplied, and the goal is to measure how well it performs. That is, to predict the expected total reward from any given state assuming the function $\pi(a|s)$ is fixed. for control, control what? A control task in RL is where the policy is not fixed, and the goal is to find the optimal policy. That is, to find the policy $\pi(a|s)$ that maximises the expected total reward from any given state. A control algorithm based on value functions (of which Monte Carlo Control is one example) usually works by also solving the prediction problem, i.e. it predicts the values of acting in different ways, and adjusts the policy to choose the best actions at each step. As a result, the output of the value-based algorithms is usually an approximately optimal policy and the expected future rewards for following that policy.
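The prediction/control distinction can be made concrete on a toy MDP. This is a sketch, not from the question: a hypothetical 2-state, 2-action problem, with iterative policy evaluation as the prediction step and greedy policy improvement as the control step:

```python
import numpy as np

# Tiny hypothetical MDP: P[s][a] = list of (prob, next_state, reward)
P = {
    0: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 1.0)]},
    1: {0: [(1.0, 0, 0.0)], 1: [(1.0, 1, 2.0)]},
}
gamma = 0.9

def evaluate(policy, tol=1e-8):
    """Prediction: compute V_pi for a fixed deterministic policy."""
    V = np.zeros(2)
    while True:
        V_new = np.array([sum(p * (r + gamma * V[s2])
                              for p, s2, r in P[s][policy[s]]) for s in (0, 1)])
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

def improve(V):
    """Control step: greedily pick the action maximising expected return."""
    return [max(P[s], key=lambda a: sum(p * (r + gamma * V[s2])
                                        for p, s2, r in P[s][a])) for s in (0, 1)]

policy = [0, 0]              # start from an arbitrary policy
while True:                  # generalised policy iteration
    V = evaluate(policy)     # prediction
    new_policy = improve(V)  # control (policy improvement)
    if new_policy == policy:
        break
    policy = new_policy
print(policy, V)  # converges to the always-take-action-1 policy
```
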
13,844
What is predicted and controlled in reinforcement Learning?
The term control comes from dynamical systems theory, specifically, optimal control. As Richard Sutton writes in the 1.7 Early History of Reinforcement Learning section of his book [1] Connections between optimal control and dynamic programming, on the one hand, and learning, on the other, were slow to be recognized. We cannot be sure about what accounted for this separation, but its main cause was likely the separation between the disciplines involved and their different goals. He even goes on to write We consider all of the work in optimal control also to be, in a sense, work in reinforcement learning. We define a reinforcement learning method as any effective way of solving reinforcement learning problems, and it is now clear that these problems are closely related to optimal control problems, particularly stochastic optimal control problems such as those formulated as MDPs. Accordingly, we must consider the solution methods of optimal control, such as dynamic programming, also to be reinforcement learning methods. Prediction is described as the computation of $v_\pi(s)$ and $q_\pi(s, a)$ for a fixed arbitrary policy $\pi$, where $v_\pi(s)$ is the value of a state $s$ under policy $\pi$, given a set of episodes obtained by following $\pi$ and passing through $s$. $q_\pi(s, a)$ is the action-value for a state-action pair $(s, a)$. It's the expected return when starting in state $s$, taking action $a$, and thereafter following policy $\pi$. Control is described as approximating optimal policies. When doing control, one maintains both an approximate policy and an approximate value function. The value function is repeatedly altered to more closely approximate the value function for the current policy, and the policy is repeatedly improved with respect to the current value function. This is the idea of generalised policy iteration (GPI). See 5.1 Monte Carlo Control in [1]. [1] Reinforcement Learning: An Introduction, by Richard S. Sutton and Andrew G. Barto
13,845
How can we judge the accuracy of Nate Silver's predictions?
Probabilistic forecasts (or, as they are also known, density forecasts) can be evaluated using scoring rules, i.e., functions that map a density forecast and an observed outcome to a so-called score, which is minimized in expectation if the density forecast is indeed the true density being forecast. Proper scoring rules are scoring rules that are minimized in expectation only by the true future density. There are quite a number of such proper scoring rules available, starting with Brier (1950, Monthly Weather Review) in the context of probabilistic weather forecasting. Czado et al. (2009, Biometrics) give a more recent overview for the discrete case. Gneiting & Katzfuss (2014, Annual Review of Statistics and its Application) give an overview of probabilistic forecasting in general; Gneiting in particular has been very active in advancing the cause of proper scoring rules. However, scoring rules are somewhat hard to interpret, and they really only help in comparing multiple probabilistic forecasts: the one with the lower score is better. Up to sampling variation, that is, so it's always better to have many forecasts to evaluate, whose scores we would average. How to include the "updating" of Silver's or others' forecasts is a good question. We can use scoring rules to compare "snapshots" of different forecasts at a single point in time, or we could even look at Silver's probabilistic forecasts over time and calculate scores at each time point. One would hope that the score gets lower and lower (i.e., the density forecasts get better and better) the closer we get to the actual outcome.
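As a concrete illustration of the Brier score for binary outcomes (the mean squared difference between the forecast probability and the 0/1 outcome, lower being better), here is a sketch with entirely made-up forecasts:

```python
import numpy as np

def brier_score(probs, outcomes):
    """Brier (1950) score: mean squared error of probability forecasts."""
    probs, outcomes = np.asarray(probs), np.asarray(outcomes)
    return np.mean((probs - outcomes) ** 2)

# Hypothetical rain forecasts (probabilities) and what actually happened (0/1)
forecaster_a = [0.9, 0.8, 0.3, 0.1]
forecaster_b = [0.6, 0.6, 0.4, 0.4]
rained =       [1,   1,   0,   0]

print(brier_score(forecaster_a, rained))  # 0.0375: sharper and well calibrated
print(brier_score(forecaster_b, rained))  # 0.16: hedged forecasts score worse
```

Note the comparison is only meaningful averaged over many forecasts, as discussed above; a single event tells us almost nothing.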
13,846
How can we judge the accuracy of Nate Silver's predictions?
In Nate Silver's book The Signal and the Noise he writes the following, which may provide some insight for your question: One of the most important tests of a forecast - I would argue that it is the single most important one - is called calibration. Out of all the times you said there was a 40% chance of rain, how often did rain actually occur? If, over the long run, it really did rain about 40% of the time, that means your forecasts were well calibrated. If it wound up raining just 20 percent of the time instead, or 60 percent of the time, they weren't. So this raises a few points. First of all, as you rightly point out, you really can't make any inference about the quality of a single forecast from the result of the event you are forecasting. The best you can do is to see how your model performs over the course of many predictions. Another thing that is important to think about is that the predictions that Nate Silver provides are not an event itself, but the probability distribution of the event. So in the case of the presidential race, he is estimating the probability distribution of Clinton, Trump, or Johnson winning the race. So in this case he is estimating a multinomial distribution. But he is actually predicting the race at a far more granular level. His predictions estimate the probability distributions of the percentage of votes each candidate will garner in each state. So if we consider 3 candidates, this might be characterized by a random vector of length 51 * 3 taking values in the interval [0, 1], subject to the constraint that the proportions sum to 1 within a state. The number 51 is because there are 50 states plus D.C. (and in fact I think it's actually a few more because some states can split their electoral college votes), and the number 3 is the number of candidates. 
Now you don't have very much data to evaluate his predictions with - he's only provided predictions for the last 3 elections that I'm aware of (were there more?). So I don't think that there is any way to fairly evaluate his model, unless you actually had the model in hand and could evaluate it using simulated data. But there are still some interesting things that you could look at. For example, I think it would be interesting to look at how accurately he predicted the state-by-state voting proportions at a particular time point, e.g. a week out from the election. If you repeat this for multiple time points, e.g. a week out, a month out, 6 months out, and a year out, then you could provide some pretty interesting exposition for his predictions. One important caveat: the results are highly correlated across states within an election, so you can't really say that you have 51 states * 3 elections' worth of independent prediction instances (i.e. if the model underestimates a candidate's performance in one state, it will tend to underestimate in other states also). But maybe I would think of it like this anyway, just so that you have enough data to do anything meaningful with.
13,847
How can we judge the accuracy of Nate Silver's predictions?
For any single prediction you can't, any more than we can tell if the claim "this coin has a 60% chance of coming up heads" is close to correct from a single toss. However, you can assess his methodology across many predictions -- for a given election he makes lots of predictions, not just of the presidential race overall but many predictions relating to the vote for the president and of many other races (house, senate, gubernatorial and so on), and he also uses broadly similar methodologies over time. There are many ways to do this assessment (some fairly sophisticated), but we can look at some relatively simple ways to get some sense of it. For example you could split the predicted win probabilities into bands (e.g. 50-55%, 55-65% and so on) and then see what proportion of the predictions in each band came up; the proportion of 50-55% predictions that worked should be somewhere between 50-55%, depending on where the average was (plus a margin for random variation*). So by that approach (or various other approaches) you can see whether the distribution of outcomes was consistent with predictions across an election, or across several elections (if I remember right, I think his predictions have been right more often than they should have been, which suggests his standard errors have on average been slightly overestimated). * we have to be careful about how to assess that, though, because the predictions are not independent.
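The banding check described above can be sketched as follows, on entirely simulated forecasts (here the outcomes are drawn to match the stated probabilities, i.e. a perfectly calibrated forecaster up to sampling noise):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical win-probability forecasts, and outcomes consistent with them
probs = rng.uniform(0.5, 1.0, size=2000)
outcomes = rng.random(2000) < probs

bands = [(0.50, 0.55), (0.55, 0.65), (0.65, 0.80), (0.80, 1.00)]
for lo, hi in bands:
    mask = (probs >= lo) & (probs < hi)
    print(f"{lo:.2f}-{hi:.2f}: predicted mean {probs[mask].mean():.3f}, "
          f"observed win rate {outcomes[mask].mean():.3f} (n={mask.sum()})")
```

For a well-calibrated forecaster the observed win rate in each band should sit near the band's mean predicted probability, within sampling variation; note the caveat in the answer that real election predictions are correlated, so the effective sample size is smaller than the raw count suggests.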
13,848
Classification with Gradient Boosting : How to keep the prediction in [0,1]
I like to think of this in analogy with the case of linear models, and their extension to GLMs (generalized linear models). In a linear model, we fit a linear function to predict our response $$ \hat y = \beta_0 + \beta_1 x_1 + \cdots \beta_n x_n $$ To generalize to other situations, we introduce a link function, which transforms the linear part of the model onto the scale of the response (technically this is an inverse link, but I think it's easier to think of it this way, transforming the linear predictor into a response, than transforming the response into a linear predictor). For example, the logistic model uses the sigmoid (or logit) function $$ \hat y = \frac{1}{1 + \exp(-(\beta_0 + \beta_1 x_1 + \cdots \beta_n x_n))} $$ and poisson regression uses an exponential function $$ \hat y = \exp(\beta_0 + \beta_1 x_1 + \cdots \beta_n x_n) $$ To construct an analogy with gradient boosting, we replace the linear part of these models with the sum of the boosted trees. So, for example, the gaussian case (analogous with linear regression) becomes the well known $$ \hat y = \sum_i h_i $$ where $h_i$ is our sequence of weak learners. The binomial case is analogous to logistic regression (as you noted in your answer) $$ \hat y = \frac{1}{1 + \exp\left(-\sum_i h_i\right)} $$ and poisson boosting is analogous to poisson regression $$ \hat y = \exp\left(\sum_i h_i\right) $$ The question remains, how does one fit these boosted models when the link function is involved? For the gaussian case, where the link is the identity function, the often heard mantra of fitting weak learners to the residuals of the current working model works out, but this doesn't really generalize to the more complicated models. The trick is to write the loss function being minimized as a function of the linear part of the model (i.e. the $\sum_i \beta_i x_i$ part of the GLM formulation). 
For example, the binomial loss is usually encountered as $$ \sum_i y_i \log(p_i) + (1 - y_i)\log(1 - p_i) $$ Here, the loss is a function of $p_i$, the predicted values on the same scale as the response, and $p_i$ is a non-linear transformation of the linear predictor $L_i$. Instead, we can re-express this as a function of $L_i$, (in this case also known as the log odds) $$ \sum_i y_i L_i - \log(1 + \exp(L_i)) $$ Then we can take the gradient of this with respect to $L$, and boost to directly minimize this quantity. Only at the very end, when we want to produce predictions for the user, do we apply the link function to the final sequence of weak learners to put the predictions on the same scale as the response. While fitting the model, we internally work on the linear scale the entire time.
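The scheme just described can be sketched in a few lines of numpy (hypothetical 1-D data, and a hand-rolled depth-1 stump standing in for the weak learners; this is an illustration of the idea, not any particular library's implementation):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fit_stump(x, r):
    """Least-squares depth-1 'tree': one split threshold, two leaf values."""
    best = None
    for t in np.unique(x)[:-1]:   # last value would leave an empty right leaf
        lv, rv = r[x <= t].mean(), r[x > t].mean()
        sse = ((r - np.where(x <= t, lv, rv)) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, lv, rv)
    _, t, lv, rv = best
    return lambda xs: np.where(xs <= t, lv, rv)

rng = np.random.default_rng(1)
x = rng.uniform(-3, 3, 200)                                 # hypothetical feature
y = (rng.uniform(size=200) < sigmoid(2 * x)).astype(float)  # binary response

L = np.zeros_like(x)   # the boosted model lives on the linear (log-odds) scale
nu = 0.3               # learning rate
for _ in range(50):
    grad = y - sigmoid(L)            # gradient of the log-likelihood w.r.t. L
    L += nu * fit_stump(x, grad)(x)  # fit a weak learner to it and step

p_hat = sigmoid(L)     # apply the (inverse) link only for final predictions
```

Note that the fitting loop never touches the response scale: the link is applied once, at the end, exactly as described above.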
13,849
Classification with Gradient Boosting : How to keep the prediction in [0,1]
After some research, it seems that my intuition and Alex R.'s comment are right. In order to build a continuous model with predictions in $[0,1]$, one can put the model $H$ into a logistic function (Wikipedia), such that for $H \in \mathbb{R}$, we have $$\frac{1}{1 + e^{-H}} \in [0,1]$$ The gradient boosting steps then take the derivative with respect to $H$ and update the model, as if the logistic function were part of the cost function, and it works. This has been suggested in the paper Additive logistic regression: a statistical view of boosting, by Friedman, Hastie and Tibshirani, to build LogitBoost (Wikipedia), an adaptation of AdaBoost (Wikipedia) to the logistic loss. In very basic terms, if it is possible to go from linear regression to logistic regression by the addition of a sigmoid, then it also works to convert regression boosting to classification boosting.
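A tiny numeric check of the two claims (plain Python; the function names are just for illustration):

```python
import math

def sigmoid(h):
    return 1.0 / (1.0 + math.exp(-h))

# Any real-valued boosted score H lands strictly inside (0, 1):
for H in (-30.0, -1.0, 0.0, 1.0, 30.0):
    assert 0.0 < sigmoid(H) < 1.0

# Treating the sigmoid as part of the cost, the chain rule collapses the
# gradient of the logistic loss with respect to H to sigmoid(H) - y,
# which is what each boosting step differentiates against:
def dloss_dH(H, y):
    return sigmoid(H) - y
```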
13,850
Item Response Theory vs Confirmatory Factor Analysis
@Philchalmers answer is on point, and if you want a reference from one of the leaders in the field, Muthen (creator of Mplus), here you go: (Edited to include direct quote) An MPlus user asks: I am trying to describe and illustrate current similarities and differences between binary CFA and IRT for my thesis. The default estimation method in Mplus for categorical CFA is WLSMV. To run an IRT model, the example in your manual suggests to use MLR as the estimation method. When I use MLR, is the data input still the tetrachoric correlation matrix or is the original response data matrix used? Bengt Muthen responds: I don't think there is a difference between CFA of categorical variables and IRT. It is sometimes claimed but I don't agree. Which estimator is typically used may differ, but that's not essential. MLR uses the raw data, not a sample tetrachoric correlation matrix. ... The ML(R) approach is the same as the "marginal ML (MML)" approach described in e.g. Bock's work. So using the raw data and integrating over the factors using numerical integration. MML being contrasted with "conditional ML" used e.g. with Rasch approaches. Assuming normal factors, probit (normal ogive) item-factor relations, and conditional independence, the assumptions are the same for ML and for WLSMV, where the latter uses tetrachorics. This is because those assumptions correspond to assuming multivariate normal underlying continuous latent response variables behind the categorical outcomes. So WLSMV only uses 1st- and 2nd-order information, whereas ML goes all the way up to the highest order. The loss of info appears small, however. ML doesn't fit the model to these sample tetrachorics, so perhaps one can say that WLSMV marginalizes in a different way. It's a matter of estimator differences rather than model differences. We have an IRT note on our web site: http://www.statmodel.com/download/MplusIRT2.pdf but again, the ML(R) approach is nothing different from what's used in IRT MML. 
Source: http://www.statmodel.com/discussion/messages/9/10401.html?1347474605
13,851
Item Response Theory vs Confirmatory Factor Analysis
In some ways you are right: CFA and IRT are cut from the same cloth. But in many ways they are quite different as well. CFA, or more appropriately item CFA, is an adaptation of the structural equation/covariance modeling framework to account for a specific type of covariation between categorical items. IRT models the relationships among categorical variables more directly, without restricting itself to first- and second-order information in the variables (it's full information, so its requirements generally aren't as strict). Item CFA has several benefits in that it falls within the SEM framework, and therefore has very wide application to multivariate systems of relationships to other variables. IRT, on the other hand, primarily focuses on the test itself, though covariates can also be included in the test directly (e.g., see topics on explanatory IRT). I've also found that item modeling relationships are far more general in the IRT framework, in that non-monotonic, non-parametric, or just plain customized item response models are easier to cope with, because one doesn't have to worry about the sufficiency of using the polychoric correlation matrix. Both frameworks have their pros and cons, but in general CFA is more flexible when the level of modeling abstraction/inference is focused on the relationships within a system of variables, while IRT is generally preferred if the test itself (and the items therein) is the focus of interest.
13,852
Item Response Theory vs Confirmatory Factor Analysis
I believe Yves Rosseel discusses it briefly in slides 91-93 of his 2014 workshop: http://www.personality-project.org/r/tutorials/summerschool.14/rosseel_sem_cat.pdf Taken from Rosseel (2014, link above):
Full information approach: marginal maximum likelihood; origins: IRT models (e.g. Bock & Lieberman, 1970) and GLMMs
... the connection with IRT
• the theoretical relationship between SEM and IRT has been well documented:
Takane, Y., & De Leeuw, J. (1987). On the relationship between item response theory and factor analysis of discretized variables. Psychometrika, 52, 393-408.
Kamata, A., & Bauer, D. J. (2008). A note on the relation between factor analytic and item response theory models. Structural Equation Modeling, 15, 136-153.
Joreskog, K. G., & Moustaki, I. (2001). Factor analysis of ordinal variables: A comparison of three approaches. Multivariate Behavioral Research, 36, 347-387.
when are they equivalent?
• probit (normal-ogive) versus logit: both metrics are used in practice
• a single-factor CFA on binary items is equivalent to a 2-parameter IRT model (Birnbaum, 1968): In CFA: ... In IRT: ... (see slide)
• a single-factor CFA on polychotomous (ordinal) items is equivalent to the graded response model (Samejima, 1969)
• there is no CFA equivalent for the 3-parameter model (with a guessing parameter)
• the Rasch model is equivalent to a single-factor CFA on binary items, but where all factor loadings are constrained to be equal (and the probit metric is converted to a logit metric)
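To make the binary-item equivalence concrete, here is a sketch of the parameter mapping (following the Takane & De Leeuw / Kamata & Bauer relations cited in the slides; the function name, and the assumption of a standardized/delta parameterization, are mine, so treat this as an illustration rather than software-grade code):

```python
import math

def cfa_to_irt(lam, tau, logistic=False):
    """Map a standardized single-factor CFA loading (lam) and threshold
    (tau) for a binary item to normal-ogive 2PL IRT parameters.
    Sketch only: assumes the delta/standardized parameterization."""
    a = lam / math.sqrt(1.0 - lam ** 2)   # discrimination
    b = tau / lam                         # difficulty
    if logistic:
        a *= 1.7   # usual constant for moving the probit to a logit metric
    return a, b

a, b = cfa_to_irt(lam=0.7, tau=0.35)   # hypothetical item parameters
```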
13,853
Realistically, does the i.i.d. assumption hold for the vast majority of supervised learning tasks?
The operational meaning of the IID condition is given by the celebrated "representation theorem" of Bruno de Finetti (which, in my humble opinion, is one of the greatest innovations of probability theory ever discovered). According to this brilliant theorem, if we have a sequence $\mathbf{X}=(X_1,X_2,X_3,...)$ with empirical distribution $F_\mathbf{x}$, if the values in the sequence are exchangeable then we have: $$X_1,X_2,X_3, ... | F_\mathbf{x} \sim \text{IID } F_\mathbf{x}.$$ This means that the condition of exchangeability of an infinite sequence of values is the operational condition required for the values to be independent and identically distributed (conditional on some underlying distribution function). The theorem can be applied in both Bayesian and classical statistics (see O'Neill 2009 for further discussion), and in the latter case, the empirical distribution is treated as an "unknown constant" and so we usually drop the conditioning notation. Among other things, this theorem clarifies the requirement for "repeated trials" in the frequentist definition of probability. As with many other probabilistic results, the "representation theorem" actually refers to a class of theorems that apply in various different cases. You can find a good summary of the various representation theorems in Kingman (1978) and Ressel (1985). The original version, due to de Finetti, established this correspondence only for binary sequences of values. This was later extended to the more general version that is the most commonly used (and corresponds to the version shown above), by Hewitt and Savage (1955). This latter representation theorem is sometimes called the de Finetti-Hewitt-Savage theorem, since it is their extension that gives the full power of the theorem. 
There is another useful extension by Diaconis and Freedman (1980) that establishes a representation theorem for cases of finite exchangeability --- roughly speaking, in this case the values are "almost IID" in the sense that there is a bounded difference between the actual probabilities and an IID approximation. As the other answers on this thread point out, the IID condition has various advantages in terms of mathematical convenience and simplicity. While I do not see that as a justification of realism, it is certainly an ancillary benefit of this model structure, and it speaks to the importance of the representation theorems. These theorems give an operational grounding for the IID model, and show that it is sufficient to assume exchangeability of an infinite sequence to obtain this model. Thus, in practice, if you want to know if a sequence of values is IID, all you need to do is ask yourself, "If I took any finite set of values from this sequence, would their probability measure change if I were to change the order of those values?" If the answer is no, then you have an exchangeable sequence, and hence, the IID condition is met.
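A quick simulation makes the distinction concrete (a uniform-Bernoulli mixture: each pair of draws shares one latent bias, so the pair is exchangeable -- conditionally IID given the bias -- yet marginally dependent):

```python
import numpy as np

rng = np.random.default_rng(42)
n = 200_000

# One latent bias p per sequence, shared by both draws.
p = rng.uniform(size=n)
x1 = (rng.uniform(size=n) < p).astype(int)
x2 = (rng.uniform(size=n) < p).astype(int)

# Marginal dependence: Cov(X1, X2) = Var(p) = 1/12, not 0.
cov = np.cov(x1, x2)[0, 1]

# Exchangeability: reordering leaves joint probabilities unchanged, so
# P(X1=1, X2=0) and P(X1=0, X2=1) agree (both equal E[p(1-p)] = 1/6).
p10 = np.mean((x1 == 1) & (x2 == 0))
p01 = np.mean((x1 == 0) & (x2 == 1))
```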
13,854
Realistically, does the i.i.d. assumption hold for the vast majority of supervised learning tasks?
Yes, samples in the dataset may not be completely iid, but the assumption is present to ease the modelling. To maximize the data likelihood (in almost all models this is explicitly or implicitly part of the optimization), i.e. $P(\mathcal{D}|\theta)$, without the iid assumption we'd have to model the dependence between the data samples, i.e. the joint distribution, and we wouldn't be able to simply write the following and maximize it:$$P(\mathcal{D}|\theta)=\prod_{i=1}^nP(X_i|\theta)$$ Typically, with lots of samples (random variables), the slight dependencies between small sets of samples will be negligible, and you end up with similar performance (compared to modelling the dependence correctly). For example, in Naive Bayes, it is not necessarily the samples but the features/words that are surely dependent: they're part of the same sentence/paragraph, written by the same person, etc. However, we model them as if they're independent and end up with pretty good models. Shuffling is another consideration. Some algorithms are not affected by shuffling, but algorithms using gradient descent probably are, specifically neural networks, because we don't train them indefinitely. For example, if you feed the network all the $1$'s first, then the $2$'s, etc., you'll go all the way to wherever the $1$'s lead you, then try to turn back toward where the $2$'s lead you, then the $3$'s, etc. It might end up on plateaus and find it hard to go back in other directions. Shuffling enables you to go a little bit in every possible direction, without going deeper and deeper in some dedicated direction.
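A tiny illustration of what the factorization buys you (hypothetical numbers; the grid search stands in for a proper optimizer):

```python
import math

def normal_logpdf(x, mu, sigma=1.0):
    return -0.5 * math.log(2 * math.pi * sigma ** 2) \
           - (x - mu) ** 2 / (2 * sigma ** 2)

data = [1.2, 0.7, 1.9, 1.4]   # hypothetical iid sample

def loglik(mu):
    # Under iid, the joint log-likelihood is just a sum of per-sample
    # terms -- no between-sample dependence structure to model.
    return sum(normal_logpdf(x, mu) for x in data)

# The factorized likelihood is maximized at the sample mean (1.3 here).
grid = [m / 100 for m in range(0, 301)]
mu_hat = max(grid, key=loglik)
```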
Realistically, does the i.i.d. assumption hold for the vast majority of supervised learning tasks?
Yes, samples in the dataset may not be completely i.i.d., but the assumption is present to ease the modelling. To maximize the data likelihood (in almost all models this is explicitly or implicitly part of the optimization), i.e. $P(\mathcal{D}|\theta)$, without the i.i.d. assumption we'd have to model the dependence between the data samples, i.e. the joint distribution, and we wouldn't be able to simply write the following and maximize it: $$P(\mathcal{D}|\theta)=\prod_{i=1}^nP(X_i|\theta)$$ Typically, with lots of samples (random variables), the slight dependencies between small sets of samples will be negligible, and you end up with similar performance (assuming the dependence is modelled correctly). For example, in Naive Bayes, not necessarily the samples but the features/words are surely dependent: they're part of the same sentence/paragraph, written by the same person, etc. However, we model them as if they're independent and end up with pretty good models. Shuffling is another consideration. Some algorithms are not affected by shuffling, but algorithms using gradient descent probably are, particularly neural networks, because we don't train them indefinitely. For example, if you feed the network all the $1$'s first, then the $2$'s, etc., you'll go all the way to the place where the $1$'s lead you, then try to turn back toward the direction where the $2$'s lead you, then the $3$'s, etc. You might end up on plateaus and find it hard to go back in other directions. Shuffling lets you go a little bit in every possible direction, without going deeper and deeper in some dedicated direction.
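To make the factorization above concrete, here is a small sketch (my own illustration, not part of the original answer): under the i.i.d. assumption the joint log-likelihood is a plain sum of per-sample terms, and maximizing it recovers the Gaussian MLE, i.e. the sample mean. The grid search is purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(loc=3.0, scale=1.0, size=1000)

def log_lik(mu, x, sigma=1.0):
    # Under the i.i.d. assumption the joint log-likelihood is just a sum
    # of per-sample terms -- no dependence between samples to model.
    return np.sum(-0.5 * np.log(2 * np.pi * sigma**2)
                  - (x - mu) ** 2 / (2 * sigma**2))

# Maximizing the summed log-likelihood over a grid recovers the MLE,
# which for a Gaussian with known sigma is the sample mean.
grid = np.linspace(2.0, 4.0, 2001)
mu_hat = grid[np.argmax([log_lik(m, data) for m in grid])]
print(mu_hat, data.mean())  # the two agree to grid resolution
```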
13,855
Realistically, does the i.i.d. assumption hold for the vast majority of supervised learning tasks?
For me, the notion of what i.i.d. really is and why it is, in many cases, a necessary assumption makes more sense from the Bayesian perspective. Here, instead of data being thought of as i.i.d. in an absolute sense, they are thought of as conditionally i.i.d. given model parameters. For instance, consider a normal model from the Bayesian perspective. We specify how we think data were sampled given the parameters: $X_i|\mu, \sigma^2 \stackrel{iid}{\sim} N(\mu, \sigma^2)$ for $i \in \{1, \ldots, n\}$, and express prior belief on those parameters: $\mu \sim P(\mu)$; $\sigma^2 \sim P(\sigma^2)$ (the exact prior used is unimportant). Conditional independence has to do with the fact that the likelihood factorizes: $P(X_1, \ldots, X_n|\mu, \sigma^2) = P(X_1|\mu, \sigma^2)\ldots P(X_n|\mu, \sigma^2)$. But this is not the same thing as saying that the marginal distribution on the data implied by our model factorizes: $P(X_1, \ldots, X_n) \neq P(X_1)\ldots P(X_n)$. Indeed, in our specific case of the normal distribution, getting the marginal distribution on the data by integrating out the parameters yields a joint distribution which is not independent in general, the form of which will depend on which priors you specified. That is to say: two observations $X_i$ and $X_j$ are not independent; they are only conditionally independent given the model parameters (in math notation, $X_i \perp \!\!\! \perp X_j | \mu, \sigma^2$ but $X_i \not\perp \!\!\! \perp X_j$). A useful way to think about what the independence of two random variables means is that they do not provide any information about each other. It would be completely absurd to say that two data points don't provide any information about each other: of course the data are related in some way. But by making data conditionally independent given some parameters, we are saying that our model encodes the whole of the relationship between the data: that there's "nothing missing" from our model. Effectively, an i.i.d. assumption is an assumption that our model is correct: if we are missing something from our model, data will contain information about one another beyond what is encoded in our model. If we know what that is, we should put it into our model and then make an i.i.d. assumption. If we don't know what it is, we are out of luck. But that we have misspecified the model is a constant and unavoidable risk. And finally, a short note: at first glance, this framework I've described wouldn't seem to fit models such as spatiotemporal models where we have explicit dependence between data hard coded into the model. However, in all cases like this that I am aware of, the model may be reparameterized as one with i.i.d. data and additional (possibly correlated) latent variables.
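A quick simulation (my own illustration, not part of the original answer) of this distinction: data drawn i.i.d. conditional on $\mu$ are still marginally dependent once $\mu$ is integrated out, since marginally $\text{Cov}(X_1, X_2) = \text{Var}(\mu)$.

```python
import numpy as np

rng = np.random.default_rng(1)
n_draws = 200_000

# Generative model: mu ~ N(0, 1), then X1, X2 i.i.d. N(mu, 1) given mu.
mu = rng.normal(0.0, 1.0, n_draws)
x1 = rng.normal(mu, 1.0)
x2 = rng.normal(mu, 1.0)

# Conditionally on mu the samples are independent, but marginally
# (mu integrated out) Cov(X1, X2) = Var(mu) = 1, so they are dependent:
# each observation carries information about the other via mu.
print(np.cov(x1, x2)[0, 1])  # close to 1, not 0
```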
13,856
Likelihood-free inference - what does it mean?
There are many examples of methods not based on likelihoods in statistics (I don't know about machine learning). Some examples:

1. Fisher's pure significance tests, based only on a sharply defined null hypothesis (such as no difference between milk first and milk last in the Lady Tasting Tea experiment). This assumption leads to a null hypothesis distribution, and then a p-value. No likelihood is involved. This minimal inferential machinery cannot in itself give a basis for power analysis (no formally defined alternative) or confidence intervals (no formally defined parameter).
2. Related to 1. are randomization tests (Difference between Randomization test and Permutation test), which in their most basic form are pure significance tests.
3. Bootstrapping is done without the need for a likelihood function, though there are connections to likelihood ideas, for instance empirical likelihood.
4. Rank-based methods don't usually use likelihood.
5. Much of robust statistics.
6. Confidence intervals for the median (or other quantiles) can be based on order statistics; no likelihood is involved in the calculations (Confidence interval for the median, Best estimator for the variance of the empirical median).
7. V. Vapnik had the idea of transductive learning, which seems to be related to https://en.wikipedia.org/wiki/Epilogism as discussed in The Black Swan (Taleb and the Black Swan).
8. In the book Data Analysis and Approximate Models, Laurie Davies builds a systematic theory of statistical models as approximations: confidence intervals are replaced by approximation intervals, and there are no parametric families of distributions, no $\text{N}(\mu, \sigma^2)$, only $\text{N}(9.37, 2.12^2)$ and so on. And no likelihoods.

Once you have a likelihood function, there is an immense machinery to build on. Bayesians cannot do without it, and most others use likelihood most of the time. But it is pointed out in a comment that even Bayesians try to do without, see Approximate_Bayesian_computation. There is even a new text on that topic. But where do likelihoods come from? To get a likelihood function in the usual way, we need a lot of assumptions which can be difficult to justify. It is interesting to ask if we can construct likelihood functions, in some way, from some of these likelihood-free methods. For instance, for point 6 above, can we construct a likelihood function for the median from (a family of) confidence intervals calculated from order statistics? I should ask that as a separate question... Your last question about GANs I must leave for others.
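As a sketch of point 6 — a confidence interval for the median that involves no likelihood at all — the following pure-Python function (my own illustration, not from the original answer) picks symmetric order statistics using only Binomial$(n, 1/2)$ probabilities.

```python
import math
import random

def median_ci(sample, conf=0.95):
    # Distribution-free CI for the median via order statistics: the number
    # of observations below the true median is Binomial(n, 1/2), so
    # P(X_(j) < median < X_(n+1-j)) = sum_{i=j}^{n-j} C(n, i) / 2^n.
    # No likelihood function (and no distributional assumption) is used.
    x = sorted(sample)
    n = len(x)
    pmf = [math.comb(n, i) / 2 ** n for i in range(n + 1)]
    j = 1
    # shrink the interval symmetrically while coverage stays >= conf
    while j + 1 <= n - j and sum(pmf[j + 1 : n - j]) >= conf:
        j += 1
    coverage = sum(pmf[j : n - j + 1])
    return x[j - 1], x[n - j], coverage  # (X_(j), X_(n+1-j)), 1-based

random.seed(0)
sample = [random.expovariate(1.0) for _ in range(25)]
lo, hi, cov = median_ci(sample)
print((lo, hi), cov)  # guaranteed coverage of at least 0.95
```

For $n = 25$ this reproduces the classic result that $(X_{(8)}, X_{(18)})$ is an exact $\ge 95\%$ interval for the median.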
13,857
Likelihood-free inference - what does it mean?
Specifically, [the recent] likelihood-free methods are a rewording of the ABC algorithms, where ABC stands for approximate Bayesian computation. This intends to cover inference methods that do not require the use of a closed-form likelihood function, but still intend to study a specific statistical model. They are free from the computational difficulty attached with the likelihood but not from the model that produces this likelihood. See for instance Grelaud, A; Marin, J-M; Robert, C; Rodolphe, F; Tally, F (2009). "Likelihood-free methods for model choice in Gibbs random fields". Bayesian Analysis. 3: 427–442. Ratmann, O; Andrieu, C; Wiuf, C; Richardson, S (2009). "Model criticism based on likelihood-free inference, with an application to protein network evolution". Proceedings of the National Academy of Sciences of the United States of America. 106: 10576–10581. Bazin, E., Dawson, K. J., & Beaumont, M. A. (2010). Likelihood-free inference of population structure and local adaptation in a Bayesian hierarchical model. Genetics, 185(2), 587-602. Didelot, X; Everitt, RG; Johansen, AM; Lawson, DJ (2011). "Likelihood-free estimation of model evidence". Bayesian Analysis. 6: 49–76. Gutmann, M. and Corander, J. (2016) Bayesian optimization for likelihood-free inference of simulator-based statistical models Journal of Machine Learning Research.
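To illustrate the ABC idea described above, here is a minimal rejection-sampling sketch (my own, with an arbitrary toy model, summary statistic, and tolerance): the likelihood is never evaluated, only simulated from.

```python
import numpy as np

rng = np.random.default_rng(2)
observed = rng.normal(2.0, 1.0, 50)  # "data" with unknown mean
s_obs = observed.mean()              # summary statistic

# ABC rejection sampling: draw theta from the prior, simulate data from
# the model, and keep theta whenever the simulated summary falls within
# eps of the observed one. No closed-form likelihood is ever computed.
accepted = []
for _ in range(100_000):
    theta = rng.normal(0.0, 5.0)           # prior draw
    sim = rng.normal(theta, 1.0, 50)       # forward simulation
    if abs(sim.mean() - s_obs) < 0.05:     # eps tolerance
        accepted.append(theta)

print(np.mean(accepted), len(accepted))  # close to the observed summary
```

Shrinking eps (at the cost of acceptance rate) drives the accepted sample toward the exact posterior given the summary statistic.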
13,858
Likelihood-free inference - what does it mean?
On the machine learning side: in machine learning, you usually try to maximize $p(y|x)$, where $x$ is the input and $y$ is the target (for example, $x$ could be some random noise, and $y$ would be an image). Now, how do we optimize this? A common way to do it is to assume that $p(y|x) = N(y|\mu(x), \sigma)$. If we assume this, it leads to the mean squared error. Note that we assumed a form for $p(y|x)$. If, however, we don't assume any particular distribution, it is called likelihood-free learning. Why do GANs fall under this? Well, the loss function is a neural network, and this neural network is not fixed but learned jointly. Therefore, we don't assume any form anymore (except that $p(y|x)$ falls in the family of distributions that can be represented by the discriminator, but for theory's sake we say it is a universal function approximator anyway).
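A small numerical check (my own illustration) of the claim that the Gaussian assumption leads to the mean squared error: for fixed $\sigma$, the negative log-likelihood equals $\text{MSE}/(2\sigma^2)$ plus an additive constant, so the two objectives share their minimizers.

```python
import numpy as np

rng = np.random.default_rng(3)
y = rng.normal(0.0, 1.0, 100)      # targets
y_hat = rng.normal(0.0, 1.0, 100)  # stand-in model predictions mu(x)

sigma = 1.0
# Negative log-likelihood under p(y|x) = N(y | mu(x), sigma^2):
nll = np.sum(0.5 * np.log(2 * np.pi * sigma**2)
             + (y - y_hat) ** 2 / (2 * sigma**2))
mse = np.sum((y - y_hat) ** 2)

# NLL = MSE / (2 sigma^2) + constant, so for fixed sigma minimizing
# the NLL and minimizing the MSE are the same optimization problem.
const = 0.5 * len(y) * np.log(2 * np.pi * sigma**2)
print(np.isclose(nll, mse / (2 * sigma**2) + const))  # True
```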
13,859
Likelihood-free inference - what does it mean?
To add to the litany of answers, asymptotic statistics are in fact free of likelihoods. A "likelihood" here refers to the probability model for the data. I may not care about that. But I may find some simple estimator, like the mean, that is an adequate summary of the data, and I want to perform inference about the mean of the distribution (assuming it exists, which is often a reasonable assumption). By the central limit theorem, the mean has an approximately normal distribution for large $N$ when the variance also exists. I can create consistent tests (power goes to 1 as $N$ goes to infinity when the null is false) that are of the correct size. While the probability model I use for the sampling distribution of the mean is false in finite samples, I can still obtain valid inference and unbiased estimation to augment my "useful summary of the data" (the mean). It should be noted that tests based on the 95% CI for the median (i.e. option 6 in @kjetilbhalvorsen's answer) also rely on the central limit theorem to show that they are consistent. So it is not crazy to consider the simple t-test a "non-parametric" or "non-likelihood based" test.
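As a sketch of this argument (my own simulation, with an arbitrary choice of a skewed distribution): CLT-based confidence intervals for the mean attain roughly nominal coverage in large samples even though the normal "model" for the data is plainly false.

```python
import numpy as np

rng = np.random.default_rng(4)
n, reps = 200, 5000
true_mean = 1.0  # mean of Exp(1)

covered = 0
for _ in range(reps):
    x = rng.exponential(1.0, n)  # heavily skewed, clearly non-normal
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = x.mean() - 1.96 * se, x.mean() + 1.96 * se
    covered += lo <= true_mean <= hi

# The normal sampling model is wrong in finite samples, yet the
# CLT-based interval's coverage is close to the nominal 95%.
print(covered / reps)
```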
13,860
Can an instrumental variable equation be written as a directed acyclic graph (DAG)?
Yes. For example, in the DAG below, the instrumental variable $Z$ causes $X$, while the effect of $X$ on $O$ is confounded by the unmeasured variable $U$. The instrumental variable model for this DAG would estimate the causal effect of $X$ on $O$ using $E(O|\widehat{X})$, where $\widehat{X} = E(X|Z)$. This estimate is an unbiased causal estimate if:

1. $Z$ is associated with $X$. Edit: And (as in the above DAG) this association itself must be unconfounded (see Imbens).
2. $Z$ causally affects $O$ only through $X$.
3. There are no prior causes of both $O$ and $Z$.
4. The effect of $X$ on $O$ is homogeneous. This assumption/requirement has two forms, weak and strong. Weak homogeneity of the effect of $X$ on $O$: the effect of $X$ on $O$ does not vary by the levels of $Z$ (i.e. $Z$ cannot modify the effect of $X$ on $O$). Strong homogeneity of the effect of $X$ on $O$: the effect of $X$ on $O$ is constant across all individuals (or whatever your unit of analysis is).

The first three assumptions are represented in the DAG. However, the last assumption is not represented in the DAG. Hernán, M. A. and Robins, J. M. (2020). Causal Inference. Chapter 16: Instrumental variable estimation. Chapman & Hall/CRC.
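A simulation sketch of the estimator described above (my own illustration, with arbitrary coefficients): OLS of $O$ on $X$ is biased by the unmeasured $U$, while the IV (Wald) ratio $\text{Cov}(O,Z)/\text{Cov}(X,Z)$ recovers the causal effect.

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200_000
beta = 2.0                    # true causal effect of X on O

u = rng.normal(size=n)        # unmeasured confounder U
z = rng.normal(size=n)        # instrument Z (independent of U)
x = 0.8 * z + u + rng.normal(size=n)
o = beta * x + 3.0 * u + rng.normal(size=n)

# Naive OLS slope of O on X is confounded by U:
ols = np.cov(x, o)[0, 1] / np.var(x)
# IV (Wald) estimator: Cov(O, Z) / Cov(X, Z). This is equivalent to
# regressing O on X_hat = E(X|Z); U drops out because Z _|_ U.
iv = np.cov(o, z)[0, 1] / np.cov(x, z)[0, 1]
print(ols, iv)  # ols is biased upward; iv is close to 2.0
```

(The slight ddof mismatch between `np.cov` and `np.var` is negligible at this sample size.)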
13,861
Can an instrumental variable equation be written as a directed acyclic graph (DAG)?
Yes, they surely can. As a matter of fact, the SCM/DAG literature has been working on generalized notions of instrumental variables; you might want to check Brito and Pearl, or Chen, Kumor and Bareinboim. The basic IV DAG is usually represented as: Where $U$ is unobserved and $Z$ is an instrument for the effect of $X$ on $Y$. Although this is the graph you usually see, there are several different structures that would render $Z$ an instrument. For the basic case, to check whether $Z$ is an instrument for the causal effect of $X$ on $Y$ conditional on a set of covariates $S$, you have to check two conditions: (i) $Z$ is connected to $X$ in the original DAG; (ii) $S$ d-separates $Y$ from $Z$ in the DAG where the arrow $X\rightarrow Y$ is removed. The first condition requires $Z$ to be associated with $X$ (the relevance condition; otherwise the denominator of the IV estimand is zero). The second condition requires $Z$ not to be connected to $Y$ except through its effect on $X$ (that is, we cannot have violations of the exclusion and independence restrictions after conditioning on $S$). For example, consider the graph below, with $W$ and $U$ unobserved. Here, $Z$ is, conditional on $L$, an instrument for the causal effect of $X$ on $Y$. We can create more complicated cases where it might not be immediately obvious whether something qualifies as an instrument or not. One final thing you should have in mind is that identification using instrumental variable methods needs parametric assumptions. That is, finding an instrument is not enough for identification of the effect: you need to impose parametric assumptions, such as linearity or monotonicity and so on.
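The two graphical conditions can be checked mechanically. Below is a small pure-Python d-separation checker (my own sketch, using the standard moralization criterion, not code from the answer) applied to the basic IV DAG: with the arrow $X \rightarrow Y$ removed, $Z$ is d-separated from $Y$; with the arrow kept, it is not.

```python
from collections import deque

def d_separated(parents, xs, ys, zs):
    # parents: dict node -> list of parent nodes (defines the DAG).
    # Moralization criterion: (1) restrict to ancestors of xs|ys|zs,
    # (2) marry co-parents and drop edge directions, (3) delete zs;
    # xs and ys are d-separated given zs iff no undirected path remains.
    def ancestors(n, seen):
        for p in parents.get(n, []):
            if p not in seen:
                seen.add(p)
                ancestors(p, seen)
        return seen
    keep = set(xs) | set(ys) | set(zs)
    for n in list(keep):
        keep = ancestors(n, keep)
    adj = {n: set() for n in keep}
    for n in keep:
        ps = [p for p in parents.get(n, []) if p in keep]
        for p in ps:                      # parent-child edges, undirected
            adj[n].add(p); adj[p].add(n)
        for i in range(len(ps)):          # marry co-parents
            for j in range(i + 1, len(ps)):
                adj[ps[i]].add(ps[j]); adj[ps[j]].add(ps[i])
    for z in zs:                          # delete the conditioning set
        adj.pop(z, None)
        for s in adj.values():
            s.discard(z)
    frontier, seen = deque(xs), set(xs)   # BFS from xs
    while frontier:
        n = frontier.popleft()
        if n in ys:
            return False
        for m in adj.get(n, ()):
            if m not in seen:
                seen.add(m); frontier.append(m)
    return True

# Basic IV DAG: Z -> X, U -> X, U -> Y, X -> Y (U unobserved).
iv = {"X": ["Z", "U"], "Y": ["U", "X"]}
# Condition (ii): with the edge X -> Y removed, Z is d-separated from Y.
iv_cut = {"X": ["Z", "U"], "Y": ["U"]}
print(d_separated(iv_cut, {"Z"}, {"Y"}, set()))  # True  -> Z qualifies
print(d_separated(iv,     {"Z"}, {"Y"}, set()))  # False -> only via X
```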
13,862
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
One way to approach the problem is to start with the survival function. In order to have to wait at least $t$ minutes, you have to wait at least $t$ minutes for both the red and the blue train. Thus the overall survival function is just the product of the individual survival functions: $$ S(t) = \left( 1 - \frac{t}{10} \right) \left(1-\frac{t}{15} \right) $$ which, for $0 \le t \le 10$, is the probability that you'll have to wait at least $t$ minutes for the next train. This takes into account the clarification of the OP in a comment that the correct assumptions are that each train is on a fixed timetable independent of the other and of the traveller's arrival time, and that the phases of the two trains are uniformly distributed. Then the pdf is obtained as $$ p(t) = (1-S(t))' = \frac{1}{10} \left( 1- \frac{t}{15} \right) + \frac{1}{15} \left(1-\frac{t}{10} \right) $$ And the expected value is obtained in the usual way: $E[t] = \int_0^{10} t p(t) dt = \int_0^{10} \frac{t}{10} \left( 1- \frac{t}{15} \right) + \frac{t}{15} \left(1-\frac{t}{10} \right) dt = \int_0^{10} \left( \frac{t}{6} - \frac{t^2}{75} \right) dt$, which works out to $\frac{35}{9}$ minutes.
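A quick Monte Carlo check of the $35/9 \approx 3.889$ result (my own sketch, under the uniform, independent phase assumptions stated above):

```python
import numpy as np

rng = np.random.default_rng(6)
n = 1_000_000
# Uniform phases: the wait for the red train is U(0, 10), blue is U(0, 15),
# independently; the wait for the next train is the minimum of the two.
red = rng.uniform(0, 10, n)
blue = rng.uniform(0, 15, n)
wait = np.minimum(red, blue)
print(wait.mean(), 35 / 9)  # simulation agrees with 35/9 = 3.889
```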
13,863
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
The answer is $$E[t]=\int_x\int_y \min(x,y)\frac 1 {10} \frac 1 {15}dx dy=\int_x\left(\int_{y<x}ydy+\int_{y>x}xdy\right)\frac 1 {10} \frac 1 {15}dx$$ Get the parts inside the parentheses: $$\int_{y<x}ydy=y^2/2|_0^x=x^2/2$$ $$\int_{y>x}xdy=xy|_x^{15}=15x-x^2$$ So the part in parentheses is: $$(.)=\left(\int_{y<x}ydy+\int_{y>x}xdy\right)=15x-x^2/2$$ Finally, $$E[t]=\int_x (15x-x^2/2)\frac 1 {10} \frac 1 {15}dx= (15x^2/2-x^3/6)|_0^{10}\frac 1 {10} \frac 1 {15}\\= (1500/2-1000/6)\frac 1 {10} \frac 1 {15}=5-10/9\approx 3.89$$ Here's the MATLAB code to simulate:

nsim = 10000000;
red = rand(nsim,1)*10;
blue = rand(nsim,1)*15;
nextbus = min([red,blue],[],2);
mean(nextbus)
13,864
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
Assuming each train is on a fixed timetable independent of the other and of the traveller's arrival time, the probability neither train arrives in the first $x$ minutes is $\frac{10-x}{10} \times \frac{15-x}{15}$ for $0 \le x \le 10$, which when integrated gives $\frac{35}9\approx 3.889$ minutes Alternatively, assuming each train is part of a Poisson process, the joint rate is $\frac{1}{15}+\frac{1}{10}=\frac{1}{6}$ trains a minute, making the expected waiting time $6$ minutes
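The fixed-timetable integral can be verified exactly (my cross-check, not part of the original answer): expanding $(1-x/10)(1-x/15) = 1 - x/6 + x^2/150$ and integrating over $[0,10]$ gives $x - x^2/12 + x^3/450$ evaluated at $x=10$.

```python
# Exact evaluation of E[T] = ∫₀¹⁰ (10-x)/10 · (15-x)/15 dx in rational arithmetic.
from fractions import Fraction

x = Fraction(10)
expected_wait = x - x**2 / 12 + x**3 / 450  # antiderivative of 1 - x/6 + x²/150
print(expected_wait)  # 35/9
```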
13,865
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
I am probably wrong, but assuming that each train's starting time follows a uniform distribution, I would say that when arriving at the station at a random time the expected waiting time is: for the $R$ed train, $\mathbb{E}[R] = 5$ mins; for the $B$lue train, $\mathbb{E}[B] = 7.5$ mins; for whichever train comes first, $\mathbb{E}[\min(R,B)] =\frac{15}{10}(\mathbb{E}[B]-\mathbb{E}[R]) = \frac{15}{4} = 3.75$ mins. As pointed out in the comments, I understood "Both of them start from a random time" as "the two trains start at the same random time", which is a very limiting assumption.
13,866
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
Suppose that red and blue trains arrive on time according to schedule, with the red schedule beginning $\Delta$ minutes after the blue schedule, for some $0\le\Delta<10$. For definiteness suppose the first blue train arrives at time $t=0$. Assume for now that $\Delta$ lies between $0$ and $5$ minutes. Between $t=0$ and $t=30$ minutes we'll see the following trains and interarrival times: blue train, $\Delta$, red train, $10$, red train, $5-\Delta$, blue train, $\Delta + 5$, red train, $10-\Delta$, blue train. Then the schedule repeats, starting with that last blue train. If $W_\Delta(t)$ denotes the waiting time for a passenger arriving at the station at time $t$, then the plot of $W_\Delta(t)$ versus $t$ is piecewise linear, with each line segment decaying to zero with slope $-1$. So the average wait time is the area from $0$ to $30$ of an array of triangles, divided by $30$. This gives $$ \begin{align}\bar W_\Delta &:= \frac1{30}\left(\frac12[\Delta^2+10^2+(5-\Delta)^2+(\Delta+5)^2+(10-\Delta)^2]\right)\\&=\frac1{30}(2\Delta^2-10\Delta+125). \end{align}$$ Notice that in the above development there is a red train arriving $\Delta+5$ minutes after a blue train. Since the schedule repeats every 30 minutes, conclude $\bar W_\Delta=\bar W_{\Delta+5}$, and it suffices to consider $0\le\Delta<5$. If $\Delta$ is not constant, but instead a uniformly distributed random variable, we obtain an overall average waiting time of $$ \frac15\int_{\Delta=0}^5\frac1{30}(2\Delta^2-10\Delta+125)\,d\Delta=\frac{35}9.$$
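The final integral over the phase $\Delta$ can be checked exactly (my cross-check, not in the original answer), using the antiderivative $\frac{1}{30}\left(\frac{2\Delta^3}{3} - 5\Delta^2 + 125\Delta\right)$:

```python
# Exact evaluation of (1/5) ∫₀⁵ (1/30)(2Δ² - 10Δ + 125) dΔ in rational arithmetic.
from fractions import Fraction

d = Fraction(5)
inner = (Fraction(2, 3) * d**3 - 5 * d**2 + 125 * d) / 30  # ∫₀⁵ of the integrand
avg_wait = inner / 5
print(avg_wait)  # 35/9
```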
13,867
Expected value of waiting time for the first of the two buses running every 10 and 15 minutes
This is a Poisson process. The red train arrives according to a Poisson process with rate parameter 6/hour. The blue train also arrives according to a Poisson process, with rate 4/hour. Red train arrivals and blue train arrivals are independent, so the total number of train arrivals is also Poisson, with rate 10/hour, since the superposition of independent Poisson processes is again Poisson. The time between train arrivals is then exponential with mean 6 minutes, because the exponential mean is the reciprocal of the Poisson rate. Since the exponential distribution is memoryless, your expected wait time is 6 minutes.
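Under this answer's Poisson assumption (note the other answers use the fixed-timetable assumption instead), the claim is easy to simulate: the wait for the next train is the minimum of two independent exponentials with means 10 and 15 minutes, which is exponential with mean 6. A quick check of mine:

```python
# Simulate the wait as min of two independent exponential waits
# (means 10 and 15 minutes); the minimum is exponential with mean 6.
import random

random.seed(0)
n = 200_000
waits = [min(random.expovariate(1 / 10), random.expovariate(1 / 15)) for _ in range(n)]
mean_wait = sum(waits) / n
print(mean_wait)  # close to 6
```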
13,868
R package for Weighted Random Forest? classwt option?
This thread refers to two other threads and a fine article on this matter. It seems class weighting and downsampling are equally good. I use downsampling as described below. Remember the training set must be large, as only 1% will characterize the rare class. Fewer than 25~50 samples of this class will probably be problematic. Few samples characterizing the class will inevitably make the learned pattern crude and less reproducible. RF uses majority voting by default. The class prevalences of the training set will operate as some kind of effective prior. Thus unless the rare class is perfectly separable, it is unlikely this rare class will win a majority vote when predicting. Instead of aggregating by majority vote, you can aggregate vote fractions. Stratified sampling can be used to increase the influence of the rare class. This is done at the cost of downsampling the other classes. The grown trees will become less deep, as many fewer samples need to be split, therefore limiting the complexity of the potential pattern learned. The number of trees grown should be large, e.g. 4000, such that most observations participate in several trees. In the example below, I have simulated a training data set of 5000 samples with 3 classes with prevalences of 1%, 49% and 50% respectively. Thus there will be 50 samples of class 0. The first figure shows the true class of the training set as a function of two variables x1 and x2. Four models were trained: a default model, and three stratified models with 1:10:10, 1:2:2 and 1:1:1 stratification of classes. Meanwhile the number of in-bag samples (incl. redraws) in each tree will be 5000, 1050, 250 and 150. As I do not use majority voting I do not need to make a perfectly balanced stratification. Instead the votes on rare classes could be weighted 10 times, or some other decision rule could be used. Your cost of false negatives and false positives should influence this rule. The next figure shows how stratification influences the vote fractions.
Notice the stratified class ratios always form the centroid of predictions. Lastly you can use a ROC curve to find a voting rule which gives you a good trade-off between specificity and sensitivity. The black line is no stratification, red 1:10:10, green 1:2:2 and blue 1:1:1. For this data set 1:2:2 or 1:1:1 seems the best choice. By the way, the vote fractions here are out-of-bag cross-validated. And the code:

library(plotrix)
library(randomForest)
library(AUC)

make.data = function(obs=5000, vars=6, noise.factor=.2, smallGroupFraction=.01) {
  X = data.frame(replicate(vars, rnorm(obs)))
  yValue = with(X, sin(X1*pi) + sin(X2*pi*2) + rnorm(obs)*noise.factor)
  yQuantile = quantile(yValue, c(smallGroupFraction, .5))
  yClass = apply(sapply(yQuantile, function(x) x < yValue), 1, sum)
  yClass = factor(yClass)
  print(table(yClass)) # three classes, first class has 1% prevalence only
  Data = data.frame(X=X, y=yClass)
}

plot.separation = function(rf, ...) {
  triax.plot(rf$votes, ...,
             col.symbols = c("#FF0000FF", "#00FF0010", "#0000FF10")[as.numeric(rf$y)])
}

# make data set where class "0" (red circles) are rare observations
# class 0 is somewhat separable from class "1" and fully separable from class "2"
Data = make.data()
par(mfrow=c(1,1))
plot(Data[,1:2], main="separation problem: identify rare red circles",
     col = c("#FF0000FF","#00FF0020","#0000FF20")[as.numeric(Data$y)])

# train a default RF plus three stratified models (1:10:10, 1:2:2 and 1:1:1 upsampling of class 0)
rf1 = randomForest(y~., Data, ntree=500,  sampsize=5000)
rf2 = randomForest(y~., Data, ntree=4000, sampsize=c(50,500,500), strata=Data$y)
rf3 = randomForest(y~., Data, ntree=4000, sampsize=c(50,100,100), strata=Data$y)
rf4 = randomForest(y~., Data, ntree=4000, sampsize=c(50,50,50),   strata=Data$y)

# plot out-of-bag pluralistic predictions (vote fractions)
par(mfrow=c(2,2), mar=c(4,4,3,3))
plot.separation(rf1, main="no stratification")
plot.separation(rf2, main="1:10:10")
plot.separation(rf3, main="1:2:2")
plot.separation(rf4, main="1:1:1")

par(mfrow=c(1,1))
plot(roc(rf1$votes[,1], factor(1 * (rf1$y==0))), main="ROC curves for four models predicting class 0")
plot(roc(rf2$votes[,1], factor(1 * (rf1$y==0))), col=2, add=T)
plot(roc(rf3$votes[,1], factor(1 * (rf1$y==0))), col=3, add=T)
plot(roc(rf4$votes[,1], factor(1 * (rf1$y==0))), col=4, add=T)
13,869
How to predict or extend regression lines in ggplot2?
As @Glen mentions, you have to use a stat_smooth method that supports extrapolation, which loess does not; lm does, however. What you need to do is use the fullrange parameter of stat_smooth and expand the x-axis to include the range you want to predict over. I don't have your data, but here's an example using the mtcars dataset: ggplot(mtcars,aes(x=disp,y=hp)) + geom_point() + xlim(0,700) + stat_smooth(method="lm",fullrange=TRUE)
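The underlying idea (not specific to ggplot2) is just that a fitted linear model can be evaluated at any x, including beyond the data range. A minimal sketch of mine in Python with hypothetical data:

```python
# Fit a line to x in [0, 10], then extrapolate to x = 15 (outside the data).
import numpy as np

x = np.arange(11, dtype=float)      # 0..10
y = 2.0 * x + 1.0                   # exactly linear toy data
slope, intercept = np.polyfit(x, y, 1)
y_at_15 = slope * 15 + intercept
print(y_at_15)  # 31.0
```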
13,870
How to predict or extend regression lines in ggplot2?
You would have to predict the values for future observations outside of ggplot2 and then plot the predicted values; you could also get a confidence interval for these predictions. Look at the loess function, although I'm not sure it does predictions outside your data range; some smooth function surely does, however. That said, it is usually not wise to predict values outside your data range, and I would not put much trust in such predictions. You may want to investigate predicting values using a time series model instead.
13,871
Whether a AR(P) process is stationary or not?
Extract the roots of the characteristic polynomial $1 - \phi_1 z - \cdots - \phi_p z^p$. If all the roots are outside the unit circle, then the process is stationary. Model identification aids can be found on the web. Fundamentally, the pattern of the ACFs and the pattern of the PACFs are used to identify which model might be a good starting model. If there are more significant ACFs than significant PACFs, then an AR model is suggested, as the ACF is dominant. If the converse is true, where the PACF is dominant, then an MA model might be appropriate. The order of the model is suggested by the number of significant values in the subordinate one.
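The root check can be sketched numerically (my illustration, using the lag-polynomial convention stated above: all roots of $1 - \phi_1 z - \cdots - \phi_p z^p$ must lie outside the unit circle):

```python
import numpy as np

def ar_is_stationary(phis):
    """Stationarity check for y_t = phi_1 y_{t-1} + ... + phi_p y_{t-p} + e_t:
    all roots of 1 - phi_1 z - ... - phi_p z^p must satisfy |z| > 1."""
    # np.roots wants coefficients from highest degree down to the constant term
    coeffs = [-p for p in phis[::-1]] + [1]
    return all(abs(r) > 1 for r in np.roots(coeffs))

print(ar_is_stationary([0.5]))       # True  (root is z = 2)
print(ar_is_stationary([1.2]))       # False (root is z = 1/1.2, inside the circle)
print(ar_is_stationary([0.5, 0.3]))  # True
```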
13,872
Whether a AR(P) process is stationary or not?
If you have an AR(p) process like this: $$ y_t = c + \alpha_1 y_{t - 1} + \cdots + \alpha_p y_{t - p} $$ Then you can build an equation like this: $$ z^p - \alpha_1 z^{p - 1} - \cdots - \alpha_{p - 1} z - \alpha_p = 0 $$ Find the roots of this equation, and if all of them are less than 1 in absolute value, then the process is stationary.
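This convention (roots of $z^p - \alpha_1 z^{p-1} - \cdots - \alpha_p$ inside the unit circle) is the reciprocal of the lag-polynomial convention in the other answer; both give the same verdict. A small sketch of mine:

```python
import numpy as np

def is_stationary(alphas):
    # roots of z^p - alpha_1 z^{p-1} - ... - alpha_p must all have |z| < 1
    coeffs = [1] + [-a for a in alphas]
    return all(abs(r) < 1 for r in np.roots(coeffs))

print(is_stationary([0.5, 0.3]))  # True
print(is_stationary([1.1, 0.2]))  # False
```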
13,873
OLS is BLUE. But what if I don't care about unbiasedness and linearity?
Unbiased estimates are typical in introductory statistics courses because they are: 1) classic, 2) easy to analyze mathematically. The Cramer-Rao lower bound is one of the main tools for 2). Away from unbiased estimates there is possible improvement. The bias-variance trade off is an important concept in statistics for understanding how biased estimates can be better than unbiased estimates. Unfortunately, biased estimators are typically harder to analyze. In regression, much of the research in the past 40 years has been about biased estimation. This began with ridge regression (Hoerl and Kennard, 1970). See Frank and Friedman (1996) and Burr and Fry (2005) for some review and insights. The bias-variance tradeoff becomes more important in high-dimensions, where the number of variables is large. Charles Stein surprised everyone when he proved that in the Normal means problem the sample mean is no longer admissible if $p \geq 3$ (see Stein, 1956). The James-Stein estimator (James and Stein 1961) was the first example of an estimator that dominates the sample mean. However, it is also inadmissible. An important part of the bias-variance problem is determining how bias should be traded off. There is no single “best” estimator. Sparsity has been an important part of research in the past decade. See Hesterberg et al. (2008) for a partial review. Most of the estimators referenced above are non-linear in $Y$. Even ridge regression is non-linear once the data is used to determine the ridge parameter.
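The James–Stein domination mentioned above is easy to see by simulation. A sketch of mine (positive-part James–Stein, true mean at the origin where shrinkage helps most; away from the origin the gain shrinks but the estimator still dominates for $p \ge 3$):

```python
# Compare total squared error of the MLE (the observation itself) with the
# positive-part James-Stein estimator for a N(theta, I_p) mean, p = 10.
import random

random.seed(1)
p, sigma2, trials = 10, 1.0, 2000
theta = [0.0] * p
se_mle = se_js = 0.0
for _ in range(trials):
    x = [random.gauss(t, 1.0) for t in theta]
    s2 = sum(xi * xi for xi in x)
    shrink = max(0.0, 1 - (p - 2) * sigma2 / s2)  # positive-part JS factor
    se_mle += sum((xi - t) ** 2 for xi, t in zip(x, theta))
    se_js += sum((shrink * xi - t) ** 2 for xi, t in zip(x, theta))
print(se_js / trials, se_mle / trials)  # JS risk well below MLE risk (~10)
```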
13,874
OLS is BLUE. But what if I don't care about unbiasedness and linearity?
I don't know if you are OK with the Bayes estimate? If so, then depending on the loss function you can obtain different Bayes estimates. A theorem by Blackwell states that Bayes estimates are never unbiased (outside of degenerate cases). A decision-theoretic argument states that every admissible rule (i.e., a rule such that no other rule has risk at least as small for every parameter value and strictly smaller for some) is a (generalized) Bayes rule. James-Stein estimators are another class of estimators (which can be derived by Bayesian methods asymptotically) which are better than OLS in many cases. OLS can be inadmissible in many situations, and the James-Stein estimator is an example (also called Stein's paradox).
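A minimal illustration of the bias of Bayes estimates under squared-error loss (my example, conjugate normal model): with prior $\theta \sim N(0, \tau^2)$ and one observation $x \sim N(\theta, \sigma^2)$, the posterior mean is $w\,x$ with $w = \tau^2/(\tau^2 + \sigma^2) < 1$, so it is shrunk toward the prior mean and biased for any $\theta \ne 0$.

```python
# Posterior-mean shrinkage weight and its bias at a fixed true theta.
tau2, sigma2 = 4.0, 1.0
w = tau2 / (tau2 + sigma2)  # shrinkage weight, here 0.8
theta = 2.0
bias = w * theta - theta    # E[w x] - theta = (w - 1) * theta
print(w, bias)  # 0.8 -0.4
```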
13,875
OLS is BLUE. But what if I don't care about unbiasedness and linearity?
There is a nice review paper by Kay and Eldar on biased estimation for the purpose of finding estimators with minimum mean square error.
13,876
OLS is BLUE. But what if I don't care about unbiasedness and linearity?
“Best” in BLUE means the minimum variance among linear unbiased estimators. Variance is a non-negative quantity, so its lowest value is zero. If you estimate the coefficients by picking a constant every time, then your estimator has zero variance. In other words, just estimating the coefficients as zero every time, regardless of the data, is a zero-variance estimator. Further, this estimator is (trivially) a linear combination of elements of $y$, so this is a linear estimator. Somewhat related, it came as a great surprise to me that this approach need not result in an estimator that is “admissible” under squared loss. While this is technically a way of doing estimation, it is a silly approach by human standards. For this reason, I believe that it is important to put some conditions on what you want out of an estimator if you don’t just want the minimum-variance estimator. For instance, perhaps you’re willing to drop linearity and incur some bias, but then you demand that the estimator at least be consistent. Perhaps you do not demand consistency, but you want the minimum MSE. Perhaps you want the estimator that gives the lowest maximum risk across all possible parameter values (a so-called “minimax” estimator).
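To make the zero-variance point concrete, here is a small simulation sketch (a hypothetical setup; the model and coefficient values are just for illustration). The constant estimator literally has zero variance, but its MSE is pure squared bias and depends entirely on the true coefficients, while OLS pays variance but no bias:

```python
import numpy as np

rng = np.random.default_rng(1)
beta_true = np.array([2.0, -1.0])       # illustrative true coefficients
n, n_sims = 50, 2000

errs_ols, errs_zero = [], []
for _ in range(n_sims):
    X = rng.normal(size=(n, 2))
    y = X @ beta_true + rng.normal(size=n)
    b_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    errs_ols.append(np.sum((b_ols - beta_true) ** 2))
    errs_zero.append(np.sum((0.0 - beta_true) ** 2))  # "estimate zero every time"

# The constant estimator has zero variance, but its MSE is pure squared bias
# and equals ||beta_true||^2 no matter how much data we have:
print(np.mean(errs_ols), np.mean(errs_zero))
```

If `beta_true` happened to lie near zero the constant estimator would actually win on MSE, which is precisely why "minimum variance" with no further conditions is not a useful criterion on its own.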
13,877
Relationship between Cholesky decomposition and matrix inversion?
Gaussian process models often involve computing some quadratic form, such as $$ y = x^\top\Sigma^{-1}x $$ where $\Sigma$ is positive definite, $x$ is a vector of appropriate dimension, and we wish to compute scalar $y$. Typically, you don't want to compute $\Sigma^{-1}$ directly because of cost or loss of precision. Using a definition of Cholesky factor $L$, we know $\Sigma=LL^\top$. Because $\Sigma$ is PD, the diagonals of $L$ are also positive, which implies $L$ is non-singular. In this exposition, $L$ is lower-triangular. We can rewrite $$\begin{align} y &= x^\top(LL^\top)^{-1}x \\ &= x^\top L^{-\top}L^{-1}x \\ &= (L^{-1}x)^\top L^{-1}x \\ &= z^\top z \end{align} $$ The first to second line is an elementary property of a matrix inverse. The second to third line just rearranges the transpose. The final line re-writes it as an expression of vector dot-products, which is convenient because we only need to compute $z$ once. The nice thing about triangular matrices is that they're dirt-simple to solve, so we don't actually ever compute $L^{-1}x$; instead, we just use forward substitution for $Lz=x$, which is very cheap compared to computing an inverse directly.
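As a sketch of the two routes (forward substitution written out by hand for clarity; in practice one would call a library triangular solver, and the dimensions here are illustrative):

```python
import numpy as np

def forward_sub(L, b):
    """Solve L z = b for lower-triangular L by forward substitution."""
    z = np.zeros_like(b, dtype=float)
    for i in range(len(b)):
        z[i] = (b[i] - L[i, :i] @ z[:i]) / L[i, i]
    return z

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
Sigma = A @ A.T + 5.0 * np.eye(5)       # positive definite by construction
x = rng.normal(size=5)

# Direct route (avoid in practice): form the explicit inverse.
y_direct = x @ np.linalg.inv(Sigma) @ x

# Cholesky route: Sigma = L L^T, solve L z = x, then y = z . z
L = np.linalg.cholesky(Sigma)
z = forward_sub(L, x)
y_chol = z @ z

print(y_direct, y_chol)                 # agree up to floating-point error
```

The triangular solve costs $O(n^2)$ once the factor is available, versus $O(n^3)$ for forming the inverse, and it avoids the precision loss of explicitly inverting an ill-conditioned $\Sigma$.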
13,878
How to choose random- and fixed-effects structure in linear mixed models?
I'm not sure there's really a canonical answer to this, but I'll give it a shot. What is the recommended way to select the best fitting model in this context? When using log-likelihood ratio tests what is the recommended procedure? Generating models upwards (from null model to most complex model) or downwards (from most complex model to null model)? Stepwise inclusion or exclusion? Or is it recommended to put all models in one log-likelihood ratio test and select the model with the lowest p-value? How to compare models that are not nested? It depends what your goals are. In general you should be very, very careful about model selection (see e.g. this answer, or this post, or just Google "Harrell stepwise" ...). If you are interested in having your p-values be meaningful (i.e. you are doing confirmatory hypothesis testing), you should not do model selection. However: it's not so clear to me whether model selection procedures are quite as bad if you are doing model selection on non-focal parts of the model, e.g. doing model selection on the random effects if your primary interest is inference on the fixed effects. There's no such thing as "putting all the models in one likelihood ratio test" -- likelihood ratio testing is a pairwise procedure. If you wanted to do model selection (e.g.) on the random effects, I would probably recommend an "all at once" approach using information criteria as in this example -- that at least avoids some of the problems of stepwise approaches (but not of model selection more generally). Barr et al. 2013 "Keep it maximal" Journal of Memory and Language (doi:10.1016/j.jml.2012.11.001) would recommend using the maximal model (only). 
Shravan Vasishth disagrees, arguing that such models are going to be underpowered and hence problematic unless the data set is very large (and the signal-to-noise ratio is high). Another reasonably defensible approach is to fit a large but reasonable model and then, if the fit is singular, remove terms until it isn't any more. With some caveats (enumerated in the GLMM FAQ), you can use information criteria to compare non-nested models with differing random effects (although Brian Ripley disagrees: see bottom of p. 6 here). Is it recommended to first find the appropriate fixed-effects structure and then the appropriate random-effects structure or the other way round (I have found references for both options...)? I don't think anyone knows. See the previous answer about model selection more generally. If you could define your goals sufficiently clearly (which few people do), the question might be answerable. If you have references for both options, it would be useful to edit your question to include them ... (For what it's worth, this example (already quoted above) uses information criteria to select the random-effects part, then eschews selection on the fixed-effect part of the model.) What is the recommended way of reporting results? Reporting the p-value from the log-likelihood ratio test comparing the full mixed model (with the effect in question) to the reduced model (without the effect in question)? Or is it better to use the log-likelihood ratio test to find the best fitting model and then use lmerTest to report p-values for the effects in the best fitting model? This is (alas) another difficult question. If you report the marginal effects as reported by lmerTest, you have to worry about marginality (e.g., whether the estimates of the main effects of A and B are meaningful when there is an A-by-B interaction in the model); this is a huge can of worms, but it is somewhat mitigated if you use contrasts="sum" as recommended by afex::mixed(). Balanced designs help a little bit too.
If you really want to paper over all these cracks, I think I would recommend afex::mixed, which gives you output similar to lmerTest, but tries to deal with these issues.
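The pairwise nature of the likelihood ratio test can be sketched for two nested Gaussian linear models, the simplest analogue of comparing a mixed model with and without a given fixed effect (a toy setup; data and names are illustrative, not the lmer machinery):

```python
import math
import numpy as np

rng = np.random.default_rng(3)
n = 200
x = rng.normal(size=n)
y = 1.0 + 0.4 * x + rng.normal(size=n)  # x genuinely matters in this toy data

def rss(X, y):
    """Residual sum of squares of the least-squares fit of y on X."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

rss_reduced = rss(np.ones((n, 1)), y)                  # model without x
rss_full = rss(np.column_stack([np.ones(n), x]), y)    # model with x

# LRT statistic for nested Gaussian models: 2*(ll_full - ll_reduced)
# simplifies to n * log(RSS_reduced / RSS_full); asymptotically chi-square
# with df = difference in parameter count (1 here).
lrt = n * math.log(rss_reduced / rss_full)
p_value = math.erfc(math.sqrt(lrt / 2.0))  # chi-square(1) survival function
print(lrt, p_value)
```

Each test compares exactly one pair of nested fits, which is why "putting all the models in one likelihood ratio test" is not a meaningful operation.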
13,879
How to choose random- and fixed-effects structure in linear mixed models?
Update May 2017: As it turns out, a lot of what I have written here is kind of wrongish. Some updates are made throughout the post. I agree a lot with what has been said by Ben Bolker already (thanks for the shout-out to afex::mixed()) but let me add a few more general and specific thoughts on this issue. Focus on fixed versus random effects and how to report results For the type of experimental research that is represented in the example data set from Jonathan Baron that you use, the important question is usually whether or not a manipulated factor has an overall effect. For example, do we find an overall main effect or interaction of Task? An important point is that in those data sets usually all factors are under complete experimental control and randomly assigned. Consequently, the focus of interest is usually on the fixed effects. In contrast, the random effects components can be seen as "nuisance" parameters that capture systematic variance (i.e., inter-individual differences in the size of the effect) that are not necessarily important for the main question. From this point of view the suggestion of using the maximal random effects structure as advocated by Barr et al. follows somewhat naturally. It is easy to imagine that an experimental manipulation does not affect all individuals in the exact same way and we want to control for this. On the other hand, the number of factors or levels is usually not too large so that the danger of overfitting seems comparatively small. Consequently, I would follow the suggestion of Barr et al. and specify a maximal random effects structure and report tests of the fixed effects as my main results. 
To test the fixed effects I would also suggest to use afex::mixed() as it reports tests of effects or factors (instead of test of parameters) and calculates those tests in a somewhat sensible way (e.g., uses the same random effects structure for all models in which a single effect is removed, uses sum-to-zero-contrasts, offers different methods to calculate p-values, ...). What about the example data The problem with the example data you gave is that for this dataset the maximal random effects structure leads to an oversaturated model as there is only one data point per cell of the design: > with(df, table(Valence, Subject, Task)) , , Task = Cued Subject Valence Faye Jason Jim Ron Victor Neg 1 1 1 1 1 Neu 1 1 1 1 1 Pos 1 1 1 1 1 , , Task = Free Subject Valence Faye Jason Jim Ron Victor Neg 1 1 1 1 1 Neu 1 1 1 1 1 Pos 1 1 1 1 1 Consequently, lmer chokes on the maximal random effects structure: > lmer(Recall~Task*Valence + (Valence*Task|Subject), df) Error: number of observations (=30) <= number of random effects (=30) for term (Valence * Task | Subject); the random-effects parameters and the residual variance (or scale parameter) are probably unidentifiable Unfortunately, there is to my knowledge no agreed upon way to deal with this problem. But let me sketch and discuss some: A first solution could be to remove the highest random slope and test the effects for this model: require(afex) mixed(Recall~Task*Valence + (Valence+Task|Subject), df) Effect F ndf ddf F.scaling p.value 1 Task 6.56 1 4.00 1.00 .06 2 Valence 0.80 2 3.00 0.75 .53 3 Task:Valence 0.42 2 8.00 1.00 .67 However, this solution is a little ad-hoc and not overly motivated. Update May 2017: This is the approach I am currently endorsing. See this blog post and the draft of the chapter I am co-authoring, section "Random Effects Structures for Traditional ANOVA Designs". 
An alternative solution (and one that could be seen as advocated by Barr et al.'s discussion) could be to always remove the random slopes for the smallest effect. This has two problems though: (1) which random effects structure do we use to find out what the smallest effect is, and (2) R is reluctant to remove a lower-order effect such as a main effect if higher-order effects such as an interaction of this effect are present (see here). As a consequence one would need to set up this random effects structure by hand and pass the so-constructed model matrix to the lmer call. A third solution could be to use an alternative parametrization of the random effects part, namely one that corresponds to the RM-ANOVA model for this data. Unfortunately (?), lmer doesn't allow "negative variances" so this parameterization doesn't exactly correspond to the RM-ANOVA for all data sets; see discussion here and elsewhere (e.g. here and here). The "lmer-ANOVA" for these data would be: > mixed(Recall~Task*Valence + (1|Subject) + (1|Task:Subject) + (1|Valence:Subject), df) Effect F ndf ddf F.scaling p.value 1 Task 7.35 1 4.00 1.00 .05 2 Valence 1.46 2 8.00 1.00 .29 3 Task:Valence 0.29 2 8.00 1.00 .76 Given all these problems, I simply wouldn't use lmer for fitting data sets for which there is only one data point per cell of the design unless a more agreed-upon solution for the problem of the maximal random effects structure is available. Instead, one could still use the classical ANOVA. 
Using one of the wrappers to car::Anova() in afex the results would be: > aov4(Recall~Task*Valence + (Valence*Task|Subject), df) Effect df MSE F ges p 1 Valence 1.44, 5.75 4.67 1.46 .02 .29 2 Task 1, 4 4.08 7.35 + .07 .05 3 Valence:Task 1.63, 6.52 2.96 0.29 .003 .71 Note that afex now also allows to return the model fitted with aov which can be passed to lsmeans for post-hoc tests (but for test of effects the ones reported by car::Anova are still more reasonable): > require(lsmeans) > m <- aov4(Recall~Task*Valence + (Valence*Task|Subject), df, return = "aov") > lsmeans(m, ~Task+Valence) Task Valence lsmean SE df lower.CL upper.CL Cued Neg 11.8 1.852026 5.52 7.17157 16.42843 Free Neg 10.2 1.852026 5.52 5.57157 14.82843 Cued Neu 13.0 1.852026 5.52 8.37157 17.62843 Free Neu 11.2 1.852026 5.52 6.57157 15.82843 Cued Pos 13.6 1.852026 5.52 8.97157 18.22843 Free Pos 11.0 1.852026 5.52 6.37157 15.62843 Confidence level used: 0.95
13,880
Interpreting output from anova() when using lm() as input [duplicate]
The anova() function call returns an ANOVA table. You can use it to get an ANOVA table any time you want one. Thus, the question becomes, 'why might I want an ANOVA table when I can just get $t$-tests of my variables with standard output (i.e., the summary.lm() command)?' First of all, you may be perfectly satisfied with the summary output, and that's fine. However, the ANOVA table may offer some advantages. First, if you have a categorical / factor variable with more than two levels, the summary output is hard to interpret. It will give you tests of individual levels against the reference level, but won't give you a test of the factor as a whole. Consider: set.seed(8867) # this makes the example exactly reproducible y = c(rnorm(10, mean=0, sd=1), rnorm(10, mean=-.5, sd=1), rnorm(10, mean=.5, sd=1) ) g = rep(c("A", "B", "C"), each=10) model = lm(y~g) summary(model) # ... # Residuals: # Min 1Q Median 3Q Max # -2.59080 -0.54685 0.04124 0.79890 2.56064 # # Coefficients: # Estimate Std. Error t value Pr(>|t|) # (Intercept) -0.4440 0.3855 -1.152 0.260 # gB -0.9016 0.5452 -1.654 0.110 # gC 0.6729 0.5452 1.234 0.228 # # Residual standard error: 1.219 on 27 degrees of freedom # Multiple R-squared: 0.2372, Adjusted R-squared: 0.1807 # F-statistic: 4.199 on 2 and 27 DF, p-value: 0.02583 anova(model) # Analysis of Variance Table # # Response: y # Df Sum Sq Mean Sq F value Pr(>F) # g 2 12.484 6.2418 4.199 0.02583 * # Residuals 27 40.135 1.4865 Another reason you might prefer to look at an ANOVA table is that it allows you to use information about the possible associations between your independent variables and your dependent variable that gets thrown away by the $t$-tests in the summary output. Considering your own example, you may notice that the $p$-values from the two don't match (e.g., for v1, the $p$-value in the summary output is 0.93732, but in the ANOVA table it's 0.982400). 
The reason is that your variables are not perfectly uncorrelated: cor(my_data) # v1 v2 v3 response # v1 1.00000000 -0.23760679 -0.1312995 -0.00357923 # v2 -0.23760679 1.00000000 -0.2358402 0.06069167 # v3 -0.13129952 -0.23584024 1.0000000 0.32818751 # response -0.00357923 0.06069167 0.3281875 1.00000000 The result of this is that there are sums of squares that could be attributed to more than one of the variables. The $t$-tests are equivalent to 'type III' tests of the sums of squares, but other tests are possible. The default ANOVA table uses 'type I' sums of squares, which can allow you to make more precise--and more powerful--tests of your hypotheses. (This topic is fairly advanced, though, for more you may want to read my answer here: How to interpret type I (sequential) ANOVA and MANOVA?)
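The order dependence of type I (sequential) sums of squares is easy to demonstrate directly. The sketch below (a hypothetical numpy re-creation, not the R computation from the answer) credits a different sum of squares to v1 depending on whether it enters the model first or last; the $t$-tests in the summary output correspond to each variable entering last:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 300
v1 = rng.normal(size=n)
v2 = 0.6 * v1 + rng.normal(size=n)      # v2 is correlated with v1
y = 1.0 + 0.5 * v1 + 0.5 * v2 + rng.normal(size=n)

def rss(*cols):
    """RSS of regressing y on an intercept plus the given columns."""
    X = np.column_stack([np.ones(n), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return r @ r

# Sequential ("type I") sum of squares credited to v1 depends on entry order:
ss_v1_first = rss() - rss(v1)           # v1 enters first (default anova order)
ss_v1_last = rss(v2) - rss(v2, v1)      # v1 enters last (what the t-test sees)
print(ss_v1_first, ss_v1_last)
```

With uncorrelated predictors the two quantities would coincide, which is why the discrepancy only shows up when the design is not orthogonal.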
Interpreting output from anova() when using lm() as input [duplicate]
The anova() function call returns an ANOVA table. You can use it to get an ANOVA table any time you want one. Thus, the question becomes, 'why might I want an ANOVA table when I can just get $t$-tes
Interpreting output from anova() when using lm() as input

The anova() function call returns an ANOVA table. You can use it to get an ANOVA table any time you want one. Thus, the question becomes, 'why might I want an ANOVA table when I can just get $t$-tests of my variables with standard output (i.e., the summary.lm() command)?' First of all, you may be perfectly satisfied with the summary output, and that's fine. However, the ANOVA table may offer some advantages. First, if you have a categorical / factor variable with more than two levels, the summary output is hard to interpret. It will give you tests of individual levels against the reference level, but won't give you a test of the factor as a whole. Consider:

set.seed(8867)  # this makes the example exactly reproducible
y = c(rnorm(10, mean=0,   sd=1),
      rnorm(10, mean=-.5, sd=1),
      rnorm(10, mean=.5,  sd=1) )
g = rep(c("A", "B", "C"), each=10)
model = lm(y~g)

summary(model)
# ...
# Residuals:
#      Min       1Q   Median       3Q      Max 
# -2.59080 -0.54685  0.04124  0.79890  2.56064 
# 
# Coefficients:
#             Estimate Std. Error t value Pr(>|t|)
# (Intercept)  -0.4440     0.3855  -1.152    0.260
# gB           -0.9016     0.5452  -1.654    0.110
# gC            0.6729     0.5452   1.234    0.228
# 
# Residual standard error: 1.219 on 27 degrees of freedom
# Multiple R-squared:  0.2372, Adjusted R-squared:  0.1807 
# F-statistic: 4.199 on 2 and 27 DF,  p-value: 0.02583

anova(model)
# Analysis of Variance Table
# 
# Response: y
#           Df Sum Sq Mean Sq F value  Pr(>F)  
# g          2 12.484  6.2418   4.199 0.02583 *
# Residuals 27 40.135  1.4865

Another reason you might prefer to look at an ANOVA table is that it allows you to use information about the possible associations between your independent variables and your dependent variable that gets thrown away by the $t$-tests in the summary output. Consider your own example: you may notice that the $p$-values from the two don't match (e.g., for v1, the $p$-value in the summary output is 0.93732, but in the ANOVA table it's 0.982400). The reason is that your variables are not perfectly uncorrelated:

cor(my_data)
#                   v1          v2         v3    response
# v1        1.00000000 -0.23760679 -0.1312995 -0.00357923
# v2       -0.23760679  1.00000000 -0.2358402  0.06069167
# v3       -0.13129952 -0.23584024  1.0000000  0.32818751
# response -0.00357923  0.06069167  0.3281875  1.00000000

The result of this is that there are sums of squares that could be attributed to more than one of the variables. The $t$-tests are equivalent to 'type III' tests of the sums of squares, but other tests are possible. The default ANOVA table uses 'type I' sums of squares, which can allow you to make more precise--and more powerful--tests of your hypotheses. (This topic is fairly advanced, though; for more you may want to read my answer here: How to interpret type I (sequential) ANOVA and MANOVA?)
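The type I vs. type III distinction can also be seen by computing the sums of squares directly. The sketch below is in Python rather than R, purely for concreteness, and uses simulated data with hypothetical correlated predictors v1 and v2 (only numpy is assumed): the sequential SS for v1 (entered first) differs from its marginal SS (entered last) precisely because the predictors are correlated.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# hypothetical correlated predictors: v2 shares variance with v1
v1 = rng.normal(size=n)
v2 = 0.5 * v1 + rng.normal(size=n)
y = 1.0 + 0.8 * v2 + rng.normal(size=n)

def rss(X, y):
    """Residual sum of squares from a least-squares fit."""
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta
    return float(r @ r)

ones = np.ones((n, 1))
X1  = np.column_stack([ones, v1])
X2  = np.column_stack([ones, v2])
X12 = np.column_stack([ones, v1, v2])

# Type I (sequential) SS for v1: reduction in RSS when v1 enters first
ss_v1_sequential = rss(ones, y) - rss(X1, y)
# Type III-style (marginal) SS for v1: reduction in RSS when v1 enters last
ss_v1_marginal = rss(X2, y) - rss(X12, y)

print(ss_v1_sequential, ss_v1_marginal)  # differ because v1 and v2 are correlated
```

With uncorrelated predictors the two quantities would coincide; here the shared variance between v1 and v2 is credited to v1 in the sequential decomposition but not in the marginal one.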
13,881
Superiority of LASSO over forward selection/backward elimination in terms of the cross validation prediction error of the model
The LASSO and forward/backward model selection both have strengths and limitations. No far-sweeping recommendation can be made; simulation can always be explored to address this.

Both can be understood in terms of dimensionality: $p$, the number of model parameters, and $n$, the number of observations. If you were able to fit models using backward model selection, you probably didn't have $p \gg n$. In that case, the "best fitting" model is the one using all parameters... when validated internally! This is simply a matter of overfitting.

Overfitting is remedied using split-sample cross validation (CV) for model evaluation. Since you didn't describe this, I assume you didn't do it. Unlike stepwise model selection, LASSO uses a tuning parameter to penalize the number of parameters in the model. You can fix the tuning parameter, or use a complicated iterative process to choose this value. By default, LASSO does the latter. This is done with CV so as to minimize the MSE of prediction. I am not aware of any implementation of stepwise model selection that uses such sophisticated techniques; even the BIC as a criterion would suffer from internal validation bias. By my account, that automatically gives LASSO leverage over "out-of-the-box" stepwise model selection.

Lastly, stepwise model selection can have different criteria for including/excluding different regressors. If you use the $p$-values for the specific model parameters' Wald tests or the resultant model $R^2$, you will not do well, mostly because of internal validation bias (again, this could be remedied with CV). I find it surprising that this is still the way such models tend to be implemented. AIC or BIC are much better criteria for model selection.

There are a number of problems with each method. Stepwise model selection's problems are much better understood, and far worse than those of LASSO. The main problem I see with your question is that you are using feature selection tools to evaluate prediction. They are distinct tasks. LASSO is better for feature selection or sparse model selection; ridge regression may give better prediction since it uses all variables.

LASSO's great strength is that it can estimate models in which $p \gg n$, as can forward (but not backward) stepwise regression. In both cases, these models can be effective for prediction only when there is a handful of very powerful predictors. If an outcome is better predicted by many weak predictors, then ridge regression or bagging/boosting will outperform both forward stepwise regression and LASSO by a long shot. LASSO is also much faster than forward stepwise regression.

There is obviously a great deal of overlap between feature selection and prediction, but I needn't tell you how well a wrench serves as a hammer. In general, for prediction with a sparse number of model coefficients and $p \gg n$, I would prefer LASSO over forward stepwise model selection.
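The reason LASSO selects features while ridge does not can be seen in closed form. Under an orthonormal design, each lasso coefficient is the soft-thresholded OLS estimate, whereas ridge merely rescales it; a quick sketch (Python, with made-up coefficient values):

```python
import numpy as np

def soft_threshold(b, lam):
    # lasso solution per coefficient under an orthonormal design
    return np.sign(b) * np.maximum(np.abs(b) - lam, 0.0)

ols = np.array([2.5, 0.9, 0.08, -0.05])  # hypothetical OLS estimates
lam = 0.3

lasso = soft_threshold(ols, lam)  # weak coefficients are set exactly to zero
ridge = ols / (1 + lam)           # ridge shrinks every coefficient, zeroes none

print(lasso)  # last two entries are exactly zero
print(ridge)  # all entries remain nonzero
```

This is why LASSO performs sparse model selection as a by-product of estimation, while ridge keeps every variable in play.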
13,882
Superiority of LASSO over forward selection/backward elimination in terms of the cross validation prediction error of the model
You want to choose a subset of predictors according to some criterion. It might be in-sample AIC or adjusted $R^2$, or cross-validation; it doesn't matter.

You could test every single predictor subset combination and pick the best subset. However:
- Very time-consuming due to the combinatorial explosion of parameters.
- Works if you have more parameters than observations, in the sense that you test all predictor combinations that give a solution.

You could use forward stepwise selection:
- Less time-consuming, but may not get the absolute best combination, especially when predictors are correlated (it may pick one predictor and then be unable to get further improvement, when adding 2 other predictors would have shown improvement).
- Works even when you have more parameters than observations.

You could use backward elimination:
- Doesn't work if you have more parameters than observations, and there is no single good starting point (in theory you could start from all valid starting points, work backwards, and pick the best one, but that's not what is normally meant by backward elimination).
- Like forward stepwise, less time-consuming than all subsets, but may not get the absolute best combination, especially when predictors are correlated.

You could use LASSO:
- Works even when you have more parameters than observations.
- CPU-efficient when you have many parameters and a combinatorial explosion of subsets.
- Adds regularization.

As to your question of why LASSO performs better on your data in CV:
- One possibility is the path-dependency described above: LASSO may find a better subset. Perhaps it got lucky, perhaps LASSO generally/sometimes gets better subsets; I'm not sure. Perhaps there is literature on the subject.
- Another (more likely) possibility is that the LASSO regularization prevents overfitting, so LASSO performs better in CV / out of sample.

Bottom line: LASSO gives you regularization and efficient subset selection, especially when you have a lot of predictors.

BTW, you can do LASSO and select your model using CV (most common), but also using AIC or some other criterion: run your model with L1 regularization and no constraint, then gradually tighten the constraint until AIC reaches a minimum, or CV error, or the criterion of your choice. See http://scikit-learn.org/stable/auto_examples/linear_model/plot_lasso_model_selection.html
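Both tuning strategies (information criterion along the path, and cross-validated prediction error) are available in scikit-learn; a minimal sketch along the lines of the linked example, on simulated data with a hypothetical sparse truth of 3 real predictors out of 20:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LassoLarsIC

rng = np.random.default_rng(0)
n, p = 100, 20
X = rng.normal(size=(n, p))
true_coef = np.zeros(p)
true_coef[:3] = [3.0, -2.0, 1.5]          # sparse truth: only 3 real predictors
y = X @ true_coef + rng.normal(size=n)

# tune the L1 penalty by AIC along the LARS path...
aic = LassoLarsIC(criterion="aic").fit(X, y)
# ...or by cross-validated prediction error (the more common choice)
cv = LassoCV(cv=5).fit(X, y)

print((aic.coef_ != 0).sum(), (cv.coef_ != 0).sum())  # selected subset sizes
```

With signal this strong, both tuning rules retain the three true predictors; CV-tuned LASSO often keeps a few extra weak ones, since it optimizes prediction error rather than subset recovery.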
13,883
MCMC on a bounded parameter space?
You have several nice, more-or-less simple, options. Your uniform prior helps make them simpler.

Option 1: Independence sampler. You can just set your proposal distribution equal to a uniform distribution over the unit square, which ensures that samples won't fall outside the restricted zone, as you call it. Potential downside: if the posterior is concentrated in a very small region of the unit square, you may have a very low acceptance rate. OTOH, it's hard to generate random numbers faster than from a U(0,1) distribution. Potential upside: less work for you.

Option 2: Transform your parameters to something that isn't bounded, make proposals for the transformed parameters, then transform the parameters back for use in the likelihood functions. Note that in this case the prior is going to be on the transformed parameters, because that's what you're making proposals for, so you'll have to mess with the Jacobian of the transform to get the new prior. For your analysis, of course, you'll transform the MCMC-generated parameter random numbers back to the original parameters. Potential downside: more initial work for you. Potential upside: better acceptance rate for your proposals.

Option 3: Construct a proposal distribution other than an independence sampler that is on the unit square. This allows you to keep your uniform prior, but at the cost of greater complexity when calculating the proposal probabilities. An example of this, letting $x$ be the current value of one of your parameters, would be a Beta distribution with parameters $(nx, n(1-x))$. The larger $n$ is, the more concentrated your proposal will be around the current value. Potential downside: more initial work for you. Potential upside: better acceptance rate for your proposals - but if you make $n$ too large and move near to a corner, you might wind up making lots of small moves in the corner before getting out.

Option 4: Just reject any proposals that fall outside the unit square (Xian's half-hearted suggestion). Note that this is not the same as just generating another proposal; in this case you are rejecting the proposal, which means your next value for the parameter is the same as the current value for the parameter. This works because it's what would happen if you had a zero prior probability for some region of your parameter space and generated a random number that fell in that region. Potential downside: if you get near a corner, you may have a low acceptance probability and get stuck for a while. Potential upside: less work for you.

Option 5: Create an extended problem on the plane which, on the unit square, is the same as the actual problem you face, do everything right, then, when post-processing the results of the MCMC sampling, throw out all the samples outside of the unit square. Potential upside: if it's very easy to create that extended problem, it may be less work for you. Potential downside: if the Markov chain wanders off somewhere outside the unit square for a while, you may have, in effect, horrible acceptance probabilities, as you will throw out most of your samples.

No doubt there are other options; I'd be interested to see what other people suggest! The difference between 2 and 3 is to some extent conceptual, although with real implications for what you actually do. I'd probably go with 3, as I'd just let R tell me what the proposal probabilities are (if I'm programming in R), and the amount of extra effort, aside from some tuning of the proposal distribution parameter $n$, looks small to me. If I were using JAGS or BUGS, of course, that would be a whole different matter, since those tools handle their own proposals.
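Option 3 can be sketched concretely. The code below (Python with scipy; a 1-D stand-in with a hypothetical unnormalized target density on (0,1), not the asker's actual posterior) runs Metropolis-Hastings with the Beta$(nx, n(1-x))$ proposal. Since the proposal is asymmetric, the acceptance ratio must include the Hastings correction $q(x\mid x')/q(x'\mid x)$:

```python
import numpy as np
from scipy.stats import beta

rng = np.random.default_rng(1)

def target(x):
    # hypothetical unnormalized posterior on (0, 1); a stand-in for the real one
    return x**2 * (1 - x)**5

def mh_beta(n_prop=50.0, iters=5000, x0=0.5):
    x = x0
    samples = np.empty(iters)
    for i in range(iters):
        # Beta(n*x, n*(1-x)) proposal, concentrated around the current value
        prop = rng.beta(n_prop * x, n_prop * (1 - x))
        # asymmetric proposal, so include the Hastings correction q(x|x')/q(x'|x)
        q_fwd = beta.pdf(prop, n_prop * x, n_prop * (1 - x))
        q_rev = beta.pdf(x, n_prop * prop, n_prop * (1 - prop))
        if rng.uniform() < target(prop) * q_rev / (target(x) * q_fwd):
            x = prop
        samples[i] = x
    return samples

s = mh_beta()  # every draw stays inside (0, 1) by construction
```

Every proposal lands inside the unit interval automatically, so no boundary rejection logic is needed; tuning n_prop trades off step size against acceptance rate, exactly as described above.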
13,884
How do I interpret Exp(B) in Cox regression?
Generally speaking, $\exp(\hat\beta_1)$ is the ratio of the hazards between two individuals whose values of $x_1$ differ by one unit when all other covariates are held constant. The parallel with other linear models is that in Cox regression the hazard function is modeled as $h(t)=h_0(t)\exp(\beta'x)$, where $h_0(t)$ is the baseline hazard. This is equivalent to saying that $\log(\text{group hazard}/\text{baseline hazard})=\log\big(h(t)/h_0(t)\big)=\sum_i\beta_ix_i$. Then, a unit increase in $x_i$ is associated with a $\beta_i$ increase in the log hazard rate. The regression coefficient thus allows you to quantify the log of the hazard in the treatment group (compared to the control or placebo group), accounting for the covariates included in the model; it is interpreted as a relative risk (assuming no time-varying coefficients). In the case of logistic regression, the regression coefficient reflects the log of the odds ratio, hence the interpretation as a $k$-fold increase in risk. So yes, the interpretation of hazard ratios shares some resemblance with the interpretation of odds ratios. Be sure to check Dave Garson's website, where there is some good material on Cox regression with SPSS.
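Because the model is log-linear, hazard ratios compound multiplicatively across unit increases in a covariate; a trivial numeric check (with a made-up coefficient, just to illustrate the algebra):

```python
import math

# hypothetical fitted log-hazard coefficient for a covariate x1
beta1 = 0.23

hr_1 = math.exp(beta1)       # hazard ratio for a 1-unit increase in x1
hr_5 = math.exp(5 * beta1)   # a 5-unit increase multiplies the hazard 5 times over

# the log-linear structure makes hazard ratios multiply across unit increases:
# exp(k * beta) == exp(beta) ** k
print(hr_1, hr_5)
```

So a table entry Exp(B) = 1.259, say, means each additional unit of that covariate multiplies the hazard by 1.259, and $k$ units multiply it by $1.259^k$.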
13,885
How do I interpret Exp(B) in Cox regression?
I am not a statistician but an MD, trying to sort things out in the world of statistics. The way you have to interpret this output is by looking at the $\exp(B)$ values. A value of < 1 says that an increase of one unit in that particular variable will decrease the probability of experiencing an end point throughout the observation period. By inverting (that is, $1/\exp(B)$), you will find the "protective effect". For example, if $\exp(B) = 0.407$ (as is the case for your "Gender" variable), the interpretation will be that having the value Gender = 1 decreases the probability of experiencing an end point by a factor of $1/0.407 = 2.46$, compared to when the Gender value = 0. For $\exp(B) > 1$, the interpretation is even easier: a value of, say, $\exp(B) = 1.259$ (as is the case for your "stenosis" variable) means that scoring "stenosis" = 1 results in an increased probability (25.9%) of experiencing an end point compared to when "stenosis" = 0. The confidence interval (CI) tells us within which range (with 95% probability) we can expect this value to fall if we were to repeat this survey an infinite number of times. If the 95% CI overlaps the value of 1, then the result is not statistically significant (since $\exp(B) = 1$ means that there is no difference in the probability of experiencing an end point between variable values of "0" and "1"), and the P value will exceed 0.05. If the 95% CI stays away from the value 1 (on either side), the $\exp(B)$ is statistically significant. From your analysis, it seems that none of your variables is a significant predictor (at a significance level of 5%) of your endpoint, although being a "high risk" patient is of borderline significance. Reading the book "SPSS Survival Manual" by Julie Pallant will probably enlighten you further on this (and more) topic(s).
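The two arithmetic steps above are worth making explicit; a one-line check in Python, using the Exp(B) values quoted from the output (0.407 for Gender, 1.259 for stenosis):

```python
exp_b_gender = 0.407
protective_factor = 1 / exp_b_gender            # ~2.46-fold lower risk per unit

exp_b_stenosis = 1.259
percent_increase = (exp_b_stenosis - 1) * 100   # ~25.9% higher risk per unit

print(round(protective_factor, 2), round(percent_increase, 1))
```

The same inversion works for any Exp(B) < 1, and the percentage reading works for any Exp(B) > 1.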
13,886
Conflict between Poisson confidence interval and p-value
There are several ways to define two-sided $p$-values in this case. Michael Fay lists three in his article; the following is mostly taken from it.

Suppose you have a discrete test statistic $t$ with random variable $T$ such that larger values of $T$ imply larger values of a parameter of interest, $\theta$. Let $F_\theta(t)=\Pr[T\leq t;\theta]$ and $\bar{F}_\theta(t)=\Pr[T\geq t;\theta]$. Suppose the null value is $\theta_0$. The one-sided $p$-values are then $F_{\theta_0}(t)$ and $\bar{F}_{\theta_0}(t)$, respectively. The three ways listed to define two-sided $p$-values are as follows:

$\textbf{central:}$ $p_{c}$ is 2 times the minimum of the one-sided $p$-values, bounded above by 1:
$$
p_c=\min\{1,2\times\min(F_{\theta_0}(t), \bar{F}_{\theta_0}(t))\}.
$$

$\textbf{minlike:}$ $p_{m}$ is the sum of probabilities of outcomes with likelihoods less than or equal to the observed likelihood:
$$
p_m=\sum_{T:f(T)\leq f(t)} f(T)
$$
where $f(t) = \Pr[T=t;\theta_0]$.

$\textbf{blaker:}$ $p_b$ combines the probability of the smaller observed tail with the smallest probability of the opposite tail that does not exceed that observed probability. This may be expressed as:
$$
p_b=\Pr[\gamma(T)\leq\gamma(t)]
$$
where $\gamma(T)=\min\{F_{\theta_0}(T), \bar{F}_{\theta_0}(T)\}$.

If $p(\theta_0)$ is a two-sided $p$-value testing $H_0:\theta=\theta_0$, then its $100(1-\alpha)\%$ matching confidence interval is the smallest interval that contains all $\theta_0$ such that $p(\theta_{0})>\alpha$. The matching confidence limits for the $\textbf{central}$ test are $(\theta_{L},\theta_U)$, the solutions to
$$
\alpha/2=\bar{F}_{\theta_L}(t)
$$
and
$$
\alpha/2=F_{\theta_U}(t).
$$

The contradiction arises because poisson.test returns $p_m$ ($\textrm{minlike}$) as the $p$-value but confidence limits that are based on the $\textrm{central}$ test! The exactci package returns the correct matching $p$-values and confidence limits (you can set the method using the option tsmethod):

library(exactci)
poisson.exact(x=10, r=5.22, tsmethod = "central")

    Exact two-sided Poisson test (central method)

data:  10 time base: 1
number of events = 10, time base = 1, p-value = 0.08105
alternative hypothesis: true event rate is not equal to 5.22
95 percent confidence interval:
  4.795389 18.390356
sample estimates:
event rate 
        10 

Now there is no conflict between the $p$-value and the confidence interval. In rare cases, even the exactci function will result in inconsistencies, which is mentioned in Michael Fay's article.
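To see exactly where the conflict comes from, the central and minlike $p$-values can be computed directly from the Poisson distribution; a sketch in Python with scipy, for the same data ($x = 10$ events against a null rate of 5.22):

```python
from scipy.stats import poisson

x, rate = 10, 5.22

# central: twice the smaller one-sided tail probability, capped at 1
p_central = min(1.0, 2 * min(poisson.cdf(x, rate), poisson.sf(x - 1, rate)))

# minlike: total probability of all outcomes no more likely than the observed one
f_obs = poisson.pmf(x, rate)
p_minlike = sum(poisson.pmf(k, rate) for k in range(200)  # truncate far in the tail
                if poisson.pmf(k, rate) <= f_obs)

# minlike rejects at the 5% level while central does not: the apparent conflict
print(round(p_central, 5), round(p_minlike, 5))
```

The minlike value sits below 0.05 while the central value (matching the central confidence interval) sits above it, which is precisely the mismatch produced by poisson.test.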
Conflict between Poisson confidence interval and p-value
There are several ways to define two-sided $p$-values in this case. Michael Fay lists three in his article. The following is mostly taken from his article. Suppose you have a discrete test statistic $
Conflict between Poisson confidence interval and p-value There are several ways to define two-sided $p$-values in this case. Michael Fay lists three in his article. The following is mostly taken from his article. Suppose you have a discrete test statistic $t$ with random variable $T$ such that larger values of $T$ imply larger values of a parameter of interest, $\theta$. Let $F_\theta(t)=\Pr[T\leq t;\theta]$ and $\bar{F}_\theta(t)=\Pr[T\geq t;\theta]$. Suppose the null value is $\theta_0$. The one-sided $p$-values are then denoted by $F_{\theta_0}(t), \bar{F}_{\theta_0}(t)$, respectively. The three ways listed to define two-sided $p$-values are as follows: $\textbf{central:}$ $p_{c}$ is 2 times the minimum of the one-sided $p$-values bounded above by 1: $$ p_c=\min\{1,2\times\min(F_{\theta_0}(t), \bar{F}_{\theta_0}(t))\}. $$ $\textbf{minlike:}$ $p_{m}$ is the sum of probabilities of outcomes with likelihoods less than or equal to the observed likelihood: $$ p_m=\sum_{T:f(T)\leq f(t)} f(T) $$ where $f(t) = \Pr[T=t;\theta_0]$. $\textbf{blaker:}$ $p_b$ combines the probability of the smaller observed tail with the smallest probability of the opposite tail that does not exceed that observed probability. This may be expressed as: $$ p_b=\Pr[\gamma(T)\leq\gamma(t)] $$ where $\gamma(T)=\min\{F_{\theta_0}(T), \bar{F}_{\theta_0}(T))\}$. If $p(\theta_0)$ is a two-sided $p$-value testing $H_0:\theta=\theta_0$, then its $100(1-\alpha)\%$ matching confidence interval is the smallest interval that contains all $\theta_0$ such that $p(\theta_{0})>\alpha$. The matching confidence limits to the $\textbf{central}$ test are $(\theta_{L},\theta_U)$ which are the solutions to: $$ \alpha/2=\bar{F}_{\theta_L}(t) $$ and $$ \alpha/2=F_{\theta_U}(t). $$ The contradiction arises because poisson.test returns $p_m$ ($\textrm{minlike}$) as the $p$-value but confidence limits that are based on the $\textrm{central}$ test! 
The exactci package returns the correct matching $p$-values and confidence limits (you can set the method using the option tsmethod):

library(exactci)
poisson.exact(x=10, r=5.22, tsmethod = "central")

    Exact two-sided Poisson test (central method)

data:  10 time base: 1
number of events = 10, time base = 1, p-value = 0.08105
alternative hypothesis: true event rate is not equal to 5.22
95 percent confidence interval:
  4.795389 18.390356
sample estimates:
event rate
        10

Now there is no conflict between the $p$-value and the confidence intervals. In rare cases, even the exactci function will result in inconsistencies, which is mentioned in Michael Fay's article.
Conflict between Poisson confidence interval and p-value
The correct exact two-sided 95% confidence interval $[\lambda^{-},\lambda^{+}]$ is computed from an observation $x$ of a Poisson variable $X$ using the defining relationships $$\Pr(X\ge x;\lambda^{-}) = \alpha/2$$ and $$\Pr(X \gt x; \lambda^{+}) = 1 - \alpha/2.$$ We may find these limits by exploiting $$e^{-\lambda}\sum_{i=0}^{x}\frac{\lambda^i}{i!} = F_{\text{Poisson}}(x;\lambda) = 1 - F_\Gamma(\lambda;x+1) = \frac{1}{x!}\int_\lambda^\infty t^x e^{-t}\,\mathrm{d}t$$ for natural numbers $x.$ (You can prove this inductively via repeated integrations by parts on the right hand side, or you can observe that the left probability is the chance of observing $x$ or fewer points in a homogeneous, unit-rate Poisson process running for time $\lambda,$ while the right probability is the chance that it takes more than $\lambda$ time to observe the $x+1^\text{st}$ point -- which obviously is the same event.) Thus, writing $G=F_\Gamma^{-1}$ for the Gamma quantile function, the confidence interval is $$\left[G(\alpha/2;x), G(1-\alpha/2;x+1)\right].$$ The discreteness in the defining inequalities -- that is, the distinction between strict and non-strict inequality -- is to blame for the apparent inconsistency with the p-value. Indeed, in most circumstances replacing the lower limit by $G(\alpha/2;x+1)$ actually gives better coverage, as simulations show. Here, for instance, are simulations in R that estimate the coverages of these two procedures.

f <- function(x, alpha=0.05) qgamma(c(alpha/2, 1-alpha/2), c(x, x+1))
z <- 10
x <- matrix(rpois(2e6, f(z)), 2)
mean(x[1,] <= z & z <= x[2,])

The output, which is identical to that of poisson.test, will be close to 97.7% coverage. The altered interval is

f. <- function(x, alpha=0.05) qgamma(c(alpha/2, 1-alpha/2), x+1)
x <- matrix(rpois(2e6, f.(z)), 2)
mean(x[1,] <= z & z <= x[2,])

The output will be close to 96.3% coverage -- closer to the nominal 95% level. The problem with this somewhat ad hoc modification is that it fails when the true rate is tiny.
In the same simulation with a true rate of $1/10$ rather than $10,$ the coverage of the correct interval is around 98% but that of the modified interval is only 94.4%. If your objective is to achieve 95% or higher coverage -- not going any lower -- then this is unacceptable. For many applications, especially when very small values of the parameter are highly unlikely, the modified interval has much to recommend it and will produce results more consistent with the p-value.

Reference

Hahn, GJ and WQ Meeker, Statistical Intervals. Wiley, 1991. Their formula (7.1), expressed in terms of quantiles of chi-squared distributions, is equivalent to the one I give in terms of Gamma distributions. (Chi-squared distributions with $2x$ degrees of freedom are scaled versions of Gamma distributions with $x$ degrees of freedom.)
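The closed-form interval $[G(\alpha/2;x), G(1-\alpha/2;x+1)]$ can be recovered with nothing but the Gamma-Poisson identity above, since for integer shape the Gamma CDF is a Poisson tail probability. A sketch (Python used here for illustration; the function names are mine):

```python
import math

def pois_cdf(k, lam):
    # Pr[X <= k] for X ~ Poisson(lam); k is small here, so direct summation is fine
    return sum(math.exp(i * math.log(lam) - lam - math.lgamma(i + 1))
               for i in range(k + 1))

def gamma_quantile(p, shape):
    # Quantile of Gamma(shape, 1) via the identity
    # F_Gamma(lam; shape) = 1 - F_Poisson(shape - 1; lam), solved by bisection
    lo, hi = 1e-9, 200.0
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if 1.0 - pois_cdf(shape - 1, mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def exact_poisson_ci(x, alpha=0.05):
    # [G(alpha/2; x), G(1 - alpha/2; x + 1)]
    return gamma_quantile(alpha / 2, x), gamma_quantile(1 - alpha / 2, x + 1)

lo, hi = exact_poisson_ci(10)   # about (4.795, 18.390)
```

For $x=10$ this reproduces the central limits 4.795389 and 18.390356 reported in the R output earlier, i.e. qgamma(0.025, 10) and qgamma(0.975, 11).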
Conflict between Poisson confidence interval and p-value
There are two possibilities. The first, and most obvious, is that it is a bug. I looked up the documentation for poisson.test in R and, originally, it was a one-sided test. It did not support two-sided tests. The second would be that the p-value and the interval are using different loss functions, but I would suspect that is not the case. You should submit a bug report.
Definition and delimitation of regression model
I would say that "regression model" is a kind of meta-concept, in the sense that you will not find a definition of "regression model", but of more concrete concepts such as "linear regression", "non-linear regression", "robust regression" and so on. This is the same way as in mathematics we usually do not define "number", but "natural number", "integer", "real number", "p-adic number" and so on, and if somebody wants to include the quaternions among the numbers, so be it! It doesn't really matter; what matters is which definition is used by the book/paper you are reading at the moment. Definitions are tools, and essentialism, that is, discussing what the essence of something is, what a word really means, is seldom worthwhile. So, what distinguishes a "regression model" from other kinds of statistical models? Mostly, that there is a response variable, which you want to model as influenced by (or determined by) some set of predictor variables. We are not interested in influence in the other direction, and we are not interested in relationships among the predictor variables. Mostly, we take the predictor variables as given, and treat them as constants in the model, not as random variables. The relationship mentioned above can be linear or nonlinear, specified in a parametric or nonparametric way, and so on. To delineate regression from other models, we had better look at some other terms often taken to denote something different from "regression models", like "errors in variables", where we accept the possibility of measurement errors in the predictor variables. That could well be included in my description of "regression model" above, but is often taken as an alternative model. Also, what is meant might vary among fields; see What is the difference between conditioning on regressors vs. treating them as fixed? To repeat: what matters is the definition used by the authors you are reading now, and not some metaphysics about what it "really is".
Definition and delimitation of regression model
Two nice answers were already given, but I'd like to add my two cents. In the regression case we have some random variables $Y$ and $X_1,\dots,X_k$. The variables have some unknown distribution and a complicated covariance structure. We simplify this problem by focusing solely on the conditional distribution, or more precisely on the conditional expectation of $Y$ given the other variables. We simplify it to $$ \mu = E(y|x_1,\dots,x_k) = f(x_1,\dots,x_k) $$ where $f$ is a function of the predictors that can take different forms (linear, non-linear) depending on the particular regression model, and $\mu$ is the mean of some distribution when thinking of regression models in terms of generalized linear models. In GLMs, $\mu$ can be the location of a Poisson, Binomial, Gamma etc. distribution. With $L_1$-regularized regression it is the location of a Laplace distribution; for a robust model minimizing the Huber loss, the so-called Huber density is used. In the case of quantile regression we focus on another feature of the distribution: we estimate a $\mu$ that is a quantile of the distribution rather than its expected value. So instead of looking at the full joint distribution, we focus on the conditional distribution of $Y$. This simplification is a key feature of regression models.
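The point that the loss function determines which feature of the conditional distribution $\mu$ estimates can be seen in a toy grid search (Python for illustration; the data are made up): squared loss is minimized by the mean, absolute loss by a median, which is the target of quantile regression at the 0.5 quantile.

```python
# Toy illustration: which "mu" minimizes each loss over a small sample
y = [1.0, 2.0, 3.0, 10.0]             # made-up data; mean 4.0, medians in [2, 3]
grid = [i / 100.0 for i in range(0, 1501)]

mu_sq = min(grid, key=lambda m: sum((v - m) ** 2 for v in y))   # -> the mean
mu_abs = min(grid, key=lambda m: sum(abs(v - m) for v in y))    # -> a median
```

With the outlier 10.0 in the sample, the two targets differ sharply, which is one practical reason for choosing a regression model that estimates something other than the conditional mean.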
Definition and delimitation of regression model
Some thoughts based on the literature: F. Hayashi in Chapter 1 of his classic graduate textbook "Econometrics" (2000) states that the following assumptions comprise the classical linear regression model:

Linearity
Strict exogeneity
No multicollinearity
Spherical error variance
"Fixed" regressors

Wooldridge in Chapter 2 of his classic introductory econometrics textbook "Introductory Econometrics: A Modern Approach" (2012) states that the following equation defines the simple linear regression model: $$y=\beta_0+\beta_1 x+u.$$ Greene in Chapter 2 of his popular econometrics textbook "Econometric Analysis" (2011) states

The classical linear regression model consists of a set of assumptions about how a data set will be produced by an underlying “data-generating process.”

and subsequently gives a list of assumptions similar to that of Hayashi's. Regarding the OP's interest in the GARCH model, Bollerslev's "Generalized autoregressive conditional heteroskedasticity" (1986) includes the phrase "the GARCH regression model" in the title of Section 5 and also in the first sentence of that section. So the father of the GARCH model did not mind calling GARCH a regression model.
Definition and delimitation of regression model
In the past I shared your perplexity about this point. You refer to the econometrics literature, and I too refer primarily to it. Unfortunately, most econometrics books do not help much here. However, I arrived at a clearer view that seems consistent to me.

What is the definition of a regression model? It seems to me that the correct definition of regression is as a synonym for the conditional expectation function (CEF); this definition comes from the statistical literature. We can then see that everything depends on the joint distribution of the random variables involved, $D(y,X)$. The regression is $E[y|X]=g(X)$; we can also speak of the regression in its error-form representation, $y=E[y|X]+\epsilon$. Read here for more about that: Regression and the CEF

Frequently people speak about the linear regression (model); indeed this is the king object of econometrics. But from the previous definition we can see that linear regression is an explicit specification for $g(X)$. Moreover, it is possible to show that the true meaning of the famous mean-independence assumption $E[\epsilon|X]=0$ is a restriction on $D(y,X)$: it implies linearity of the CEF. Already here we can see that assuming both linearity of the regression and mean independence of its error is redundant. Worse, most econometrics books speak about an exogeneity assumption in place of the mean-independence assumption and attribute to it a crucial but strongly different meaning, a meaning that cannot be attributed to a regression. Worse again, sometimes this assumption is given in the form $E[\epsilon X]=0$, which is always true by construction in any regression! This fact reveals the contradiction beyond doubt. Two books that you (Richard Hardy) cited in your reply, Wooldridge (2012) and Greene (2011), are among them. The core of the problem revolves around a conflation of statistical and causal concepts. Indeed, exogeneity is, or should be, a causal concept and not an assumption about a regression.

What is not a regression model? A structural equation is not a regression equation (model). The conflation of the two concepts seems to me the root of the problems in the econometrics literature. Indeed, when econometrics authors speak about exogeneity (in whatever form it is defined), they have in mind something like a structural equation, not a regression equation. These replies of mine go deeper into those points: How would econometricians answer the objections and recommendations raised by Chen and Pearl (2013)? Under which assumptions a regression can be interpreted causally? What is the relationship between minimizing prediction error versus parameter estimation error?

Moreover, another conflation is between regression intended as a theoretical quantity and the properties of its estimator, primarily the OLS estimator. For example, the "no multicollinearity" assumption is frequently intended as a necessity for uniqueness of the OLS estimator; it is an algebraic condition that deals with the data at hand and has little to do with the statistical properties of the random variables involved.

Finally, all of the above is also to say that I do not suggest using the word "regression" as a meta-concept in the sense suggested by kjetil b halvorsen; indeed, I fear that the conflation between regression and structural equation comes from that. Concepts like the conditional quantile function can be interesting, but referring to them as "quantile regression" is a (widespread) bad custom. Moreover, models like GARCH have much to do with the skedastic function, a concept distinct from the regression function. About models like ARIMA, I would say: the AR subcase surely gives regressions; ARMA is a regression that includes unobservable terms; ARIMA looks like a regression, but the use of integrated series can bring its own statistical issues.
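The claim that $E[\epsilon X]=0$ holds by construction can be seen in a toy least-squares fit (Python for illustration; the numbers are made up): the normal equations force the in-sample residuals to be orthogonal to the regressors, so this "assumption" carries no causal content by itself.

```python
# Made-up data; fit y = a + b*x by ordinary least squares
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 8.1, 9.7]
n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) \
    / sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
res = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Normal equations => residuals orthogonal to the constant and to x:
orth_const = sum(res)                            # ~ 0 up to rounding
orth_x = sum(r * xi for r, xi in zip(res, x))    # ~ 0 up to rounding
```

Both quantities vanish for any data whatsoever, which is precisely why $E[\epsilon X]=0$ cannot by itself identify a causal effect.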
Definition and delimitation of regression model
You can split the question into: (i) "What is a model?" and (ii) "What is regression?" A "model" is any given way of making a prediction, in the sense of "all models are wrong, but some are useful". "Regression" is the process of adjusting a model with the intention that its predictions become useful. Typically some class of models is chosen that is expected (due to some prior knowledge of the problem) to be capable of producing usefully accurate predictions (better than random, and better than other models of similar cost and complexity). Then certain parameters of the model are adjusted to improve its performance, by some measure of performance that is deemed appropriate for the problem the user wishes the model to solve. See: https://en.wikipedia.org/wiki/Regression_analysis and https://www.investopedia.com/terms/r/regression.asp Consequently, "what is not a regression model" would be a case where either or both of the following hold: (i) no prediction is made, (ii) no adjustment is made to improve the prediction. Note that even "descriptive" models still constrain the expected distribution of observations of the thing that they model.
R: geom_density values in y-axis [duplicate]
Or you can just use the computed ..scaled.. value that stat_density provides:

library(ggplot2)
set.seed(1)
vals1 <- rbeta(1000, 0.5, 0.1)
vals2 <- rbeta(1000, 0.25, 0.3)
gg <- ggplot(data.frame(x=c(vals1, vals2), grp=c(rep("a", 1000), rep("b", 1000))))
gg <- gg + geom_density(aes(x=x, y=..scaled.., fill=grp), alpha=1/2)
gg <- gg + theme_bw()
gg
R: geom_density values in y-axis [duplicate]
It looks like geom_density() is displaying the appropriate values. The area under the whole curve should be 1. To get an estimate of the probability that the variable falls in some range, you'd have to integrate the density over an interval on the x axis, and that probability can never be greater than 1. The density values themselves, read off the y axis, can legitimately exceed 1.
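A hand-rolled sketch of this (Python used for illustration; the bandwidth is an arbitrary choice): for data concentrated on a narrow range, a Gaussian kernel density estimate peaks well above 1 while its integral over the x axis stays at 1.

```python
import math
import random

random.seed(1)
data = [random.gauss(0.0, 0.05) for _ in range(500)]   # tightly concentrated sample
h = 0.02                                               # illustrative bandwidth

def kde(t):
    # Gaussian kernel density estimate at point t
    c = len(data) * h * math.sqrt(2.0 * math.pi)
    return sum(math.exp(-0.5 * ((t - d) / h) ** 2) for d in data) / c

peak = kde(0.0)                                  # far above 1 for this sample
xs = [-1.0 + i / 1000.0 for i in range(2001)]
area = sum(kde(t) for t in xs) / 1000.0          # Riemann sum over [-1, 1], ~ 1
```

The peak height depends only on how concentrated the data are relative to the bandwidth, not on any probability being greater than 1.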
Why is the error "estimated adjustment 'a' is NA" generated from R boot package when calculating confidence intervals using the bca method?
As you can see from your error message, boot.ci calls bca.ci. Because the boot.out object doesn't supply L, the empirical influence values for the statistic you're calculating on the data, bca.ci tries to calculate them using the empinf function, and then (as Michael says) uses them to calculate the acceleration constant:

L <- empinf(boot.out, index = index, t = t.o, ...)
a <- sum(L^3)/(6 * sum(L^2)^1.5)

But with a small number of replications, empinf sometimes fails and returns a vector of NA values. The result is that you have no values for L, a can't be calculated, and you get your error. As ocram says, increasing the number of bootstrap replications will fix this. Even doubling R to 2000 should probably do it.
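For intuition, the same acceleration constant can be sketched outside of boot, using a jackknife approximation to the empirical influence values L (a common way empinf-style calculations are done; the helper name here is made up for illustration):

```python
import numpy as np

def bca_acceleration(data, stat):
    """Jackknife approximation to the empirical influence values L,
    then the BCa acceleration a = sum(L^3) / (6 * sum(L^2)^1.5)."""
    n = len(data)
    # Leave-one-out estimates of the statistic
    theta_i = np.array([stat(np.delete(data, i)) for i in range(n)])
    L = theta_i.mean() - theta_i          # influence values
    return (L**3).sum() / (6 * ((L**2).sum())**1.5)

x = np.array([1.0, 1.0, 1.0, 10.0])       # skewed toy sample
a = bca_acceleration(x, np.mean)
print(np.isfinite(a))                     # True: with valid L, 'a' is not NA
```

When the influence values come back as NA (as empinf can return for too few bootstrap replications), the same formula produces NA, which is exactly the error in the question.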
13,897
What is the exact definition of profile likelihood?
I would suggest Sprott, D. A. (2000). Statistical Inference in Science. Springer. Chapter 4. Next, I am going to summarise the definition of the profile or maximised likelihood. Let $\theta$ be a vector parameter that can be decomposed as $\theta = (\delta,\xi)$, where $\delta$ is a vector parameter of interest and $\xi$ is a nuisance vector parameter. That is, you are interested only in some entries of the parameter $\theta$. Then the likelihood function can be written as $${\mathcal L}(\theta;y)={\mathcal L}(\delta,\xi;y)=f(y;\delta,\xi),$$ where $f$ is the sampling model. An example of this is the case where $f$ is a normal density, $y$ consists of $n$ independent observations, $\theta=(\mu,\sigma)$, and you are interested in $\sigma$ solely; then $\mu$ is a nuisance parameter. The profile likelihood of the parameter of interest is defined as $$L_p(\delta)=\sup_{\xi}{\mathcal L}(\delta,\xi;y).$$ Sometimes you are also interested in a normalised version of the profile likelihood, which is obtained by dividing this expression by the likelihood evaluated at the maximum likelihood estimator: $$R_p(\delta)=\dfrac{\sup_{\xi}{\mathcal L}(\delta,\xi;y)}{\sup_{(\delta,\xi)}{\mathcal L}(\delta,\xi;y)}.$$ You can find an example with the normal distribution here. I hope this helps.
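As a quick numerical check of the normal example (a Python sketch of my own, not from the book): for any fixed $\sigma$, the supremum over the nuisance $\mu$ is attained at the sample mean, so $L_p(\sigma) = {\mathcal L}(\bar y, \sigma; y)$.

```python
import math
import numpy as np

def loglik(mu, sigma, y):
    """Normal log-likelihood for i.i.d. observations y."""
    return sum(-0.5 * math.log(2 * math.pi * sigma**2)
               - (yi - mu)**2 / (2 * sigma**2) for yi in y)

y = np.array([1.2, 0.7, 2.1, 1.5, 0.9])
sigma = 0.8                                # fix the parameter of interest

# Profile out mu numerically over a grid ...
mu_grid = np.linspace(-2, 4, 6001)
numeric_sup = max(loglik(m, sigma, y) for m in mu_grid)
# ... and compare with plugging in the closed-form maximiser mu = ybar
analytic = loglik(y.mean(), sigma, y)
print(abs(numeric_sup - analytic) < 1e-6)  # True
```

Repeating this for a grid of $\sigma$ values traces out the whole profile likelihood curve $L_p(\sigma)$.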
13,898
Why does the Akaike Information Criterion (AIC) sometimes favor an overfitted model?
The AIC is sensitive to the sample size used to train the models. At small sample sizes, "there is a substantial probability that AIC will select models that have too many parameters, i.e. that AIC will overfit". [1] The reference goes on to suggest AICc in this scenario, which introduces an extra penalty term for the number of parameters. This answer by Artem Kaznatcheev suggests a threshold of $n/K < 40$ as the cutoff for when to use AICc, based on Burnham and Anderson. Here $n$ signifies the number of samples and $K$ the number of model parameters. Your data has 234 rows available (listed on the webpage you linked). This would put the cutoff at roughly 6 parameters, beyond which you should consider AICc. [1] https://en.m.wikipedia.org/wiki/Akaike_information_criterion#modification_for_small_sample_size
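The correction is easy to compute; as a sketch (function names are mine): $AICc = AIC + 2K(K+1)/(n-K-1)$, and for $n = 234$ the $n/K \ge 40$ rule of thumb holds only up to $K = 5$ parameters.

```python
def aic(loglik, k):
    """Akaike information criterion."""
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    """Small-sample corrected AIC (AICc): an extra penalty that blows up
    as the parameter count k approaches the sample size n."""
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

n = 234                                   # rows in the data set discussed above
# Largest parameter count for which the n/K >= 40 rule of thumb still holds
largest_ok = max(k for k in range(1, 40) if n / k >= 40)
print(largest_ok)                         # 5 -> beyond ~6 parameters, prefer AICc
```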
13,899
Why does the Akaike Information Criterion (AIC) sometimes favor an overfitted model?
Disclaimer: I didn't go through your code line-by-line. At first sight, it seems legit, so I'll assume it is. AIC is just the log-likelihood penalized by the number of parameters $k$ $$ 2k - 2\ln(\hat L) $$ $2$ is a constant. $\ln(\hat L)$ is a sum of the unnormalized likelihood function evaluations over all the data points. $2\ln(\hat L)$ can really be whatever and there are no guarantees that $2$ is the "appropriate" weight for the number of parameters so that it would enable you to pick the best model. The same applies to BIC and all the other criteria like this, they work under a set of assumptions and tend to work well in many cases, but there is no guarantee that things like $2k$ are the penalty that would work for every possible model.
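As a toy illustration of that trade-off (my own setup, not the questioner's; residuals are assumed Gaussian, so $-2\ln(\hat L)$ equals $n\ln(RSS/n)$ up to an additive constant): a quintic fit always lowers the RSS relative to a linear fit, but on near-linear data the $2k$ penalty usually still makes AIC prefer the linear model.

```python
import numpy as np

n = 30
x = np.linspace(0.0, 1.0, n)
# Deterministic "noise": a fast cosine that low-degree polynomials can't absorb
y = 2 * x + 0.1 * np.cos(20 * np.pi * x)

def aic_poly(degree):
    """AIC (up to an additive constant) for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    rss = ((y - np.polyval(coeffs, x)) ** 2).sum()
    k = degree + 2                  # polynomial coefficients + error variance
    return n * np.log(rss / n) + 2 * k

print(aic_poly(1) < aic_poly(5))    # True: the penalty outweighs the RSS gain
```

With genuinely nonlinear data, or at different sample sizes, the comparison can of course flip, which is the point of the answer: the $2k$ weighting is not guaranteed to be right for every model.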
13,900
Why does the Akaike Information Criterion (AIC) sometimes favor an overfitted model?
The problem with AIC is that it does not take into account the stochastics of the parameter vector ${\boldsymbol{\beta}}$. Recall that in multiple regression, each estimate of the regression parameters $\beta_0,\ldots,\beta_p$ follows the distribution $\hat{\beta_j} \sim T(n-p-1)$. Here $n$ is the number of data points and $p$ the size of the parameter vector (plus one, the constant $\beta_0$). Specifically, define the squared inverse of the data matrix $\bf{W} = (\bf{X}^T\bf{X})^{-1}$, with element $w_{j,j}$ being the $j$'th diagonal element of $\bf{W}$. Let further $s=SRS/(n-p-1)$, where $SRS$ is the sum of the squared residuals: $SRS=(\bf{y}-\bf{X}^T{\boldsymbol{\beta}})^T(\bf{y}-\bf{X}^T{\boldsymbol{\beta}})$. The vector $\bf{y}$ contains the values you are trying to predict. Finally, $t_j=\hat{\beta}_j/(\sqrt{s}\,\sqrt{w_{j,j}})$, which is the test statistic for parameter $\hat{\beta}_j$. Clearly, the degrees-of-freedom normalized variance $s$ is disregarded in the AIC. That is why this model complexity criterion is unsuited for model selection. I suggest that you look further in the literature for a more advanced model complexity criterion. In the end, you want to make the optimal choice between model bias and model variance. Instead of the AIC, it is recommended to use a model selection criterion which accounts for the uncertainty of the model fit. Taken from [1], the $SIC_f$ is well-defined and not even complex to compute for a linear regression model. Define $SIC_f$ as \begin{equation} SIC_f = (n-p-2) \ln(s) + \ln \mid \bf{X}^T\bf{X} \mid \end{equation} where $s$ is the estimated residual variance, as defined above, and $\mid \; \cdot \; \mid$ is the matrix determinant. So the proposal is to compare $SIC_f(Model_a)$ with $SIC_f(Model_b)$ and choose the preferred model $a$ or $b$: the model with the lowest $SIC_f$ is preferred. I recommend reading the referred article on this score. [1] A. A. Neath and J. E. Cavanaugh, Regression and time series model selection using variants of the Schwarz information criterion. Communications in Statistics, 26(3), 1997, p. 559-580.
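For concreteness, the $SIC_f$ above is straightforward to compute for a linear model. Here is a Python sketch (design matrix and toy data invented for illustration; the formula itself is the one given above):

```python
import numpy as np

def sic_f(X, y):
    """SIC_f = (n - p - 2) * ln(s) + ln|X'X|, with s = RSS / (n - p - 1).
    X must include a column of ones; p counts the non-intercept terms."""
    n, cols = X.shape
    p = cols - 1
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = ((y - X @ beta) ** 2).sum()
    s = rss / (n - p - 1)                     # residual variance estimate
    _, logdet = np.linalg.slogdet(X.T @ X)    # ln|X'X|, numerically stable
    return (n - p - 2) * np.log(s) + logdet

x = np.linspace(0.0, 1.0, 40)
y = 1 + 3 * x + 0.1 * np.sin(9 * x)           # deterministic toy data
X1 = np.column_stack([np.ones_like(x), x])           # linear model
X2 = np.column_stack([np.ones_like(x), x, x ** 2])   # quadratic model
print(sic_f(X1, y), sic_f(X2, y))             # compare and keep the smaller
```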