How to prove that Elo rating or Page ranking have a meaning for my set?
If you want to test the null hypothesis that each player is equally likely to win or lose each game, I think you want a test of symmetry of the contingency table formed by tabulating winners against losers. Set up the data so that you have two variables, 'winner' and 'loser', containing the ID of the winner and loser for each game, i.e. each 'observation' is a game. You can then construct a contingency table of winner vs loser. Your null hypothesis is that you'd expect this table to be symmetric (on average over repeated tournaments). In your case, you'll get an 8×8 table where most of the entries are zero (corresponding to players that never met), i.e. the table will be very sparse, so an 'exact' test will almost certainly be necessary rather than one relying on asymptotics. Such an exact test is available in Stata with the symmetry command. In this case, the syntax would be:

symmetry winner loser, exact

No doubt it's also implemented in other statistics packages that I'm less familiar with.
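Outside Stata, the same idea can be sketched directly: under the null, the wins between any pair of players split as Binomial(n, 1/2), so an exact per-pair test needs nothing beyond the binomial pmf. A minimal Python sketch (the function names are my own, not from any package):

```python
from collections import Counter
from math import comb

def exact_binom_p(k, n):
    """Two-sided exact binomial p-value for k successes in n trials
    under p = 0.5: sum the probabilities of all outcomes that are
    no more likely than the observed one."""
    probs = [comb(n, i) * 0.5 ** n for i in range(n + 1)]
    return min(1.0, sum(p for p in probs if p <= probs[k] + 1e-12))

def symmetry_pvalues(games):
    """games: list of (winner, loser) pairs.  For each unordered pair of
    players, test exactly whether their head-to-head wins split 50/50 --
    suitable for the very sparse tables described above."""
    counts = Counter(games)
    pairs = {tuple(sorted(p)) for p in counts}
    return {(a, b): exact_binom_p(counts.get((a, b), 0),
                                  counts.get((a, b), 0) + counts.get((b, a), 0))
            for (a, b) in pairs}
```

For instance, a player beating an opponent 10 games to 0 yields a per-pair p-value of about 0.002, while a 3–3 split yields 1.0. Combining the per-pair p-values into one overall test (e.g. Fisher's method) is a further step not shown here.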
Probably the most famous example of testing how accurate a rating system's estimation method is was the Chess ratings - Elo versus the Rest of the World competition on Kaggle, whose structure was the following: competitors train their rating systems using a training dataset of over 65,000 recent results for 8,631 top players, then use their method to predict the outcome of a further 7,809 games. The winner was Elo++. It seems to be a good test scheme for your needs, theoretically, even if 18 matches are not a good test base. You can even check the differences between results for various algorithms (here's a comparison between rankade, our ranking system, and the best-known ones, including Elo, Glicko and TrueSkill).
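The evaluation scheme itself (fit on past games, score predictions on held-out future games) is easy to sketch. Here is a hedged Python illustration in which the rating system is reduced to the win probabilities it outputs, scored by mean log-loss against a coin-flip baseline; all the numbers are made up:

```python
from math import log

def log_loss(preds, outcomes):
    """Mean negative log-likelihood of predicted win probabilities.
    A coin-flip predictor (p = 0.5 always) scores log(2) ~ 0.693;
    a useful rating system should score lower on held-out games."""
    return -sum(o * log(p) + (1 - o) * log(1 - p)
                for p, o in zip(preds, outcomes)) / len(preds)

# hypothetical held-out games: predicted P(first player wins) and outcome
preds    = [0.8, 0.7, 0.6, 0.9, 0.4]
outcomes = [1,   1,   0,   1,   0]

model_score    = log_loss(preds, outcomes)
baseline_score = log_loss([0.5] * len(outcomes), outcomes)
```

The same scoring function lets you compare several rating algorithms on the same held-out games, which is essentially what the Kaggle leaderboard did.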
Have you checked some of Mark Glickman's publications? Those seem relevant. Implicit in the standard deviation of the ratings is an expected value for each game. (This standard deviation is fixed at a specific number in basic Elo, and variable in the Glicko system.) I say expected value rather than the probability of a win because of draws. The key things to understand about whatever Elo ratings you have are the underlying distributional assumption (normal or logistic, for example) and the standard deviation assumed. The logistic version of the Elo formula gives an expected score of .653 for a rating advantage of 110 points, for example player A at 1330 against player B at 1220: $$E_A= \frac{1}{1+10^{(R_B-R_A)/400}}$$ (OK, that's a Wikipedia reference, but I've already spent too much time on this answer.) So now we have an expected value for each game based on each player's rating, and an outcome based on the game. At this point, the next thing I'd do would be to check this graphically by arranging the rating gaps from low to high and totalling the expected and actual results. So, for the first 5 games we might have total points of 2 and expected points of 1.5; for the first 10 games, total points of 8 and expected points of 8.8, etc. By graphing these two lines cumulatively (as you would for a Kolmogorov-Smirnov test) you can see whether the expected and actual cumulative values track each other well or badly. It's likely someone else can provide a more formal test.
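The expected-score formula and the cumulative expected-vs-actual comparison described above can be sketched in a few lines of Python (the helper names are my own):

```python
def elo_expected(r_a, r_b):
    """Expected score for player A under the logistic Elo formula:
    E_A = 1 / (1 + 10^((R_B - R_A)/400))."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def cumulative_tracks(games):
    """games: (rating_a, rating_b, actual_score_a) triples with scores in
    {0, 0.5, 1}, sorted by rating gap.  Returns the two cumulative lines
    (expected, actual) whose agreement can be inspected graphically,
    Kolmogorov-Smirnov style."""
    exp_tot = act_tot = 0.0
    track = []
    for r_a, r_b, score in games:
        exp_tot += elo_expected(r_a, r_b)
        act_tot += score
        track.append((exp_tot, act_tot))
    return track
```

For the 1330-vs-1220 example above, `elo_expected(1330, 1220)` rounds to 0.653, matching the formula.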
You want to test the hypothesis that the probability of a result depends on the matchup. $H_0$, then, is that every game is essentially a coin flip. A simple test for this would be calculating the proportion of times the player with more previous games played wins, and comparing that to the binomial cumulative distribution function. That should show the existence of some kind of effect. If you're interested in the quality of the Elo rating system for your game, a simple method would be to run a 10-fold cross-validation on the predictive performance of the Elo model (even though Elo actually assumes outcomes aren't i.i.d., which I'll ignore here) and compare that to a coin flip.
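The first check only needs the binomial tail probability. A small Python sketch (the 14-of-18 example below is hypothetical):

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-sided p-value for seeing
    at least k wins by the 'favoured' player under the coin-flip null."""
    return sum(comb(n, i) * p ** i * (1 - p) ** (n - i)
               for i in range(k, n + 1))

# e.g. the more experienced player won 14 of 18 hypothetical games
p_value = binom_sf(14, 18)
```

A small p-value here argues against the coin-flip null, i.e. for the existence of some matchup effect worth rating.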
Adding coefficients to obtain interaction effects - what to do with SEs?
I think this is the expression for $SE_{b_{new}}$: $$\sqrt{SE_1^2 + SE_2^2+2\,Cov(b_1,b_2)}$$ You can work with this new standard error to find your new test statistic for testing $H_0: \beta=0$.
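A quick numeric illustration of the formula, with made-up regression output:

```python
from math import sqrt

# hypothetical numbers read off a regression output
b1, b2   = 0.55, 0.30   # the two coefficients being added
se1, se2 = 0.20, 0.30   # their standard errors
cov12    = 0.01         # Cov(b1, b2) from the variance-covariance matrix

# SE of (b1 + b2) per the formula above
se_new = sqrt(se1 ** 2 + se2 ** 2 + 2 * cov12)

# test statistic for H0: beta = 0
t_stat = (b1 + b2) / se_new
```

With these (invented) numbers, `se_new` is sqrt(0.15) ≈ 0.387 and the statistic is about 2.19, so the sum would be significant at the usual 5% level.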
I assume you mean 'multivariable' regression, not 'multivariate'. 'Multivariate' refers to having multiple dependent variables. It is not considered to be acceptable statistical practice to take a continuous predictor and to chop it up into intervals. This will result in residual confounding and will make interactions misleadingly significant, as some interactions can just reflect lack of fit (here, underfitting) of some of the main effects. There is a lot of unexplained variation within the outer quintiles. Plus, it is actually impossible to precisely interpret the "quintile effects." For comparisons of interest, it is easiest to envision them as differences in predicted values. Here is an example using the R rms package.

require(rms)
f <- ols(y ~ x1 + rcs(x2,3)*treat)  # or lrm, cph, psm, Rq, Gls, Glm, ...
# This model allows nonlinearity in x2 and interaction between x2 and treat.
# x2 is modeled as two separate restricted cubic spline functions with 3
# knots or join points in common (one function for the reference treatment
# and one function for the difference in curves between the 2 treatments)
contrast(f, list(treat='B', x2=c(.2, .4)),
            list(treat='A', x2=c(.2, .4)))
# Provides a comparison of treatments at 2 values of x2
anova(f)
# Provides the 2 d.f. interaction test and a test of whether treatment
# is effective at ANY value of x2 (combined treat main effect + treat x x2
# interaction - this has 3 d.f. here)
To be more general, if you create a (row) vector $R$ for the estimate that you care about, such that your estimator is equal to $R\hat{\beta}$, then the variance of that estimator is $R\hat{V}R^\prime$, where $\hat{V}$ is the estimated variance-covariance matrix of your regression. Your estimate is distributed normal or t, depending upon the assumption that you are making (central limit theorem vs. assuming normal errors in your regression model). Alternatively, you can test multiple estimates jointly if you let $R$ be a matrix. This is known as a Wald test. The distribution in this case is $\chi^2_r$, where $r$ is the number of rows in your matrix (assuming that the rows are linearly independent).
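For a single row vector this reduces to the $SE_1^2 + SE_2^2 + 2\,Cov(b_1,b_2)$ expression in the previous answer. A small pure-Python sketch of the quadratic form $R\hat{V}R^\prime$ (the numbers are invented):

```python
def quad_form(R, V):
    """Variance of R.beta_hat: computes R V R' for a row vector R (as a
    plain list) and a variance-covariance matrix V (list of lists)."""
    n = len(R)
    return sum(R[i] * V[i][j] * R[j] for i in range(n) for j in range(n))

# hypothetical 2x2 variance-covariance matrix of (b1, b2)
V = [[0.04, 0.01],
     [0.01, 0.09]]

var_sum  = quad_form([1, 1],  V)   # Var(b1 + b2) = SE1^2 + SE2^2 + 2 Cov
var_diff = quad_form([1, -1], V)   # Var(b1 - b2) = SE1^2 + SE2^2 - 2 Cov
```

The same quadratic form with a matrix $R$ (one row per restriction) gives the covariance matrix entering the Wald statistic.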
R package for fixed-effect logistic regression
Conditional logistic regression (I assume that this is what you referred to when talking about Chamberlain's estimator) is available through clogit() in the survival package. I also found this page, which contains R code to estimate conditional logit parameters. The survey package also includes a lot of wrapper functions for GLM and survival models in the case of complex sampling, but I didn't look into it. Try also logit.mixed in the Zelig package, or directly use the lme4 package, which provides methods for mixed-effects models with a binomial link (see lmer or glmer). Did you take a look at Econometrics in R, by Grant V. Farnsworth? It seems to provide a gentle overview of applied econometrics in R (with which I am not familiar).
You can run Chamberlain's model using glmer. It is basically an RE model but with more variables:

glmer(y ~ X + Z + (1 | subject), data = data, family = binomial("probit"))

Here X are the variables that carry the fixed-effect part of the model (in the simplest case, the subject-level means of Z), Z are your exogenous variables, and subject is the variable the heterogeneity comes from. I hope this helps.
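The "X = subject-level means of Z" construction (the Chamberlain/Mundlak device) is simple to build by hand before fitting; here is a small Python sketch (the helper name is my own):

```python
from collections import defaultdict

def mundlak_means(subjects, Z):
    """Chamberlain/Mundlak device: for each observation, return the
    subject-level mean of the time-varying covariate Z, to be added as
    the extra regressor (the 'X' above) alongside Z itself in the
    random-effects model."""
    tot = defaultdict(float)
    cnt = defaultdict(int)
    for s, z in zip(subjects, Z):
        tot[s] += z
        cnt[s] += 1
    return [tot[s] / cnt[s] for s in subjects]
```

Two observations per subject with Z values (1, 3) and (2, 4) give subject means 2 and 3, repeated per observation.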
The mclogit package seems to implement the Chamberlain variant of conditional logit.
COVID in Germany, LOO-CV for time series
Overview: quick remarks

The model with three change points does give a better fit, but only a slightly better one; the model with only one point is not very bad. The difference in LOO-CV score may indicate that the model with more points is a significant/probable/likely improvement, but the effect size is only small. Even if the three-points model is a good fit, it need not reflect physical reality. The better fit should be interpreted as confirmation that the null hypothesis (SIR with one turning point) is likely not true (in the sense of 'not exactly true'; it might still be a reasonably good description). It does not confirm that the alternative model, with three points, is correct in a physical sense. The true model might in reality be a different one (e.g. a smooth transition instead of change points); the comparison only confirms that the alternative model performs better. It is hard to believe that the three-change-points model captures some underlying physical reality missing from the one-change-point model.

The fit with three change points is indeed more accurate

It is not hard to believe that a model with three change points will do better. A simple SIR model (which assumes homogeneous mixing of all people) is not an exact fit to reality. The change points help make up for that shortcoming (making the model more flexible and able to fit a wider range of curves).

But it might not capture physical reality

However, you are right to doubt whether it captures a physical reality. A SIR model is designed as a mechanistic model. When it is not accurate enough, however, it becomes effectively just an empirical model, and the underlying parameters need not represent some physical reality. (If you like, you could fit a mechanistic model which obviously has no physical reality at all.) There are many ways to get a decrease in the rate of growth without changes in the epidemiological parameters.
In spatial and networked SIR models this may be due to local saturation (e.g. see here an example). As a result, a fit with an SIR model will underestimate the $R_0$ value (because lower $R_0$ values fit deflections in the curve better). When the SIR model is made more flexible with change points, the $R_0$ might be higher initially, but the fit will indicate a decrease in the growth parameter $\beta$ which might not exist in reality.

One change point

So, are these change points fiction? I think not. The value of $\beta$ in that model does change a lot. I would not expect that this drop in growth rate is not occurring and that it is merely an artefact of the adjustment to an SIR model. However, when $N$ is lower (and $N$, I believe, is not included as one of the model parameters and seems to be fixed), a drastic drop in growth rate may occur without any change of the epidemiological parameters: $$\frac{dI}{dt} = \overbrace{\frac{S}{N}}^{\substack{\text{if } N \text{ or } S = N - I \\ \text{are over/underestimated,} \\ \text{the drop in this term is} \\ \text{underestimated}}} \; \underbrace{\beta}_{\substack{\text{in that case } \beta \text{ will be} \\ \text{underestimated, to correct} \\ \text{for the wrong } S/N \text{ term}}} \, I - \mu I$$ If the wrong $N$ is used, the model will be pushed to correct for this. The same is true when we wrongly assume that all cases are being measured (and thus underestimate the number of cases, because we did not account for underreporting). But anyway, I guess it is reasonable to say that there is a turning point/drop in $\beta$: many epidemiological curves show a rapid decrease in growth rate.
This is, I believe, not due to natural processes like saturation (growing immunity), but mostly due to the parameters changing.

Two or three points

The effect of these models is actually only very subtle. What the extra change points do is make the change from growth to decrease smoother, and this occurs only over a short period: instead of one big step you get three small steps between 8 and 22 March. It is not hard to believe that you will get a smooth decrease in $\beta$ (many mechanisms may create such a change). More difficult is the interpretation. The change points are being related to particular events. See for instance this quote from the abstract: "Focusing on COVID-19 spread in Germany, we detect change points in the effective growth rate that correlate well with the times of publicly announced interventions". Or in the text: "A third change point ... was inferred on March 24 $(CI\,[21, 26])$; this inferred date matches the timing of the third governmental intervention". But that is speculation and may be just fiction. This is especially the case since they placed priors exactly on these dates (with a standard deviation that more or less matches the size of the credible intervals; we have 'posterior distribution $\approx$ prior distribution', which means the data did not add much information about the dates). So it is not as if they fitted a three-change-point model and it turned out, coincidentally, to match the dates of particular interventions (that was my first interpretation after a quick scan of the article). They did not detect change points; rather, the model had a built-in tendency to correlate well with the particular interventions and to place the 'detected' points near the dates of the interventions.
(In addition, there is a free parameter for a reporting delay, which allows some flexibility of a couple of days between the date of change in the curves and the date of change in the interventions; so the date of the change points is not pinpointed/detected/inferred very precisely, and overall it is fuzzier.)

The leave-one-out cross-validation: is LOO-CV used correctly?

I believe that the LOO-CV is correctly applied (but the interpretation is tricky). I would have to dig into the code to know exactly, but I have little reason to doubt it. What those scores mean is that the function with three change points did not overfit and was able to better capture the deterministic part of the model (not that the model with three points is so much better than the model with one point; it is only a small improvement). It is not so strange that the function did not overfit: there are quite a few data points to even out the noise and prevent the fitted function from capturing too much noise instead of the underlying deterministic trend. Nor is it strange that the three change points better capture the deterministic model: the standard SIR model is, out of the box, not really a good fit. Instead of change points you could get similar improvements with high-order polynomial fits or splines; that the change points improve the model need not be due to an underlying mechanistic reason. You might think: hey, but what about the small differences between the three curves (red, orange, green)? Yes, indeed, the differences are only small. The change points occur only over a small time period. While the differences in the LOO-CV scores, from 819 to 796 to 787, may indicate some significance, this need not correspond to a 'large' effect, and neither does the alternative model's effect need to correspond to a realistic mechanism.
See for instance the example in the image below, where an additional $x^2$ term significantly improves a fit, but the effect is only small and the 'true' effect is an $x^3$ term rather than the $x^2$ term. For that example the log-likelihood scores are significantly different:

> lmtest::lrtest(mod1, mod2)
Likelihood ratio test

Model 1: y ~ x
Model 2: y ~ x + I(x^2)
  #Df LogLik Df  Chisq Pr(>Chisq)
1   3 15.345
2   4 19.634  1 8.5773   0.003404 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Also, the small differences might be problematic. It is likely not very significant, especially when you consider that the noise is likely correlated; because of that, some degree of overfitting might not be punished in a leave-one-out CV. Example image and code:

set.seed(1)
x <- seq(0, 1, 0.02)
ydeterministic <- x + 0.5*x^3
y <- ydeterministic + rnorm(length(x), 0, 0.2)
mod1 <- lm(y ~ x)
mod2 <- lm(y ~ x + I(x^2))
plot(x, y, main = "small but significant effect", cex.main = 1,
     pch = 21, col = 1, bg = "white", cex = 0.7, ylim = c(-0.2, 1.7))
lines(x, mod1$fitted.values, col = "red", lty = 2)
lines(x, mod2$fitted.values, col = "blue", lty = 2)
lines(x, ydeterministic, lty = 1)
lmtest::lrtest(mod1, mod2)
legend(0, 1.7, c("true model: y = x + x³", "fit 1: y = x", "fit 2: y = x + x²"),
       col = c("black", "red", "blue"), lty = c(1, 2, 2), cex = 0.6)

This example uses a linear model rather than a Bayesian setting, but it may help to see intuitively the case of a 'significant but small effect', and how a comparison in terms of log-likelihood values, rather than effect size, is tangential to that.
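Returning to the SIR point made earlier: a drop in the observed growth rate can appear without any change in $\beta$, purely because $S/N$ falls, and with a misspecified $N$ it falls at the wrong time. A minimal Euler-integration sketch in Python (all parameter values are arbitrary illustrations, not fitted to any data):

```python
def sir_growth_rates(beta=0.4, mu=0.125, N=1e6, I0=100, days=60):
    """Euler-integrate dS/dt = -beta*S*I/N, dI/dt = beta*S*I/N - mu*I
    with daily steps, and return the daily relative growth rate of I.
    beta and mu never change; any drop in the rate comes from S/N."""
    S, I = N - I0, float(I0)
    rates = []
    for _ in range(days):
        new = beta * S * I / N          # new infections this step
        rates.append(new / I - mu)      # relative growth rate of I
        S, I = S - new, I + new - mu * I
    return rates

rates_big_N   = sir_growth_rates(N=1e6)   # larger assumed population
rates_small_N = sir_growth_rates(N=1e5)   # smaller effective population
```

With identical $\beta$ and $\mu$, the smaller-$N$ run shows its growth rate falling much earlier; a fit that fixes $N$ at the wrong value must push $\beta$ (or change points in it) to absorb exactly this kind of discrepancy.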
COVID in Germany, LOO-CV for time series
Overview quick remarks The model with three points does make a better fit. The fit with three points is only slightly better. The model with only one point is not very bad. The difference in loocv sc
COVID in Germany, LOO-CV for time series Overview quick remarks The model with three points does make a better fit. The fit with three points is only slightly better. The model with only one point is not very bad. The difference in loocv score may indicate that the model with more points is a significant/probable/likely improvement, but the effect size is only small. Even if the three points model is a good fit, it may not need to be physical reality. The better fit should be interpreted as confirmation that the null hypothesis SIR, with one turn point, is likely not true (in the sense 'not exactly true', it might still be a reasonably good description). It does not confirm that the alternative model, with three points, is correct (in a physical sense). The correct model (the true model) might be in reality a different model (e.g. a smooth transition instead of change points). It only confirms that the alternative model performs better. It is hard to believe that the three change points model capture some underling physical reality missing in the one change point model. The fit with three change points is indeed more accurate It is not hard to believe that a model with three change points will do better. A simple SIR model (which assumes homogeneous mixing of all people) is not an exact fit to reality. Those change points will help to make-up for that shortcoming (making it more flexible and able to fit a wider range of different curves). But it might not capture physical reality However, you are right to doubt whether it captures a physical reality. A SIR model is designed as a mechanistic model. However, when it is not accurate enough, then it becomes effectively just an empirical model. The underlying parameters may not necessarily represent some physical reality. 
(If you like, you could fit a mechanistic model that obviously has no physical reality at all.) There are many ways in which one may have a decrease in the growth rate without changes in the epidemiological parameters. In spatial and networked SIR models this may be due to local saturation (e.g. see here an example). As a result, a fit with an SIR model will underestimate the $R_0$ value (because lower $R_0$ values tend to fit deflections in the curve better). When the SIR model is made more flexible with change points, the $R_0$ might be higher initially, but the fit will indicate a decrease in the growth parameter $\beta$ which might not exist in reality.

One change point

So, are these change points fiction? I think not. The value of $\beta$ in that model does change a lot. I would not expect that this drop in growth rate is absent in reality and is merely an artifact of some strange adjustment in fitting an SIR model. Although when $N$ is lower (which I believe is not included as one of the model parameters and seems to be fixed), a drastic drop in growth rate may occur without a change of the epidemiological parameters:

$$\frac{dI}{dt} = \frac{S}{N} \beta I - \mu I$$

If $N$ (or $S = N - I$) is over- or underestimated, then the drop in the $S/N$ term becomes underestimated, and $\beta$ will then be underestimated in order to correct for the wrong $S/N$ term. So if the wrong $N$ is used, the model will be pushed to correct for this. The same is true when we wrongly assume that all cases are being measured (and thus underestimate the number of cases, because we did not include underreporting).
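To make the $S/N$ argument concrete, here is a minimal Python sketch (not the model from the paper; all parameter values are invented for illustration) showing that with identical $\beta$ and $\mu$, a smaller effective population $N$ produces an earlier deflection and peak of the epidemic curve:

```python
# Minimal SIR sketch (Euler steps) for dI/dt = (S/N)*beta*I - mu*I.
# All parameter values are made up for illustration.

def simulate_sir(beta, mu, N, I0, days, dt=0.1):
    """Integrate dS/dt = -(S/N)*beta*I and dI/dt = (S/N)*beta*I - mu*I."""
    S, I = N - I0, I0
    infected = [I]
    for _ in range(int(days / dt)):
        new_inf = (S / N) * beta * I * dt
        rec = mu * I * dt
        S -= new_inf
        I += new_inf - rec
        infected.append(I)
    return infected

# Same beta and mu, but a smaller effective population saturates sooner:
big = simulate_sir(beta=0.4, mu=0.1, N=1_000_000, I0=100, days=60)
small = simulate_sir(beta=0.4, mu=0.1, N=50_000, I0=100, days=60)

# The small-N curve peaks earlier even though the epidemiological
# parameters are identical; a fit that fixes N too high would instead
# have to lower beta to reproduce this deflection.
peak_big = big.index(max(big))
peak_small = small.index(max(small))
```

This is only meant to show the mechanism: the deflection can come from the $S/N$ term rather than from a change in $\beta$.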
But anyway, I guess that it is reasonable to say that there is a turning point (a drop) in $\beta$; there are many epidemiological curves that show a rapid decrease in growth rate. This is, I believe, not due to natural processes like saturation (growing immunity), but instead mostly due to the parameters changing.

Two or three points

The effect of these models is actually only very subtle. What these extra change points do is make the change from growth to decrease more smooth, and this occurs only over a short period. So instead of one big step you get three small steps between 8 and 22 March. It is not hard to believe that you will get a smooth decrease in $\beta$ (many mechanisms may create such a change). More difficult is the interpretation. The change points are being related to particular events. See for instance this quote from the abstract:

"Focusing on COVID-19 spread in Germany, we detect change points in the effective growth rate that correlate well with the times of publicly announced interventions"

Or in the text:

"A third change point ... was inferred on March 24 (CI $[21, 26]$); this inferred date matches the timing of the third governmental intervention"

But that is speculation and may be just fiction. This is especially the case since they placed priors exactly on these dates (with a standard deviation that more or less matches the size of the credible intervals; we have 'posterior distribution $\approx$ prior distribution', which means that the data did not add much information regarding the dates). So it is not as if they fitted a three change point model and it turned out to coincidentally match the dates of particular interventions (this was my first interpretation after a quick scan of the article). They did not detect change points; rather, the model had a built-in tendency to correlate well with the particular interventions, and to place the 'detected' points near the dates of the interventions.
(In addition, there is a free parameter for a reporting delay, which allows some flexibility of a couple of days between the date of change in the curves and the date of change in the interventions, so the date of the change points is not pinpointed/detected/inferred very precisely and overall it is more fuzzy.)

The leave-one-out cross validation: is LOO-CV used correctly?

I believe that the LOO-CV is correctly applied (but the interpretation is tricky). I would have to dig into the code to know exactly, but I have little reason to doubt it. What those scores mean is that the function with three change points did not overfit and was better able to capture the deterministic part of the model (but not that the model with three points is so much better than the model with one point; it is only a small improvement). It is not so strange that the function did not overfit. There are quite some data points to even out the noise, preventing the fitted function from capturing too much noise instead of the underlying deterministic trend. It is not so strange that the three change points are better able to capture the deterministic model. The standard SIR model is, out of the box, not really a good fit. Instead of the change points you could get similar improvements with high-order polynomial fits or splines. That the change points improve the model need not be due to a mechanistic underlying reason. You might think: hey, but what about the small differences between the three curves (red, orange, green)? Yes, indeed the differences are only small. The change points occur only over a small time period. While the differences in the LOO-CV scores, from 819 to 796 to 787, may indicate some significance, this need not correspond to a 'large' effect, and neither does the effect in the alternative model need to correspond to some realistic mechanism.
See for instance the example in the image below, where an additional $x^2$ term is able to significantly improve a fit, but the difference in effect is only small and the 'true' effect is an $x^3$ term instead of the $x^2$ term. Yet for that example the log-likelihood scores are significantly different:

> lmtest::lrtest(mod1, mod2)
Likelihood ratio test

Model 1: y ~ x
Model 2: y ~ x + I(x^2)
  #Df LogLik Df  Chisq Pr(>Chisq)
1   3 15.345
2   4 19.634  1 8.5773   0.003404 **
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Also, the small differences might be problematic. The improvement is likely not very significant, especially when you consider that the noise is likely correlated. Because of that, some degree of overfitting might not be punished in a leave-one-out CV.

Example image and code:

set.seed(1)
x <- seq(0, 1, 0.02)
ydeterministic <- x + 0.5 * x^3
y <- ydeterministic + rnorm(length(x), 0, 0.2)

mod1 <- lm(y ~ x)
mod2 <- lm(y ~ x + I(x^2))

plot(x, y, main = "small but significant effect", cex.main = 1,
     pch = 21, col = 1, bg = "white", cex = 0.7, ylim = c(-0.2, 1.7))
lines(x, mod1$fitted.values, col = "red", lty = 2)
lines(x, mod2$fitted.values, col = "blue", lty = 2)
lines(x, ydeterministic, lty = 1)

lmtest::lrtest(mod1, mod2)

legend(0, 1.7, c("true model: y = x + x³", "fit 1: y = x", "fit 2: y = x + x²"),
       col = c("black", "red", "blue"), lty = c(1, 2, 2), cex = 0.6)

This example uses a linear model rather than a Bayesian setting, but it might help to see intuitively the case of a 'significant but small effect', and how a comparison in terms of log-likelihood values, instead of effect size, is tangential to that.
What is the proper unit for F1? Is it a percentage?
Precision and Recall are two measures that can be interpreted as percentages. Their arithmetic mean would be a percentage as well. The F1 score is actually the harmonic mean of the two; analogously, it's still a percentage. From a different perspective, you can think of the unit as $U$ and substitute it into the definition: $$F_1=2\frac{U \cdot U}{U+U}\propto U$$ i.e. $U+U$ has unit $U$, $U \cdot U$ has unit $U^2$, and $U^2/U$ has unit $U$.
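As a quick illustration of the harmonic-mean point (a hypothetical precision/recall pair, not tied to any particular classifier): the harmonic mean always lies between its inputs, so F1 stays on the same 0..1 scale and can be read as a percentage.

```python
# F1 as the harmonic mean of precision and recall.

def f1_score(precision, recall):
    """Harmonic mean of precision and recall (0 if both are 0)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

p, r = 0.8, 0.5
f1 = f1_score(p, r)   # lies between 0.5 and 0.8, so it reads as ~61.5%
```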
How do backbone and head architecture work in Mask R-CNN?
The backbone refers to the network which takes as input the image and extracts the feature map upon which the rest of the network is based (the output of the backbone is the first block in your figure). "head" refers to everything after the RoI pooling -- in other words what you've labeled as FCN.
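The division of labor can be sketched schematically. This is a toy outline with made-up shapes and no real network layers; it is only meant to show where backbone, RoI pooling, and head sit relative to each other:

```python
# Schematic (not a real Mask R-CNN); shapes are illustrative assumptions.
import numpy as np

def backbone(image):
    """Stand-in for e.g. a ResNet: returns a zero stride-16 feature map."""
    h, w, _ = image.shape
    return np.zeros((h // 16, w // 16, 256))

def roi_pool(features, roi, output_size=7):
    """Stand-in: would crop/resize one region; returns fixed-size zeros."""
    return np.zeros((output_size, output_size, features.shape[-1]))

def head(roi_features, num_classes=80):
    """Everything after RoI pooling: class scores, box deltas, mask."""
    class_scores = np.zeros(num_classes)
    box_deltas = np.zeros(4 * num_classes)
    mask = np.zeros((28, 28, num_classes))   # the FCN mask branch
    return class_scores, box_deltas, mask

image = np.zeros((512, 512, 3))
features = backbone(image)                   # computed once, shared by all RoIs
scores, boxes, mask = head(roi_pool(features, roi=(0, 0, 100, 100)))
```

The key structural point is that the backbone runs once per image, while RoI pooling and the head run once per region proposal.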
When to stop refining a model?
Unfortunately, this question does not have a good answer. You can choose the best model based on the fact that it minimizes absolute error, or squared error, or maximizes the likelihood, or by using some criterion that penalizes the likelihood (e.g. AIC, BIC), to mention just a few of the most common choices. The problem is that none of those criteria will let you choose the objectively best model, only the best among those you compared. Another problem is that while optimizing you can always end up in some local maximum/minimum. Yet another problem is that your choice of criterion for model selection is subjective. In many cases you consciously, or semi-consciously, make a decision on what you are interested in and choose the criterion based on this. For example, using BIC rather than AIC leads to more parsimonious models, with fewer parameters. Usually, for modeling you are interested in more parsimonious models that lead to some general conclusions about the universe, while for prediction it doesn't have to be so, and sometimes a more complicated model can have better predictive power (but it doesn't have to, and often it doesn't). In yet other cases, more complicated models are preferred for practical reasons; for example, when estimating a Bayesian model with MCMC, a model with hierarchical hyperpriors can behave better in simulation than a simpler one. On the other hand, generally we are afraid of overfitting, and the simpler model has a lower risk of overfitting, so it is a safer choice. A nice example of this is automatic stepwise model selection, which is generally not recommended because it easily leads to overfitted and biased estimates. There is also a philosophical argument, Occam's razor, that the simplest model is the preferred one.
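To illustrate how the choice of criterion matters, here is a small sketch (synthetic data, Gaussian likelihood assumption) comparing AIC and BIC across polynomial fits. With $n$ large enough that $\ln n > 2$, BIC's heavier penalty $k \ln n$ never selects a larger model than AIC does:

```python
# AIC = 2k - 2*lnL, BIC = k*ln(n) - 2*lnL, on polynomial fits to synthetic data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = np.linspace(0, 1, n)
y = 1.0 + 2.0 * x + rng.normal(0, 0.5, n)   # the truth is linear

def gaussian_ll(resid):
    """Maximized Gaussian log-likelihood with sigma^2 = RSS/n plugged in."""
    s2 = np.mean(resid**2)
    return -n / 2 * (np.log(2 * np.pi * s2) + 1)

scores = {}
for degree in (1, 2, 3):
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    k = degree + 2                          # coefficients + noise variance
    ll = gaussian_ll(resid)
    scores[degree] = (2 * k - 2 * ll, k * np.log(n) - 2 * ll)   # (AIC, BIC)

best_aic = min(scores, key=lambda d: scores[d][0])
best_bic = min(scores, key=lambda d: scores[d][1])
```

Since the per-parameter penalty is the only difference, the BIC-chosen degree is always less than or equal to the AIC-chosen one; which criterion is "right" is exactly the subjective choice discussed above.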
Notice also that we are discussing comparing different models here, while in real-life situations it can also happen that using different statistical tools leads to different results, so there is an additional layer of choosing the method! All this leads to the sad, but entertaining, fact that we can never be sure. We start with uncertainty, use methods to deal with it, and we end up with uncertainty. This may be paradoxical, but recall that we use statistics because we believe that the world is uncertain and probabilistic (otherwise we would have chosen a career as prophets), so how could we possibly end up with different conclusions? There is no objective stopping rule; there are multiple possible models, and all of them are wrong (sorry for the cliché!) because they try to simplify a complicated (constantly changing and probabilistic) reality. We find some of them more useful than others for our purposes, and sometimes we do find different models useful for different purposes. You can go to the very bottom and notice that in many cases we make models of unknown $\theta$'s that in most cases can never be known, or may not even exist (does a population have any $\mu$ for age?). Most models do not even try to describe reality, but rather provide abstractions and generalizations, so they cannot be "right" or "correct". You can go even deeper and find out that there is no such thing as "probability" in reality; it is just some approximation of the uncertainty around us, and there are also alternative ways of approximating it, e.g. fuzzy logic (see Kosko, 1993 for discussion). Even the very basic tools and theorems that our methods are grounded on are approximations and are not the only ones possible. We simply cannot be certain in such a setup. The stopping rule that you are looking for is always problem-specific and subjective, i.e. based on so-called professional judgment.
By the way, there are lots of research examples showing that professionals are often no better, and sometimes even worse, in their judgment than laypeople (e.g. reviewed in papers and books by Daniel Kahneman), while being more prone to overconfidence (which is actually an argument for why we should not try to be "sure" about our models).

Kosko, B. (1993). Fuzzy thinking: the new science of fuzzy logic. New York: Hyperion.
When to stop refining a model?
There is a whole field called nonparametric statistics that avoids the use of strong models. However, your concern about fitting models, per se, is valid. Unfortunately there is no mechanical procedure for fitting models that would be universally accepted as "optimal". For example, if you want to define the model that maximizes the likelihood of your data, then you will be led to the empirical distribution function (ECDF). However, we usually have some background assumptions and constraints, such as continuity with finite first and second moments. For cases like these, one approach is to choose a measure like Shannon differential entropy and maximize it over the space of continuous distributions that satisfy your boundary constraints.

What I'd like to point out is that if you don't just want to default to the ECDF, then you'll need to add assumptions, beyond the data, to get there, and that requires subject matter expertise and, yes, the dreaded... professional judgement.

So, is there a guaranteed stopping point to modeling? The answer is no. Is there a good enough place to stop? Generally, yes, but that point will depend on more than just the data and some statistical desiderata: you're usually going to take into account the risks of different errors, the technical limitations to implementing the models, the robustness of their estimates, etc. As @Luca pointed out, you can always average over a class of models, but, as you rightly pointed out, that will just push the question up to the next level of hyperparameters. Unfortunately, we seem to live within an infinitely layered onion... in both directions!
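The empirical distribution function mentioned above is simple enough to write down directly. This plain-Python sketch puts mass $1/n$ on each observation, which is the nonparametric maximum-likelihood answer:

```python
# Empirical CDF: F(t) = (number of observations <= t) / n.

def ecdf(sample):
    xs = sorted(sample)
    n = len(xs)
    def F(t):
        # fraction of observations less than or equal to t
        return sum(1 for x in xs if x <= t) / n
    return F

F = ecdf([2.0, 3.5, 3.5, 7.0])
# F steps by 1/4 at 2.0 and 7.0, and by 2/4 at the tied value 3.5.
```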
Choosing the range and grid density for regularization parameter in LASSO
This methodology is described in the glmnet paper, Regularization Paths for Generalized Linear Models via Coordinate Descent. Although the methodology there is for the general case of both $L^1$ and $L^2$ regularization, it applies to the LASSO (only $L^1$) as well. The solution for the maximal $\lambda$ is given in section 2.5:

"When $\tilde\beta = 0$, we see from (5) that $\tilde\beta_j$ will stay zero if $\frac{1}{N} | \langle x_j , y \rangle | < \lambda \alpha$. Hence $N \alpha \lambda_{max} = \max_l | \langle x_l , y \rangle |$"

That is, the update rule for the coefficients forces all parameter estimates to zero for $\lambda > \lambda_{max}$ as determined above. The determination of $\lambda_{min}$ and the number of grid points seems less principled. In glmnet they set $\lambda_{min} = 0.001 \, \lambda_{max}$, and then choose a grid of $100$ equally spaced points on the logarithmic scale. This works well in practice; in my extensive use of glmnet I have never found this grid to be too coarse.

In the LASSO-only ($L^1$) case things work better, as the LARS method provides a precise calculation of when the various predictors enter the model. A true LARS does not do a grid search over $\lambda$, instead producing an exact expression for the solution paths of the coefficients. Here is a detailed look at the exact calculation of the coefficient paths in the two-predictor case.

The case of non-linear models (e.g. logistic, Poisson) is more difficult. At a high level, first a quadratic approximation to the loss function is obtained at the initial parameters $\beta = 0$, and then the calculation above is used to determine $\lambda_{max}$. A precise calculation of the parameter paths is not possible in these cases, even when only $L^1$ regularization is used, so a grid search is the only option. Sample weights also complicate the situation; the inner products must be replaced in appropriate places with weighted inner products.
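The $\lambda_{max}$ formula and the glmnet-style grid are easy to reproduce. Here is a sketch in Python/NumPy with synthetic standardized data (glmnet itself is an R package; this only mirrors the arithmetic for the Gaussian case):

```python
# lambda_max = max_j |<x_j, y>| / (N * alpha), then a 100-point
# log-spaced grid down to 0.001 * lambda_max.
import numpy as np

rng = np.random.default_rng(1)
N, p = 200, 10
X = rng.normal(size=(N, p))
X = (X - X.mean(axis=0)) / X.std(axis=0)   # standardize columns
y = rng.normal(size=N)
y = y - y.mean()
alpha = 1.0                                # pure L1, i.e. the LASSO

lam_max = np.max(np.abs(X.T @ y)) / (N * alpha)
grid = np.logspace(np.log10(lam_max), np.log10(0.001 * lam_max), 100)

# At lam_max every LASSO coefficient is exactly zero, since the
# soft-threshold condition |<x_j, y>| / N <= alpha * lambda holds for all j.
```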
Do alternatives to Elo exist for non-team/individual events?
Yes, there are many versions of multi-participant rating systems, many of which are modified versions of the base Elo system (one of Elo's great strengths is that it can be readily modified). One very interesting multi-participant ranking system is Microsoft's TrueSkill ranking system, based on Bayesian inference of player skill. The rating is used for matchmaking and ranking in many of their online offerings. Quite a lot of other research has been done on the system. (Full disclosure: no professional relationship with Microsoft.) More theoretical underpinnings and consequences of Bayesian approximations as applied to rating systems can be found here, here, and here.
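For multi-competitor events specifically, one common ad-hoc extension of plain Elo (an illustration, not how TrueSkill works) treats a finishing order as a set of pairwise games, with each finisher "beating" everyone placed behind them:

```python
# Elo generalized to a finishing order via implied pairwise results.

def expected(r_a, r_b):
    """Standard Elo expected score of A against B."""
    return 1 / (1 + 10 ** ((r_b - r_a) / 400))

def update_from_finish_order(ratings, order, k=16):
    """ratings: name -> rating; order: names listed from 1st to last place."""
    new = dict(ratings)
    for i, a in enumerate(order):
        for b in order[i + 1:]:
            e = expected(ratings[a], ratings[b])
            new[a] += k * (1 - e)          # a beat b
            new[b] -= k * (1 - e)          # b lost to a (zero-sum update)
    return new

ratings = {"p1": 1500, "p2": 1500, "p3": 1500}
after = update_from_finish_order(ratings, ["p2", "p3", "p1"])
# The winner gains rating, the last-place finisher loses it.
```

With equal starting ratings, each implied game moves 8 points at k=16, so the winner ends at 1516, the middle finisher stays at 1500, and the loser drops to 1484. Smaller k values are often used here, since one event generates many implied games.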
Do alternatives to Elo exist for non-team/individual events?
It seems that rankade, our ranking system for sports, games, and more, fits your needs. It's free to use and designed to manage rankings for small or large groups of players. It can manage any kind of match: one-on-one, faction vs. faction (two teams, which may be asymmetrical), multiplayer, multi-faction, cooperative games, single-player games, and so on. Here's a comparison between the best-known ranking systems, including Elo, Glicko and TrueSkill.
Do alternatives to Elo exist for non-team/individual events?
TravisVox, not sure if you are wanting to produce rankings of Horses, or Handicappers/Horseplayers. I'm interested in both. You probably already know all this, but: BrisnetDOTcom produces statistics which rate the "Class" of the horses that each horse in today's race has raced against in the past. They also produce a "Power Ranking" comparing only the horses in the race with each other. They also produce a rating which measures the highest level of competition against which a horse has performed well. After all, any horse can enter a high level stakes race if the owner has more money than brains...but the benchmark is how well a horse can perform at higher levels of competition. Regarding horseplayers, I am in the early stages of collecting stats on how to rank horseplayers who compete against each other in Tournaments. Would love to collaborate, or touch base.
Why are the basis functions for natural cubic splines expressed as they are? (ESL)
First it is not the basis but a basis: We want to build a basis for $K$ knots of natural cubic splines. According to the constraints, "a natural cubic spline with $K$ knots is represented by $K$ basis functions". A basis is described with the $K$ elements $N_1, \ldots, N_K$. Note that "$d_K$" is never used to define any of those elements. [This paragraph is explained in detail in this answer: https://stats.stackexchange.com/q/233286 ] I worked through the exercise showing that $N_1, \ldots, N_K$ is a basis for $K$ knots of natural cubic splines (this is Ex. 5.4 of the book). The knots $(\xi_k)$ are fixed. With the truncated power series representation for cubic splines with $K$ interior knots, we have this linear combination of the basis: $$f(x) = \sum_{j=0}^3 \beta_j x^j + \sum_{k=1}^K \theta_k (x - \xi_k)_{+}^{3}.$$ For now, there are $K+4$ degrees of freedom, and we will add constraints to reduce them (we already know the final basis needs $K$ elements). Part I: Conditions on the coefficients We add the constraint "the function is linear beyond the boundary knots". We want to show the following four equations: $\beta_2 = 0$, $\beta_3 = 0$, $\sum_{k=1}^K \theta_k = 0$ and $\sum_{k=1}^K \theta_k \xi_k = 0$. Proof: For $x < \xi_1$, $$f(x) = \sum_{j=0}^3 \beta_j x^j$$ so $$f''(x) = 2 \beta_2 + 6 \beta_3 x.$$ The equation $f''(x)=0$ leads to $2 \beta_2 + 6 \beta_3 x = 0$ for all $x < \xi_1$. So necessarily, $\beta_2 = 0$ and $\beta_3 = 0$. For $x \geq \xi_K$, we replace $\beta_2$ and $\beta_3$ by $0$ and we obtain: $$f(x) = \sum_{j=0}^1 \beta_j x^j + \sum_{k=1}^K \theta_k (x- \xi_k)^3$$ so $$f''(x) = 6 \sum_{k=1}^K \theta_k (x-\xi_k).$$ The equation $f''(x)=0$ leads to $\left( \sum_{k=1}^K \theta_k \right) x - \sum_{k=1}^K \theta_k \xi_k = 0$ for all $x \geq \xi_K$. So necessarily, $\sum_{k=1}^K \theta_k = 0$ and $\sum_{k=1}^K \theta_k \xi_k = 0$. 
Part II: Relation between coefficients We get a relation between $\theta_{K-1}$ and $\left( \theta_{1}, \ldots, \theta_{K-2} \right)$. Using the equations $\sum_{k=1}^K \theta_k = 0$ and $\sum_{k=1}^K \theta_k \xi_k = 0$ from Part I, we write: $$0 = \left( \sum_{k=1}^K \theta_k \right) \xi_K - \sum_{k=1}^K \theta_k \xi_k = \sum_{k=1}^K \theta_k \left( \xi_K - \xi_k \right) = \sum_{k=1}^{K-1} \theta_k \left( \xi_K - \xi_k \right).$$ We can isolate $\theta_{K-1}$ to get: $$\theta_{K-1} = - \sum_{k=1}^{K-2} \theta_k \frac{\xi_K - \xi_k}{\xi_K - \xi_{K-1}}.$$ Part III: Basis description We want to obtain the basis as described in the book. We first use $\beta_2=0$, $\beta_3=0$, $\theta_K = -\sum_{k=1}^{K-1} \theta_k$ from Part I and substitute into $f$: \begin{align*} f(x) &= \beta_0 + \beta_1 x + \sum_{k=1}^{K-1} \theta_k (x - \xi_k)_{+}^{3} - (x - \xi_K)_{+}^{3} \sum_{k=1}^{K-1} \theta_k \\ &= \beta_0 + \beta_1 x + \sum_{k=1}^{K-1} \theta_k \left( (x - \xi_k)_{+}^{3} - (x - \xi_K)_{+}^{3} \right). \end{align*} We have $(\xi_K - \xi_k) d_k(x) = (x - \xi_k)_{+}^{3} - (x - \xi_K)_{+}^{3}$, so: $$f(x) = \beta_0 + \beta_1 x + \sum_{k=1}^{K-1} \theta_k (\xi_K - \xi_k) d_k(x).$$ We have removed three degrees of freedom ($\theta_K$, $\beta_2$ and $\beta_3$). We will proceed to remove $\theta_{K-1}$. We want to use the relation obtained in Part II, so we write: $$f(x) = \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) d_k(x) + \theta_{K-1} (\xi_K - \xi_{K-1}) d_{K-1}(x).$$ Substituting the relation from Part II: \begin{align*} f(x) &= \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) d_k(x) - \sum_{k=1}^{K-2} \theta_k \frac{\xi_K - \xi_k}{\xi_K - \xi_{K-1}} (\xi_K - \xi_{K-1}) d_{K-1}(x) \\ &= \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) d_k(x) - \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) d_{K-1}(x) \\ &= \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) (d_k(x) - d_{K-1}(x)). 
\end{align*} By definition of $N_{k+2}(x)$, we deduce: $$f(x) = \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta_k (\xi_K - \xi_k) N_{k+2}(x).$$ For each $k$, $\xi_K - \xi_k$ does not depend on $x$, so we can let $\theta'_k := \theta_k (\xi_K - \xi_k)$ and rewrite: $$f(x) = \beta_0 + \beta_1 x + \sum_{k=1}^{K-2} \theta'_k N_{k+2}(x).$$ We let $\theta'_1 := \beta_0$ and $\theta'_2 := \beta_1$ to get: $$f(x) = \sum_{k=1}^{K} \theta'_k N_{k}(x).$$ The family $(N_k)_k$ has $K$ elements and spans the desired space of dimension $K$. Furthermore, each element satisfies the boundary conditions (a small exercise in taking derivatives). Conclusion: $(N_k)_k$ is a basis for $K$ knots of natural cubic splines.
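As a quick numerical sanity check on the boundary conditions, here is a small Python sketch (the knot locations are made up) that builds $d_k$ and $N_k$ from the formulas above and verifies that every basis element is linear beyond the boundary knots, i.e. its second finite difference vanishes there:

```python
def pos_cube(u):
    """Truncated cubic (u)_+^3."""
    return u ** 3 if u > 0 else 0.0

xi = [1.0, 2.0, 3.5, 5.0, 6.0]   # K = 5 knots (hypothetical values)
K = len(xi)

def d(k, x):
    """d_k(x) = ((x - xi_k)_+^3 - (x - xi_K)_+^3) / (xi_K - xi_k), for k = 1..K-1."""
    return (pos_cube(x - xi[k - 1]) - pos_cube(x - xi[K - 1])) / (xi[K - 1] - xi[k - 1])

def N(j, x):
    """Natural-spline basis: N_1 = 1, N_2 = x, N_{k+2} = d_k - d_{K-1}."""
    if j == 1:
        return 1.0
    if j == 2:
        return x
    return d(j - 2, x) - d(K - 1, x)

# Second finite differences should vanish beyond both boundary knots,
# because each N_j is linear there.
h = 0.1
for j in range(1, K + 1):
    for x0 in (xi[0] - 2.0, xi[-1] + 1.0, xi[-1] + 3.0):
        second_diff = N(j, x0 - h) - 2.0 * N(j, x0) + N(j, x0 + h)
        assert abs(second_diff) < 1e-8
```

Below $\xi_1$ all truncated cubes are zero, so linearity is trivial; above $\xi_K$ the quadratic terms of $d_k$ and $d_{K-1}$ cancel, which is exactly what the check confirms.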
Confidence intervals when the sample size is very large
This problem has come up in some of my research as well (as an epidemic modeler, I have the luxury of making my own data sets, and with large enough computers, they can be essentially arbitrarily sized). A few thoughts: In terms of reporting, I think you can report more precise confidence intervals, though the utility of this is legitimately a little questionable. But it's not wrong, and with data sets of this size, I don't think there's much call to both demand that confidence intervals be reported and then complain that we'd really all like them rounded to two digits, etc. In terms of avoiding overconfidence, I think the key is to remember that precision and accuracy are different things, and to avoid trying to conflate the two. It is very tempting, when you have a large sample, to get sucked into how very precise the estimated effect is and not consider that it might also be wrong. That I think is the key - a biased data set will have that bias at N = 10, or 100, or 1,000 or 100,000. The whole purpose of large data sets is to provide precise estimates, so I don't think you need to shy away from that precision. But you do have to remember that you can't make bad data better simply by collecting larger volumes of bad data.
This problem has come up in my own manuscripts. 1. Reporting Options: If you have just one or a few CIs to report, then reporting them precisely (e.g., "95% CI: .65878 - .65881") is not overly verbose, and it highlights the precision of the CI. However, if you have numerous CIs, then a blanket statement might be more helpful to the reader. For example, I'll usually report something to the effect of "with this sample size, the 95% confidence margin of error for each proportion was less than +/- .010." I usually report something like this in the Method, in the caption of a Table or Figure, or in both. 2. Avoiding "over-confidence" even with large sample size: With a sample of 100,000, the central limit theorem will keep you safe when reporting CIs for proportions. So, in the situation you described, you should be okay, unless there are other assumption violations I'm not aware of (e.g., violated i.i.d.).
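The blanket statement above takes one line to compute; here is a minimal sketch using the normal approximation (the proportion and sample size are illustrative, not taken from any real study):

```python
from math import sqrt
from statistics import NormalDist

# Normal-approximation 95% margin of error for a proportion.
n = 100_000            # sample size (illustrative)
p = 0.659              # observed proportion (illustrative)
z = NormalDist().inv_cdf(0.975)          # two-sided 95% critical value, ~1.96
margin = z * sqrt(p * (1 - p) / n)
ci = (p - margin, p + margin)
print(f"95% CI: {ci[0]:.5f} - {ci[1]:.5f}  (margin of error {margin:.5f})")
```

With n = 100,000 the margin of error is about 0.003, comfortably inside the "+/- .010" blanket statement.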
Don't report confidence intervals. Instead report the exact sample size and the proportions. The reader will be able to calculate his own CIs in any way he wishes.
Consider the possibility that the 100 different hospitals' proportions do not converge to the same mean value. Did you test for between-group variance? If there is a measurable difference between hospitals, then the assumption that the samples are generated from a common normal distribution is not supported and you should not pool them. If your data really do come from a common normally distributed population, however, then useful "statements about uncertainty" will come not from the data themselves but from reflection on why your statistics should (or should not) generalize: some inherent bias in collection, lack of stationarity, and so on, which you should point out.
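One standard way to check for between-hospital differences in proportions is a chi-square test of homogeneity. A stdlib-only sketch with invented counts (5.991 is the 95th percentile of the chi-square distribution with 2 degrees of freedom, a tabulated constant):

```python
# Chi-square test of homogeneity of proportions across hospitals
# (the counts here are invented for illustration).
successes = [300, 320, 450]
totals    = [1000, 1000, 1000]

pooled = sum(successes) / sum(totals)    # pooled proportion under H0
chi2 = 0.0
for s, n in zip(successes, totals):
    exp_s = n * pooled                   # expected successes under H0
    exp_f = n * (1 - pooled)             # expected failures under H0
    chi2 += (s - exp_s) ** 2 / exp_s + ((n - s) - exp_f) ** 2 / exp_f

df = len(totals) - 1
reject = chi2 > 5.991                    # chi2_{0.95, df=2}
```

If `reject` is true, the hospitals' proportions differ measurably and pooling them into one CI would be misleading.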
Additive Error or Multiplicative Error?
Which model is appropriate depends on how variation around the mean comes into the observations. It may well come in multiplicatively or additively ... or in some other way. There can even be several sources of this variation, some of which enter multiplicatively, some additively, and some in ways that can't really be characterized as either. Sometimes there's clear theory to establish which is suitable. Sometimes pondering the main sources of variation about the mean will reveal an appropriate choice. Frequently people have no clear idea which to use, or several sources of variation of different kinds may be necessary to adequately describe the process. With the log-linear model, where linear regression is used: $\log(P_t)=\log(P_0)+\alpha\log(V_t)+\epsilon$ the OLS regression model assumes constant log-scale variance, and if that's the case, then the original data will show an increasing spread about the mean as the mean increases. On the other hand, this sort of model: $P_t=P_0(V_t)^\alpha+\epsilon$ is generally fitted by nonlinear least squares, and again, if constant variance (the default for NLS) is fitted, then the spread about the mean should be constant. [You may have the visual impression that the spread is decreasing with increasing mean in the last image; that's actually an illusion caused by the increasing slope - we tend to judge the spread orthogonal to the curve rather than vertically, so we get a distorted impression.] If you have nearly constant spread on either the original or the log scale, that might suggest which of the two models to fit, not because it proves the error is additive or multiplicative, but because it leads to an appropriate description of the spread as well as the mean. Of course one might also have additive error with non-constant variance. 
However, there are still other models in which such functional relationships can be fitted with different relationships between mean and variance (such as a Poisson or quasi-Poisson GLM, where the spread is proportional to the square root of the mean).
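To illustrate the first case, here is a sketch (all parameter values invented) that simulates the multiplicative-error model and recovers the parameters by ordinary least squares on the log scale, where the error enters additively with constant variance:

```python
import math
import random

# Simulate P = P0 * V^alpha * exp(noise): multiplicative lognormal error,
# so OLS on the logs is the natural fit. All values below are made up.
random.seed(42)
P0_true, alpha_true, sigma = 2.0, 0.7, 0.1
log_v = [random.uniform(0.0, 4.0) for _ in range(500)]
log_p = [math.log(P0_true) + alpha_true * lv + random.gauss(0.0, sigma)
         for lv in log_v]

# Closed-form simple linear regression of log(P) on log(V).
mx = sum(log_v) / len(log_v)
my = sum(log_p) / len(log_p)
sxy = sum((x - mx) * (y - my) for x, y in zip(log_v, log_p))
sxx = sum((x - mx) ** 2 for x in log_v)
alpha_hat = sxy / sxx                    # slope estimates alpha
P0_hat = math.exp(my - alpha_hat * mx)   # exp(intercept) estimates P0
```

If instead the error were additive on the original scale, nonlinear least squares on $P_t=P_0(V_t)^\alpha+\epsilon$ would be the matching estimator; fitting the log-linear model to additively perturbed data misattributes the spread.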
Same Mean, Different Variance
Although an exact probability cannot be computed (except in special circumstances with $n \le 2$), it can be numerically calculated quickly to high accuracy. Despite this limitation, it can be proven rigorously that the runner with the greatest standard deviation has the greatest chance of winning. The figure depicts the situation and shows why this result is intuitively obvious: Probability densities for the times of five runners are shown. All are continuous and symmetric about a common mean $\mu$. (Scaled Beta densities were used to ensure all times are positive.) One density, drawn in darker blue, has a much greater spread. The visible portion in its left tail represents times that no other runner can usually match. Because that left tail, with its relatively large area, represents appreciable probability, the runner with this density has the greatest chance of winning the race. (They also have the greatest chance of coming in last!) These results are proven for more than just Normal distributions: the methods presented here apply equally well to distributions that are symmetric and continuous. (This will be of interest to anyone who objects to using Normal distributions to model running times.) When these assumptions are violated it is possible that the runner with greatest standard deviation might not have the greatest chance of winning (I leave construction of counterexamples to interested readers), but we can still prove under milder assumptions that the runner with greatest SD will have the best chance of winning provided that SD is sufficiently large. The figure also suggests that the same results could be obtained by considering one-sided analogs of standard deviation (the so-called "semivariance"), which measure the dispersion of a distribution to one side only. A runner with great dispersion to the left (towards better times) ought to have a greater chance of winning, regardless of what happens in the rest of the distribution. 
These considerations help us appreciate how the property of being the best (in a group) differs from other properties such as averages. Let $X_1, \ldots, X_n$ be random variables representing the runners' times. The question assumes they are independent and Normally distributed with common mean $\mu$. (Although this is literally an impossible model, because it posits positive probabilities for negative times, it can still be a reasonable approximation to reality provided the standard deviations are substantially smaller than $\mu$.) In order to carry out the following argument, retain the supposition of independence but otherwise assume the distributions of the $X_i$ are given by $F_i$ and that these distributional laws can be anything. For convenience, also assume the distribution $F_n$ is continuous with density $f_n$. Later, as needed, we may apply additional assumptions provided they include the case of Normal distributions. For any $y$ and infinitesimal $dy$, the chance that the last runner has a time in the interval $(y-dy, y]$ and is the fastest runner is obtained by multiplying all relevant probabilities (because all times are independent): $$\Pr(X_n \in (y-dy, y], X_1 \gt y, \ldots, X_{n-1} \gt y) = f_n(y)dy(1-F_{1}(y))\cdots(1-F_{n-1}(y)).$$ Integrating over all these mutually exclusive possibilities yields $$\Pr(X_n \le \min(X_1, X_2, \ldots, X_{n-1})) = \int_{\mathbb R} f_n(y)(1-F_1(y))\cdots(1-F_{n-1}(y)) dy.$$ For Normal distributions, this integral cannot be evaluated in closed form when $n\gt 2$: it needs numerical evaluation. This figure plots the integrand for each of five runners having standard deviations in the ratio 1:2:3:4:5. The larger the SD, the more the function is shifted to the left--and the greater its area becomes. The areas are approximately 8:14:21:26:31%. In particular, the runner with the largest SD has a 31% chance of winning. 
Although a closed form cannot be found, we can still draw solid conclusions and prove that the runner with the largest SD is most likely to win. We need to study what happens as the standard deviation of one of the distributions, say $F_n$, changes. When the random variable $X_n$ is rescaled by $\sigma \gt 0$ around its mean, its SD is multiplied by $\sigma$ and $f_n(y)dy$ will change to $f_n(y/\sigma)dy/\sigma$. Making the change of variable $y=x\sigma$ in the integral gives an expression for the chance of runner $n$ winning, as a function of $\sigma$: $$\phi(\sigma) = \int_{\mathbb R} f_n(y)(1-F_1(y\sigma))\cdots(1-F_{n-1}(y\sigma)) dy.$$ Suppose now that the medians of all $n$ distributions are equal and that all the distributions are symmetric and continuous, with densities $f_i$. (This certainly is the case under the conditions of the question, because a Normal median is its mean.) By a simple (locational) change of variable we may assume this common median is $0$; the symmetry means $f_n(y) = f_n(-y)$ and $1 - F_j(-y) = F_j(y)$ for all $y$. These relationships enable us to combine the integral over $(-\infty, 0]$ with the integral over $(0,\infty)$ to give $$\phi(\sigma) = \int_0^{\infty} f_n(y)\left(\prod_{j=1}^{n-1}\left(1-F_j(y\sigma)\right)+\prod_{j=1}^{n-1}F_j(y\sigma)\right) dy.$$ The function $\phi$ is differentiable. Its derivative, obtained by differentiating the integrand, is a sum of integrals where each term is of the form $$y f_n(y) f_i(y\sigma)\left(\prod_{j\ne i}^{n-1}F_j(y\sigma) - \prod_{j\ne i}^{n-1}(1-F_j(y\sigma))\right)$$ for $i=1, 2, \ldots, n-1$. The assumptions we made about the distributions were designed to assure that $F_j(x) \ge 1-F_j(x)$ for $x\ge 0$. Thus, since $x=y\sigma\ge 0$, each term in the left product exceeds its corresponding term in the right product, implying the difference of products is nonnegative. The other factors $y f_n(y) f_i(y\sigma)$ are clearly nonnegative because densities cannot be negative and $y\ge 0$. 
We may conclude that $\phi^\prime(\sigma) \ge 0$ for $\sigma \ge 0$, proving that the chance that player $n$ wins increases with the standard deviation of $X_n$. This is enough to prove that runner $n$ will have the best chance of winning provided the standard deviation of $X_n$ is sufficiently large. This is not quite satisfactory, because a large SD could result in a physically unrealistic model (where negative winning times have appreciable chances). But suppose all the distributions have identical shapes apart from their standard deviations. In this case, when they all have the same SD, the $X_i$ are independent and identically distributed: nobody can have a greater or lesser chance of winning than anyone else, so all chances are equal (to $1/n$). Start by setting all distributions to that of runner $n$. Now gradually decrease the SDs of all other runners, one at a time. As this occurs, the chance that $n$ wins cannot decrease, while the chances of all the other runners have decreased. Consequently, $n$ has the greatest chance of winning, QED.
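The win probabilities quoted earlier (roughly 8:14:21:26:31% for SDs in the ratio 1:2:3:4:5) can be checked by Monte Carlo; a sketch, setting the common mean to 0, since a shared location shift does not change who is fastest:

```python
import random

# Monte Carlo estimate of each runner's chance of winning when all five
# have a common mean and standard deviations in the ratio 1:2:3:4:5.
random.seed(1)
sds = [1.0, 2.0, 3.0, 4.0, 5.0]
wins = [0] * len(sds)
trials = 200_000
for _ in range(trials):
    times = [random.gauss(0.0, s) for s in sds]
    wins[times.index(min(times))] += 1   # smallest time wins
probs = [w / trials for w in wins]
```

The estimated probabilities increase with the SD and the largest-SD runner wins about 31% of the time, matching the numerical integration.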
Same Mean, Different Variance
Although an exact probability cannot be computed (except in special circumstances with $n \le 2$), it can be numerically calculated quickly to high accuracy. Despite this limitation, it can be proven
Same Mean, Different Variance Although an exact probability cannot be computed (except in special circumstances with $n \le 2$), it can be numerically calculated quickly to high accuracy. Despite this limitation, it can be proven rigorously that the runner with the greatest standard deviation has the greatest chance of winning. The figure depicts the situation and shows why this result is intuitively obvious: Probability densities for the times of five runners are shown. All are continuous and symmetric about a common mean $\mu$. (Scaled Beta densities were used to ensure all times are positive.) One density, drawn in darker blue, has a much greater spread. The visible portion in its left tail represents times that no other runner can usually match. Because that left tail, with its relatively large area, represents appreciable probability, the runner with this density has the greatest chance of winning the race. (They also have the greatest chance of coming in last!) These results are proven for more than just Normal distributions: the methods presented here apply equally well to distributions that are symmetric and continuous. (This will be of interest to anyone who objects to using Normal distributions to model running times.) When these assumptions are violated it is possible that the runner with greatest standard deviation might not have the greatest chance of winning (I leave construction of counterexamples to interested readers), but we can still prove under milder assumptions that the runner with greatest SD will have the best chance of winning provided that SD is sufficiently large. The figure also suggests that the same results could be obtained by considering one-sided analogs of standard deviation (the so-called "semivariance"), which measure the dispersion of a distribution to one side only. 
A runner with great dispersion to the left (towards better times) ought to have a greater chance of winning, regardless of what happens in the rest of the distribution. These considerations help us appreciate how the property of being the best (in a group) differs from other properties such as averages.

Let $X_1, \ldots, X_n$ be random variables representing the runners' times. The question assumes they are independent and Normally distributed with common mean $\mu$. (Although this is literally an impossible model, because it posits positive probabilities for negative times, it can still be a reasonable approximation to reality provided the standard deviations are substantially smaller than $\mu$.)

In order to carry out the following argument, retain the supposition of independence but otherwise assume the distributions of the $X_i$ are given by $F_i$ and that these distributional laws can be anything. For convenience, also assume the distribution $F_n$ is continuous with density $f_n$. Later, as needed, we may apply additional assumptions provided they include the case of Normal distributions.

For any $y$ and infinitesimal $dy$, the chance that the last runner has a time in the interval $(y-dy, y]$ and is the fastest runner is obtained by multiplying all relevant probabilities (because all times are independent):

$$\Pr(X_n \in (y-dy, y], X_1 \gt y, \ldots, X_{n-1} \gt y) = f_n(y)dy(1-F_{1}(y))\cdots(1-F_{n-1}(y)).$$

Integrating over all these mutually exclusive possibilities yields

$$\Pr(X_n \le \min(X_1, X_2, \ldots, X_{n-1})) = \int_{\mathbb R} f_n(y)(1-F_1(y))\cdots(1-F_{n-1}(y))\, dy.$$

For Normal distributions, this integral cannot be evaluated in closed form when $n\gt 2$: it needs numerical evaluation.

This figure plots the integrand for each of five runners having standard deviations in the ratio 1:2:3:4:5. The larger the SD, the more the function is shifted to the left, and the greater its area becomes. The areas are approximately 8:14:21:26:31%.
In particular, the runner with the largest SD has a 31% chance of winning.

Although a closed form cannot be found, we can still draw solid conclusions and prove that the runner with the largest SD is most likely to win. We need to study what happens as the standard deviation of one of the distributions, say $F_n$, changes. When the random variable $X_n$ is rescaled by $\sigma \gt 0$ around its mean, its SD is multiplied by $\sigma$ and $f_n(y)dy$ will change to $f_n(y/\sigma)dy/\sigma$. Making the change of variable $y=x\sigma$ in the integral gives an expression for the chance of runner $n$ winning, as a function of $\sigma$:

$$\phi(\sigma) = \int_{\mathbb R} f_n(y)(1-F_1(y\sigma))\cdots(1-F_{n-1}(y\sigma))\, dy.$$

Suppose now that the medians of all $n$ distributions are equal and that all the distributions are symmetric and continuous, with densities $f_i$. (This certainly is the case under the conditions of the question, because a Normal median is its mean.) By a simple (locational) change of variable we may assume this common median is $0$; the symmetry means $f_n(y) = f_n(-y)$ and $1 - F_j(-y) = F_j(y)$ for all $y$. These relationships enable us to combine the integral over $(-\infty, 0]$ with the integral over $(0,\infty)$ to give

$$\phi(\sigma) = \int_0^{\infty} f_n(y)\left(\prod_{j=1}^{n-1}\left(1-F_j(y\sigma)\right)+\prod_{j=1}^{n-1}F_j(y\sigma)\right)\, dy.$$

The function $\phi$ is differentiable. Its derivative, obtained by differentiating the integrand, is a sum of integrals where each term is of the form

$$y f_n(y) f_i(y\sigma)\left(\prod_{j\ne i}^{n-1}F_j(y\sigma) - \prod_{j\ne i}^{n-1}(1-F_j(y\sigma))\right)$$

for $i=1, 2, \ldots, n-1$.

The assumptions we made about the distributions were designed to assure that $F_j(x) \ge 1-F_j(x)$ for $x\ge 0$. Thus, since $x=y\sigma\ge 0$, each term in the left product is at least as large as its corresponding term in the right product, implying the difference of products is nonnegative.
The other factors $y f_n(y) f_i(y\sigma)$ are clearly nonnegative because densities cannot be negative and $y\ge 0$. We may conclude that $\phi^\prime(\sigma) \ge 0$ for $\sigma \ge 0$, proving that the chance that player $n$ wins increases with the standard deviation of $X_n$.

This is enough to show that runner $n$ is most likely to win provided the standard deviation of $X_n$ is sufficiently large. This is not quite satisfactory, because a large SD could result in a physically unrealistic model (where negative winning times have appreciable chances). But suppose all the distributions have identical shapes apart from their standard deviations. In this case, when they all have the same SD, the $X_i$ are independent and identically distributed: nobody can have a greater or lesser chance of winning than anyone else, so all chances are equal (to $1/n$).

Start by setting all distributions to that of runner $n$. Now gradually decrease the SDs of all other runners, one at a time. As this occurs, the chance that $n$ wins cannot decrease, while the chances of all the other runners have decreased. Consequently, $n$ has the greatest chance of winning, QED.
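As a numerical check on the areas quoted above, the winning-probability integral can be evaluated with standard quadrature. This is a sketch, not code from the answer: the common mean (60) is an arbitrary choice, and the SDs are taken in the stated ratio 1:2:3:4:5.

```python
import numpy as np
from scipy import integrate
from scipy.stats import norm

mu = 60.0                                  # hypothetical common mean; its value does not matter
sds = np.array([1.0, 2.0, 3.0, 4.0, 5.0])  # SDs in the ratio 1:2:3:4:5

def win_prob(i):
    """P(runner i beats everyone) = integral of f_i(y) * prod_{j != i} (1 - F_j(y)) dy."""
    def integrand(y):
        p = norm.pdf(y, mu, sds[i])
        for j, s in enumerate(sds):
            if j != i:
                p *= norm.sf(y, mu, s)     # sf(y) = 1 - F(y)
        return p
    val, _ = integrate.quad(integrand, mu - 10 * sds.max(), mu + 10 * sds.max())
    return val

probs = np.array([win_prob(i) for i in range(len(sds))])
print(np.round(probs, 2))  # the five areas: they sum to 1 and increase with the SD
```

The probabilities partition the sample space (ties have probability zero), so they sum to one, and the chance of winning grows with the SD, as the proof asserts.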
18,827
Is there any difference between training a stacked autoencoder and a 2-layers neural network?
Stacked autoencoders and multi-layer neural networks are different. In practice, you'll have the two networks share weights and possibly share memory buffers. So in your implementation the two networks become entwined.

Typically, autoencoders are trained in an unsupervised, greedy, layer-wise fashion. (No labels; begin training with just the first layer of the network and then add new layers as you go.) The weights can be learned using a variety of techniques ranging from "batch" gradient descent (please don't do that), to mini-batch stochastic gradient descent (SGD), to quasi-Newton methods like L-BFGS.

The idea is that the weights learned in an unsupervised manner to minimize reconstruction error for the representation-learning task offer a good starting point to initialize a network for a supervised discriminative task such as classification or similarity. I.e., the network learns something about the underlying distribution by looking at the unlabeled data, allowing it to discriminate between labeled data. However, the weights still need to be "fine-tuned" for this new task. So add a logistic regression layer on top of the network and then do supervised learning with a labeled dataset. The fine-tuning step will do gradient descent and adjust the weights for all layers in the network simultaneously.

The advantages to this way of training neural nets are:

- Unsupervised training lets you show the network more data, because it's much easier to get large unsupervised datasets than it is to get labeled ones.
- You can use the pre-trained network as a "jumping off point" for training new classifiers, so you don't have to start from scratch each time.

For the paper, see Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion.
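To make the pretraining step concrete, here is a minimal sketch (all data and sizes are made up for illustration): a single autoencoder layer with tied weights is trained by plain gradient descent to minimize reconstruction error on unlabeled data. The learned encoder weights would then initialize the first layer of a supervised network before fine-tuning.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy unlabeled data: 200 samples of 8 features driven by 3 latent factors
Z = rng.normal(size=(200, 3))
X = Z @ rng.normal(size=(3, 8))

n_hidden, lr = 4, 0.1
W = rng.normal(scale=0.1, size=(8, n_hidden))  # tied weights: decoder uses W.T
b, c = np.zeros(n_hidden), np.zeros(8)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def reconstruct(X):
    H = sigmoid(X @ W + b)        # encode
    return H, H @ W.T + c         # decode through the transposed (tied) weights

_, R = reconstruct(X)
mse_before = np.mean((R - X) ** 2)

# unsupervised pretraining of this one layer: gradient descent on 0.5*||R - X||^2
for _ in range(2000):
    H, R = reconstruct(X)
    err = R - X
    dH = (err @ W) * H * (1 - H)
    gW = (X.T @ dH + err.T @ H) / len(X)   # both gradient paths through the tied weights
    W -= lr * gW
    b -= lr * dH.mean(axis=0)
    c -= lr * err.mean(axis=0)

_, R = reconstruct(X)
mse_after = np.mean((R - X) ** 2)
# W and b now give a data-driven initialization for the first layer of a
# supervised network; a logistic-regression layer stacked on sigmoid(X @ W + b)
# would then be fine-tuned jointly on labeled data.
```

Reconstruction error falls during pretraining, which is the sense in which the layer "learns something about the underlying distribution" before any labels are seen.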
18,828
Exact definition of Deviance measure in glmnet package, with crossvalidation?
In Friedman, Hastie, and Tibshirani (2010), the deviance of a binomial model, for the purpose of cross-validation, is calculated as minus twice the log-likelihood on the left-out data (p. 17). Given that this is the paper cited in the documentation for glmnet (on pp. 2 and 5), that is probably the formula used in the package. And indeed, in the source code for the function cvlognet, the deviance residuals for the response are calculated as

-2*((y==2)*log(predmat)+(y==1)*log(1-predmat))

where predmat is simply

predict(glmnet.object, x, lambda=lambda)

and is passed in from the enclosing cv.glmnet function. I used the source code available on the JStatSoft page for the paper, and I don't know how up-to-date that code is. The code for this package is surprisingly simple and readable; you can always check for yourself by typing glmnet:::cv.glmnet.
18,829
Exact definition of Deviance measure in glmnet package, with crossvalidation?
In addition to @shadowtalker's answer: when I was using the package glmnet, I noticed that the deviance in the cross-validation is somehow normalized.

library(glmnet)
data(BinomialExample)
fit = cv.glmnet(x, y, family = c("binomial"), intercept = FALSE)
head(fit$cvm)  # deviance from test samples at each lambda value
# >[1] 1.383916 1.359782 1.324954 1.289653 1.255509 1.223706
head(deviance(fit$glmnet.fit))  # deviance from (test samples? all samples?) at each lambda value
# >[1] 138.6294 134.5861 131.1912 127.1832 122.8676 119.1637

Ref: deviance R document

because if I do the division,

head(deviance(fit$glmnet.fit) / length(y))

the result is

[1] 1.386294 1.345861 1.311912 1.271832 1.228676 1.191637

which is very close to fit$cvm. This may be what the comment from @Hong Ooi meant on this question: https://stackoverflow.com/questions/43468665/poisson-deviance-glmnet
18,830
Exact definition of Deviance measure in glmnet package, with crossvalidation?
As @vtshen mentions, there must be a standardization of the deviance values in cv.glmnet. After tracing the function code provided by @shadowtalker, I have arrived at the lines that corroborate the hypothesis:

cvraw = switch(type,
    response = -2*((y==2)*log(predmat)+(y==1)*log(1-predmat)),
    class = y != predmat)
cvm = apply(cvraw, 2, mean)

Instead of calculating the sum of the residual deviances as the deviance() function does, in cv.glmnet (cvlognet) they take the mean of them. So, yes, the deviance is normalized by the number of observations.
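The sum-vs-mean relationship is easy to verify directly from the binomial deviance formula. This sketch uses synthetic outcomes and fitted probabilities; it only illustrates the arithmetic, not glmnet itself.

```python
import numpy as np

rng = np.random.default_rng(1)
y = rng.integers(0, 2, size=100)        # 0/1 outcomes
p = rng.uniform(0.05, 0.95, size=100)   # fitted probabilities

# per-observation binomial deviance: -2 * log-likelihood contribution
dev_i = -2 * (y * np.log(p) + (1 - y) * np.log(1 - p))

total_deviance = dev_i.sum()   # the sum, as deviance() on the fit reports
mean_deviance = dev_i.mean()   # the mean, as cv.glmnet stores in cvm
print(np.isclose(total_deviance / len(y), mean_deviance))  # True
```

Dividing the summed deviance by the number of observations reproduces the mean, which is exactly the relationship observed between deviance(fit$glmnet.fit) and fit$cvm.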
18,831
difference-in-differences with fixed effects
The model is fine, but instead of standardizing the treatment years there is an easier way to incorporate different treatment times in difference-in-differences (DiD) models, which is to regress

$$y_{it} = \beta_0 + \beta_1 \text{treat}_i + \sum^T_{t=2} \beta_t \text{year}_t + \delta \text{policy}_{it} + \gamma C_{it} + \epsilon_{it}$$

where $\text{treat}$ is a dummy for being in the treatment group, $\text{policy}$ is a dummy for each individual that equals 1 if the individual is in the treatment group after the policy intervention/treatment, $C$ are individual characteristics, and $\text{year}$ is a full set of year dummies. This is a different version of the DiD model you stated above, but it does not require standardization of treatment years because it allows for multiple treatment periods (for an explanation see pages 8/9 in these slides).

With regard to the second question, you can include time-invariant variables at the individual level. You cannot add them at the group level (treatment vs. control) because they will be absorbed by the $\text{treat}$ dummy. You can still include individual control variables like gender, but note that they do not play a major role in DiD analyses. Their only benefit is that they may reduce the residual variance and hence increase the power of your statistical tests (see slide 8 here).
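A small simulation shows the regression above recovering a known treatment effect. Everything here is made up for illustration (200 individuals, years 2000-2005, treatment starting in 2003, true effect 2.0); the estimation is plain OLS via least squares on the dummy design.

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind = 200
years = np.arange(2000, 2006)
treat = (np.arange(n_ind) < n_ind // 2).astype(float)  # first half treated

rows = []
for i in range(n_ind):
    for t in years:
        policy = treat[i] * (t >= 2003)  # treated AND post-intervention
        y = (1.0 + 0.5 * treat[i] + 0.3 * (t - 2000)
             + 2.0 * policy + rng.normal(scale=1.0))
        rows.append((y, treat[i], t, policy))
y_v, tr, yr, pol = np.array(rows).T

# design matrix: intercept, treat dummy, year dummies (base year 2000), policy
X = np.column_stack([np.ones_like(y_v), tr]
                    + [(yr == t).astype(float) for t in years[1:]]
                    + [pol])
beta, *_ = np.linalg.lstsq(X, y_v, rcond=None)
delta = beta[-1]  # the DiD estimate of the policy effect; close to the true 2.0
```

Because `policy` varies both across groups and over time while the group and year dummies absorb the level differences, $\delta$ isolates the treatment effect without any standardization of treatment years.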
18,832
How to fit mixture model for clustering
Here is a script for fitting a mixture model using mclust.

X <- c(rnorm(200, 10, 3), rnorm(200, 25, 3), rnorm(200, 35, 3), rnorm(200, 65, 3), rnorm(200, 80, 5))
Y <- c(rnorm(1000, 30, 2))
plot(X, Y, ylim = c(10, 60), pch = 19, col = "gray40")
require(mclust)
xyMclust <- Mclust(data.frame(X, Y))
plot(xyMclust)

In a situation where there are fewer than 5 clusters:

X1 <- c(rnorm(200, 10, 3), rnorm(200, 25, 3), rnorm(200, 35, 3), rnorm(200, 80, 5))
Y1 <- c(rnorm(800, 30, 2))
xyMclust <- Mclust(data.frame(X1, Y1))
plot(xyMclust)
xyMclust4 <- Mclust(data.frame(X1, Y1), G = 3)
plot(xyMclust4)

In this case we are fitting 3 clusters. What if we fit 5 clusters?

xyMclust4 <- Mclust(data.frame(X1, Y1), G = 5)
plot(xyMclust4)

It can be forced to make 5 clusters. Also, let's introduce some random noise:

X2 <- c(rnorm(200, 10, 3), rnorm(200, 25, 3), rnorm(200, 35, 3), rnorm(200, 80, 5), runif(50, 1, 100))
Y2 <- c(rnorm(850, 30, 2))
xyMclust1 <- Mclust(data.frame(X2, Y2))
plot(xyMclust1)

mclust allows model-based clustering with noise, namely outlying observations that do not belong to any cluster. It also allows you to specify a prior distribution to regularize the fit to the data. A function priorControl is provided in mclust for specifying the prior and its parameters. When called with its defaults, it invokes another function called defaultPrior, which can serve as a template for specifying alternative priors. To include noise in the modeling, an initial guess of the noise observations must be supplied via the noise component of the initialization argument in Mclust or mclustBIC.

The other alternative would be to use the mixtools package, which allows you to specify the mean and sigma for each component.
X2 <- c(rnorm(200, 10, 3), rnorm(200, 25, 3), rnorm(200, 35, 3), rnorm(200, 80, 5), rpois(50, 30))
Y2 <- c(rnorm(800, 30, 2), rpois(50, 30))
df <- cbind(X2, Y2)
require(mixtools)
out <- mvnormalmixEM(df, lambda = NULL, mu = NULL, sigma = NULL, k = 5,
                     arbmean = TRUE, arbvar = TRUE, epsilon = 1e-08,
                     maxit = 10000, verb = FALSE)
plot(out, density = TRUE, alpha = c(0.01, 0.05, 0.10, 0.12, 0.15), marginal = TRUE)
18,833
How to fit mixture model for clustering
One standard approach is Gaussian mixture models, which are trained by means of the EM algorithm. But since you also notice that the number of clusters may vary, you may also consider a nonparametric model like the Dirichlet-process GMM, which is also implemented in scikit-learn. In R, these two packages seem to offer what you need:

http://cran.r-project.org/web/packages/dpmixsim/index.html
http://cran.r-project.org/web/packages/profdpm/index.html
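For intuition about the EM training itself, here is a bare-bones sketch of EM for a one-dimensional Gaussian mixture, applied to synthetic data shaped like the X-positions in the question (three well-separated segments). The data, initial guesses, and component count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
# 1-D data from three well-separated components (true means 10, 25, 35)
x = np.concatenate([rng.normal(10, 2, 200),
                    rng.normal(25, 2, 200),
                    rng.normal(35, 2, 200)])

K = 3
mu = np.array([5.0, 20.0, 40.0])   # rough initial guesses for the means
var = np.full(K, 4.0)
pi = np.full(K, 1.0 / K)

def normal_pdf(x, m, v):
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(100):
    # E-step: responsibility of each component for each point
    dens = np.stack([pi[k] * normal_pdf(x, mu[k], var[k]) for k in range(K)], axis=1)
    r = dens / dens.sum(axis=1, keepdims=True)
    # M-step: re-estimate mixing weights, means, and variances
    nk = r.sum(axis=0)
    pi = nk / len(x)
    mu = (r * x[:, None]).sum(axis=0) / nk
    var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk

print(np.round(np.sort(mu), 1))  # close to the true means 10, 25, 35
```

This is the same iteration that Mclust and mvnormalmixEM perform internally (they additionally handle multivariate covariance structures, model selection, and initialization).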
18,834
Why does PCA maximize total variance of the projection?
What is understood by variance in several dimensions ("total variance") is simply the sum of variances in each dimension. Mathematically, it's the trace of the covariance matrix: the trace is simply the sum of all diagonal elements. This definition has various nice properties, e.g. the trace is invariant under orthogonal linear transformations, which means that if you rotate your coordinate axes, the total variance stays the same.

What is proved in Bishop's book (section 12.1.1) is that the leading eigenvector of the covariance matrix gives the direction of maximal variance. The second eigenvector gives the direction of maximal variance under the additional constraint that it should be orthogonal to the first eigenvector, etc. (I believe this constitutes Exercise 12.1.) If the goal is to maximize the total variance in the 2D subspace, then this procedure is a greedy maximization: first choose one axis that maximizes variance, then another one. Your question is: why does this greedy procedure obtain a global maximum?

Here is a nice argument that @whuber suggested in the comments. Let us first align the coordinate system with the PCA axes. The covariance matrix becomes diagonal: $\boldsymbol{\Sigma} = \mathrm{diag}(\lambda_i)$. For simplicity we will consider the same 2D case, i.e. what is the plane with maximal total variance? We want to prove that it is the plane given by the first two basis vectors (with total variance $\lambda_1+\lambda_2$).

Consider a plane spanned by two orthogonal vectors $\mathbf{u}$ and $\mathbf{v}$. The total variance in this plane is

$$\mathbf{u}^\top\boldsymbol{\Sigma}\mathbf{u} + \mathbf{v}^\top\boldsymbol{\Sigma}\mathbf{v} = \sum \lambda_i u_i^2 + \sum \lambda_i v_i^2 = \sum \lambda_i (u_i^2+v_i^2).$$

So it is a linear combination of eigenvalues $\lambda_i$ with coefficients that are all positive, do not exceed $1$ (see below), and sum to $2$. If so, then it is almost obvious that the maximum is reached at $\lambda_1 + \lambda_2$.
It is only left to show that the coefficients cannot exceed $1$. Notice that $u_k^2+v_k^2 = (\mathbf{u}\cdot\mathbf{k})^2+(\mathbf{v}\cdot\mathbf{k})^2$, where $\mathbf{k}$ is the $k$-th basis vector. This quantity is a squared length of a projection of $\mathbf k$ onto the plane spanned by $\mathbf u$ and $\mathbf v$. Therefore it has to be smaller than the squared length of $\mathbf k$ which is equal to $|\mathbf{k}|^2=1$, QED. See also @cardinal's answer to What is the objective function of PCA? (it follows the same logic).
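The claim can also be checked empirically: no 2-D subspace captures more total variance than the plane of the top two eigenvectors. This sketch (with an arbitrary random covariance matrix) compares the eigenplane against many random orthonormal pairs.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
C = A @ A.T                        # an arbitrary 5x5 covariance matrix
evals, evecs = np.linalg.eigh(C)   # eigenvalues in ascending order
best = evals[-1] + evals[-2]       # total variance in the top-2 eigenplane

# no randomly chosen 2-D subspace should capture more total variance
beaten = False
for _ in range(1000):
    Q, _ = np.linalg.qr(rng.normal(size=(5, 2)))  # random orthonormal pair u, v
    total_var = np.trace(Q.T @ C @ Q)             # u'Su + v'Sv
    if total_var > best + 1e-9:
        beaten = True
print(beaten)  # False
```

The trace of the projected covariance, $\mathrm{tr}(Q^\top \boldsymbol{\Sigma} Q)$, is exactly the total variance in the plane spanned by the columns of $Q$, so the loop is a direct numerical test of the inequality proved above.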
18,835
Why does PCA maximize total variance of the projection?
If you have $N$ uncorrelated random variables sorted in descending order of their variance and were asked to choose $k$ of them such that the variance of their sum is maximized, would you agree that the greedy approach of picking the first $k$ would accomplish that?

The data projected onto the eigenvectors of its covariance matrix is essentially $N$ uncorrelated columns of data whose variances equal the respective eigenvalues.

For the intuition to be clearer we need to relate variance maximization with computing the eigenvector of the covariance matrix with the largest eigenvalue, and relate orthogonal projection to removing correlations. The second relation is clear to me because the correlation coefficient between two (zero-mean) vectors is proportional to their inner product.

The relation between maximizing variance and the eigen-decomposition of the covariance matrix is as follows. Assume that $D$ is the data matrix after centering the columns. We need to find the direction of maximum variance. For any unit vector $v$, the variance after projecting along $v$ is $E[(Dv)^t Dv] = v^t E[D^tD] v = v^t Cov(D) v$, which is maximized if $v$ is the eigenvector of $Cov(D)$ corresponding to the largest eigenvalue.
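The decorrelation claim is easy to verify numerically: projecting centered data onto the eigenvectors of its covariance matrix yields uncorrelated columns whose variances are the eigenvalues. A small sketch with synthetic correlated data:

```python
import numpy as np

rng = np.random.default_rng(1)
D = rng.normal(size=(500, 3)) @ rng.normal(size=(3, 3))  # correlated columns
D -= D.mean(axis=0)                                      # center the columns
C = D.T @ D / (len(D) - 1)                               # sample covariance

evals, V = np.linalg.eigh(C)
Z = D @ V                          # project the data onto the eigenvectors
Cz = Z.T @ Z / (len(D) - 1)        # covariance of the projected data

off_diagonal = Cz - np.diag(np.diag(Cz))
# the columns of Z are uncorrelated and their variances are the eigenvalues
print(np.allclose(off_diagonal, 0, atol=1e-10), np.allclose(np.diag(Cz), evals))
```

This holds exactly (up to floating point) because $V^\top C V = \mathrm{diag}(\lambda_i)$ for an orthogonal eigenbasis $V$.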
18,836
Which is better, stl or decompose?
I would say STL. STL does trend and seasonal; see: http://www.wessa.net/download/stl.pdf Decompose only does seasonal; see the documentation here: http://stat.ethz.ch/R-manual/R-devel/library/stats/html/decompose.html When you work with them, be sure to include your trend type (multiplicative, additive) and season type (multiplicative, additive). Trends can also sometimes have a damping factor. By multiplicative decomposition I assume you mean in the case of the trend. You are not likely to use multiplicative decomposition unless you are decomposing an exponential growth function.
18,837
Which is better, stl or decompose?
Disadvantages of the decompose function in R: The estimate of the trend is unavailable for the first few and last few observations. It assumes that the seasonal component repeats from year to year. So I would prefer STL. It is possible to obtain a multiplicative decomposition by first taking logs of the data and then back-transforming the components.
18,838
Which is better, stl or decompose?
STL is a more advanced technique to extract seasonality, in the sense that it allows seasonality to vary, which is not the case with decompose. To get an understanding of how STL works: the algorithm extracts every seasonal sub-series (in a 7-day seasonality, it will work with 7 sub-series: the Monday time series, the Tuesday time series, etc.), and then estimates the local seasonality by running a loess regression on every sub-series. This allows it to capture the varying effect in the seasonality. If you do not want your seasonality to vary (in other words, the estimated effect of each sub-series will remain constant across the whole time series), you can specify the seasonal window to be infinite or "periodic". This is equivalent to averaging each sub-series, giving an equal weight to all points (you do not have any "local" effect anymore). decompose is essentially the same, as the seasonal sub-components remain constant across your whole time series, which is a special configuration of STL. This is pretty well explained here: https://www.otexts.org/fpp/6/1. STL estimates seasonality in an additive way. As explained a few pages later in the previous source, you can estimate seasonality in a multiplicative way by resorting to a log transformation (or a Box-Cox transformation).
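The "periodic" sub-series averaging described above can be sketched in a few lines of NumPy on hypothetical weekly data — this mimics the mechanism of decompose() or stl(..., s.window = "periodic"), not the actual R implementations:

```python
import numpy as np

period, n_cycles = 7, 40
rng = np.random.default_rng(2)
t = np.arange(n_cycles * period)
season = np.tile([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.5], n_cycles)
season -= season.mean()                       # zero-mean seasonal pattern
y = 0.05 * t + season + rng.normal(scale=0.1, size=t.size)

# 1) Trend: centred moving average over one full period (as decompose() does);
#    trend is unavailable for the first and last period//2 observations.
trend = np.convolve(y, np.ones(period) / period, mode='valid')
detrended = y[period // 2 : -(period // 2)] - trend

# 2) "Periodic" seasonality: average each seasonal sub-series
#    (all Mondays, all Tuesdays, ...) into one constant weekly pattern.
phase = t[period // 2 : -(period // 2)] % period
pattern = np.array([detrended[phase == p].mean() for p in range(period)])
pattern -= pattern.mean()

# The constant pattern recovers the true seasonal effects to within the noise.
assert np.abs(pattern - season[:period]).max() < 0.1
```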
18,839
Autocovariance of an ARMA(2,1) process - derivation of analytical model for $\gamma( k)$
If the ARMA process is causal there is a general formula that provides the autocovariance coefficients. Consider the causal $\text{ARMA}(p,q)$ process $$ y_t = \sum_{i = 1}^p \phi_i y_{t-i} + \sum_{j = 1}^q \theta_j \epsilon_{t - j} + \epsilon_t, $$ where $\epsilon_t$ is a white noise with mean zero and variance $\sigma_\epsilon^2$. By the causality property, the process can be written as $$ y_t = \sum_{j = 0}^\infty \psi_j \epsilon_{t - j}, $$ where $\psi_j$ denotes the $\psi$-weights. The general homogeneous equation for the autocovariance coefficients of a causal $\text{ARMA}(p,q)$ process is $$ \gamma (k) - \phi_1 \gamma (k-1) - \cdots - \phi_p \gamma (k-p) = 0, \quad k \geq \max (p, q+1), $$ with initial conditions $$ \gamma (k) - \sum_{j = 1}^p \phi_j \gamma (k-j) = \sigma_\epsilon^2 \sum_{j = k}^q \theta_j \psi_{j - k}, \quad 0 \leq k < \max (p, q+1). $$
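A sketch of this recipe for an $\text{ARMA}(2,1)$ with illustrative coefficients: compute the $\psi$-weights by the standard recursion, obtain $\gamma(k) = \sigma_\epsilon^2 \sum_j \psi_j \psi_{j+k}$ from the $MA(\infty)$ form, and verify that the homogeneous equation holds for $k \geq \max(p, q+1) = 2$:

```python
import numpy as np

phi = [0.5, 0.3]     # AR coefficients (chosen causal), illustrative values
theta = [0.4]        # MA coefficient, illustrative value
sigma2 = 1.0
p, q = len(phi), len(theta)

# psi-weights of the causal MA(infinity) representation:
# psi_0 = 1, psi_j = theta_j + sum_i phi_i psi_{j-i}  (theta_j = 0 for j > q).
N = 500
psi = np.zeros(N)
psi[0] = 1.0
for j in range(1, N):
    psi[j] = (theta[j - 1] if j <= q else 0.0) \
        + sum(phi[i] * psi[j - 1 - i] for i in range(min(p, j)))

# Autocovariances from the MA(infinity) form, truncated at N terms:
# gamma(k) = sigma^2 * sum_j psi_j psi_{j+k}
gamma = [sigma2 * np.dot(psi[: N - k], psi[k:]) for k in range(10)]

# The homogeneous equation holds for k >= max(p, q + 1) = 2.
for k in range(2, 10):
    assert abs(gamma[k] - phi[0] * gamma[k - 1] - phi[1] * gamma[k - 2]) < 1e-8
```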
18,840
Autocovariance of an ARMA(2,1) process - derivation of analytical model for $\gamma( k)$
Your calculation mistake in your original question lies in $$\theta_1\phi_1\mathrm{E}\left[\epsilon_{t-1}y_{t-1}\right] = \theta_1\phi_1\mathrm{E}\left[\epsilon_{t-1}\right]\mathrm{E}\left[y_{t-1}\right] = 0 \qquad \text{(mistaken)}$$ You cannot separate the expectation $\mathrm{E}\left[\epsilon_{t-1}y_{t-1}\right]$ - $\epsilon_{t-1}$ and $y_{t-1}$ are not independent.
18,841
Autocovariance of an ARMA(2,1) process - derivation of analytical model for $\gamma( k)$
I would like to share here an approach for calculating the initial conditions mentioned in the answer from @QuantIbex, since the appearance of the $\psi$ parameters from the $MA(\infty)$ representation in that equation does not allow for immediate calculation based on the original ARMA parameters. Using ARMA(2,2) as an example, we first multiply the ARMA equation $$\phi_0 y_t + \phi_1 y_{t-1} + \phi_2 y_{t-2} = \theta_0 \epsilon_t + \theta_1 \epsilon_{t-1} + \theta_2 \epsilon_{t-2}$$ by $\epsilon_t$, $\epsilon_{t-1}$, and $\epsilon_{t-2}$, and take expectations to get three equations which can be written in matrix form as $$ \begin{bmatrix} \delta_0 & 0 & 0\\ \delta_1 & \delta_0 & 0\\ \delta_2 & \delta_1 & \delta_0 \end{bmatrix} \begin{bmatrix} \phi_0 \\ \phi_1 \\ \phi_2 \end{bmatrix} = \sigma^2_{\epsilon} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \end{bmatrix} $$ where $\delta_{j-i} = \mathbb{E}[\epsilon_{t-j} y_{t-i}]$, which is equal to $0$ for $j < i$ (i.e. when the noise term lies in the future of the observation). The left-hand side of the above can be re-arranged so that we can write $$ \mathbf{L} \boldsymbol{\delta} = \begin{bmatrix} \phi_0 & 0 & 0\\ \phi_1 & \phi_0 & 0\\ \phi_2 & \phi_1 & \phi_0 \end{bmatrix} \begin{bmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \end{bmatrix} = \sigma^2_{\epsilon} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \end{bmatrix} = \sigma^2_{\epsilon} \boldsymbol{\theta} $$ which can be readily solved by forward substitution, which we express formally as $$ \boldsymbol{\delta} = \sigma^2_{\epsilon} \boldsymbol{L}^{-1} \boldsymbol{\theta}. $$ Often we would have $\phi_0 = \theta_0 = 1$, which would result in $\delta_0 = \sigma^2_{\epsilon}$.
Then we multiply the ARMA equation by $y_t$, $y_{t-1}$, and $y_{t-2}$, and take expectations to get three equations which can be written in matrix form as $$ \mathbf{\Gamma} \boldsymbol{\phi} = \begin{bmatrix} \gamma_0 & \gamma_1 & \gamma_2 \\ \gamma_1 & \gamma_0 & \gamma_1 \\ \gamma_2 & \gamma_1 & \gamma_0 \end{bmatrix} \begin{bmatrix} \phi_0 \\ \phi_1 \\ \phi_2 \end{bmatrix} = \begin{bmatrix} \delta_0 & \delta_1 & \delta_2 \\ 0 & \delta_0 & \delta_1 \\ 0 & 0 & \delta_0 \end{bmatrix} \begin{bmatrix} \theta_0 \\ \theta_1 \\ \theta_2 \end{bmatrix} $$ where $$\gamma_{|i-j|} = \mathbb{E}[y_{t-i}y_{t-j}] = \mathbb{Cov}[y_{t-i},y_{t-j}]$$ where the last equality is due to us using a "demeaned" process for which $\mathbb{E}[y_{t}] = 0$. The above can also be rearranged as $$ \mathbf{\Phi} \boldsymbol{\gamma} = \begin{bmatrix} \phi_0 & \phi_1 & \phi_2 \\ \phi_1 & \phi_0 + \phi_2 & 0\\ \phi_2 & \phi_1 & \phi_0 \end{bmatrix} \begin{bmatrix} \gamma_0 \\ \gamma_1 \\ \gamma_2 \end{bmatrix} = \begin{bmatrix} \theta_0 & \theta_1 & \theta_2 \\ \theta_1 & \theta_2 & 0 \\ \theta_2 & 0 & 0 \end{bmatrix} \begin{bmatrix} \delta_0 \\ \delta_1 \\ \delta_2 \end{bmatrix} = \mathbf{U} \boldsymbol{\delta} $$ which leads to a solution for the initial condition of the auto-covariance function $$ \boldsymbol{\gamma} = \mathbf{\Phi}^{-1}\mathbf{U} \boldsymbol{\delta} = \sigma^2_{\epsilon} \mathbf{\Phi}^{-1}\mathbf{U} \boldsymbol{L}^{-1} \boldsymbol{\theta}. $$ The general forms for the triangular matrices $\mathbf{L}$ and $\mathbf{U}$ are apparent from the above example.
The elements of $\mathbf{\Phi}$ in row $i$ and column $j$ can be inferred by looking at the row numbers of $\gamma_i$ in column $j$ of the $\boldsymbol{\Gamma}$ matrix and are in general given by (I believe) $$ (\mathbf{\Phi})_{ij} = \sum_{k=0}^{p} \phi_k \mathbb{I}(|k - i| = j) $$ where we assume that the row and column index begins with $0$ (not $1$) and where $\mathbb{I}(A)$ is the indicator function which is equal to 1 when $A$ is TRUE and $0$ otherwise.
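A numerical sketch of the two linear systems for an illustrative ARMA(2,2) (parameter values are made up; note the sign convention above, under which an AR part $y_t = 0.5 y_{t-1} + 0.3 y_{t-2} + \dots$ corresponds to $\phi = (1, -0.5, -0.3)$):

```python
import numpy as np

phi = np.array([1.0, -0.5, -0.3])   # phi_0, phi_1, phi_2 (post's convention)
theta = np.array([1.0, 0.4, 0.2])   # theta_0, theta_1, theta_2
s2 = 1.0                            # sigma_eps^2

# delta = s2 * L^{-1} theta  (forward substitution on the lower-triangular L)
L = np.array([[phi[0], 0.0,    0.0],
              [phi[1], phi[0], 0.0],
              [phi[2], phi[1], phi[0]]])
delta = s2 * np.linalg.solve(L, theta)

# gamma = Phi^{-1} U delta  gives the initial autocovariances gamma_0..gamma_2.
Phi = np.array([[phi[0], phi[1],          phi[2]],
                [phi[1], phi[0] + phi[2], 0.0],
                [phi[2], phi[1],          phi[0]]])
U = np.array([[theta[0], theta[1], theta[2]],
              [theta[1], theta[2], 0.0],
              [theta[2], 0.0,      0.0]])
gamma = np.linalg.solve(Phi, U @ delta)
print(gamma)   # [gamma_0, gamma_1, gamma_2]
```

A useful cross-check is that delta reproduces the first $\psi$-weights times $\sigma^2_\epsilon$, and that gamma agrees with the truncated $MA(\infty)$ sums.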
18,842
Autocovariance of an ARMA(2,1) process - derivation of analytical model for $\gamma( k)$
OK. So the process of writing the post actually pointed me to the solution. Consider the Expectation terms 1, 2, 5 and 6 from above that I thought should be 0. Immediately for terms 5 - $\mathrm{E}\left[\epsilon_ty_{t-1}\right]$ - and 6 - $\mathrm{E}\left[\epsilon_ty_{t-2}\right]$: these terms are definitely zero, because $y_{t-1}$ and $y_{t-2}$ are independent of $\epsilon_t$ and $\mathrm{E}\left[\epsilon_t\right] = 0$. However, terms 1 and 2 look as though the Expectation is of two correlated variables. So, consider the expressions for $y_{t-1}$ and $y_{t-2}$ thus: $$ y_{t-1} = \phi_1y_{t-2}+\phi_2y_{t-3}+\theta_1\epsilon_{t-2}+\epsilon_{t-1}\\ y_{t-2} = \phi_1y_{t-3}+\phi_2y_{t-4}+\theta_1\epsilon_{t-3}+\epsilon_{t-2} $$ And recall term 1 - $\phi_1\theta_1\mathrm{E}\left[\epsilon_{t-1}y_{t-1}\right]$. If we multiply both sides of the expression for $y_{t-1}$ by $\epsilon_{t-1}$ and then take Expectations, it is clear that all terms on the right hand side except the last become zero (because the values of $y_{t-2}$, $y_{t-3}$, and $\epsilon_{t-2}$ are independent of $\epsilon_{t-1}$ and $\mathrm{E}\left[\epsilon_{t-1}\right]=0$) to give: $$\mathrm{E}\left[\epsilon_{t-1}y_{t-1}\right] = \mathrm{E}\left[\left(\epsilon_{t-1}\right)^2\right] = \sigma_{\epsilon}^2$$ So term 1 becomes $+\phi_1\theta_1\sigma_{\epsilon}^2$. For term 2, it should be clear that, by the same logic, all terms are zero. Hence the original model answer was correct. However, if anyone can suggest an alternative way to obtain a general (even if messy) solution, I would be very pleased to hear it!
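A quick simulation check of $\mathrm{E}\left[\epsilon_{t-1}y_{t-1}\right] = \sigma_{\epsilon}^2$ (equivalently $\mathrm{E}\left[\epsilon_{t}y_{t}\right]$, by stationarity), and of the claim that the expectation vanishes when the noise comes after the observation; the parameter values are illustrative:

```python
import numpy as np

phi1, phi2, theta1 = 0.5, 0.3, 0.4   # an illustrative causal ARMA(2,1)
rng = np.random.default_rng(3)
n = 200_000
eps = rng.normal(size=n)             # white noise with sigma_eps^2 = 1

# Simulate y_t = phi1 y_{t-1} + phi2 y_{t-2} + theta1 eps_{t-1} + eps_t.
y = np.zeros(n)
for t in range(2, n):
    y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + theta1 * eps[t - 1] + eps[t]

burn = 1_000                                   # discard the start-up transient
print(np.mean(eps[burn:] * y[burn:]))          # E[eps_t y_t]     -> close to 1
print(np.mean(eps[burn + 1:] * y[burn:-1]))    # E[eps_t y_{t-1}] -> close to 0
```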
18,843
Gaussian Processes: How to use GPML for multi-dimensional output
I believe Twin Gaussian Processes is exactly what you are looking for. I can't describe the model better than the abstract of the paper itself, so I'm just gonna copy paste it: We describe twin Gaussian processes (TGP) [1], a generic structured prediction method that uses Gaussian process (GP) priors [2] on both covariates and responses, both multivariate, and estimates outputs by minimizing the Kullback-Leibler divergence between two GP modeled as normal distributions over finite index sets of training and testing examples, emphasizing the goal that similar inputs should produce similar percepts and this should hold, on average, between their marginal distributions. TGP captures not only the interdependencies between covariates, as in a typical GP, but also those between responses, so correlations among both inputs and outputs are accounted for. TGP is exemplified, with promising results, for the reconstruction of 3d human poses from monocular and multicamera video sequences in the recently introduced HumanEva benchmark, where we achieve 5 cm error on average per 3d marker for models trained jointly, using data from multiple people and multiple activities. The method is fast and automatic: it requires no hand-crafting of the initial pose, camera calibration parameters, or the availability of a 3d body model associated with human subjects used for training or testing. The authors have generously provided code and sample datasets for getting started.
18,844
Gaussian Processes: How to use GPML for multi-dimensional output
Short answer Regression for multi-dimensional output is a little tricky and, to my current knowledge, not directly incorporated in the GPML toolbox. Long answer You can break your multi-dimensional output regression problem down into 3 different cases. Outputs are not related to each other - Just regress the outputs individually, as in the demo script for the 1d case. Outputs are related but the relation between them is unknown - You would basically like to learn the inner relations between the outputs. As the book mentions, coKriging is a good way to start. There is software other than GPML that lets you perform cokriging directly, e.g. ooDace. Outputs are related and you know the relation between them - Perform a regular cokriging, but you can apply hard constraints between the outputs, either by applying the constraints in the optimizer (while you minimize the log marginal likelihood) as in Hall & Huang 2001, or by applying the relationships in the prior function as in Constantinescu & Anitescu 2013. I hope it helps :)
18,845
Gaussian Processes: How to use GPML for multi-dimensional output
This is a module from scikit-learn which worked surprisingly well for me: http://scikit-learn.org/stable/auto_examples/gaussian_process/plot_gp_regression.html

import numpy as np
from sklearn.gaussian_process import GaussianProcess  # legacy API (scikit-learn < 0.18)

# Instantiate a Gaussian process model
gp = GaussianProcess(corr='cubic', theta0=1e-2, thetaL=1e-4, thetaU=1e-1,
                     random_start=100)
# Fit to data using maximum likelihood estimation of the parameters
gp.fit(X, y)
# Make the prediction on the meshed x-axis (ask for MSE as well)
y_pred, MSE = gp.predict(x, eval_MSE=True)
sigma = np.sqrt(MSE)
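As a caveat, the GaussianProcess class in that example was deprecated and later removed from scikit-learn. A hypothetical sketch with its replacement, GaussianProcessRegressor, which also accepts a 2-D y for multi-dimensional output (each output column is fitted independently, with shared kernel hyperparameters); the data and kernel choice here are illustrative:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(50, 1))
Y = np.hstack([np.sin(X), np.cos(X)])        # two output dimensions

gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), alpha=1e-6)
gp.fit(X, Y)                                  # a 2-D Y is accepted directly

x_new = np.linspace(0, 10, 100).reshape(-1, 1)
y_pred, y_std = gp.predict(x_new, return_std=True)
print(y_pred.shape)                           # (100, 2): one column per output
```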
18,846
Gaussian Processes: How to use GPML for multi-dimensional output
I was searching for multi-output Gaussian Processes and found many ways to approach it, such as the convolution method, the mixed-effects modeling method, and most recently Twin Gaussian Processes (TGP). I have a doubt about the concept of Twin Gaussian Processes (TGP). Can anybody help me with that? In TGP, the authors find the predicted output ($\hat{y}$) by minimizing the KL divergence between the input and output, and vice versa. But in general, we look for the predictive distribution of the output, i.e. $p(y^*|\mathbf{y}) \sim \mathcal{N}(\mu, \sigma^2)$. One thing to remark here is that $y$ plays no role in the predictive variance $\sigma^2$. In the case of TGP, is the predicted output $\hat{y}$ the same as the mean of the predictive distribution of $y$?
18,847
Variance-covariance matrix of the errors in linear regression
The covariance matrix for a model of the type $y = X\beta + \epsilon$ is usually computed as $$(X^t X)^{-1}\frac{\sigma^2}{d}$$ where $\sigma^2$ here denotes the residual sum of squares, $\sigma^2=\sum_i (y_i - X_i\hat\beta)^2$, and $d$ is the degrees of freedom (typically the number of observations minus the number of parameters). For robust and/or clustered standard errors, the product $X^t X$ is modified slightly. There may also be other ways to calculate the covariance matrix, e.g. as suggested by the expectation of outer products.
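The formula translates directly into code. The data below are simulated (none are given in the question), so the specific numbers are only illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 200, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])  # includes intercept
beta = np.array([1.0, 2.0, -1.0])
y = X @ beta + rng.normal(size=n)          # unit-variance errors

beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
rss = resid @ resid                        # residual sum of squares
d = n - p                                  # degrees of freedom
cov_beta = np.linalg.inv(X.T @ X) * rss / d
std_errors = np.sqrt(np.diag(cov_beta))    # standard errors of the coefficients
```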
18,848
Variance-covariance matrix of the errors in linear regression
OLS estimation of the error variance, $\sigma^2$: $$s^2=\frac{\hat \varepsilon^\top\hat \varepsilon}{n-p}$$ This is included in Practical Regression and Anova using R by Julian J. Faraway, page 21. Example of its calculation in R, based on a linear model of miles-per-gallon regressed on multiple car model specs included in the mtcars database: ols = lm(mpg ~ disp + drat + wt, mtcars). These are the manual calculations and the output of the lm() function:

> X = model.matrix(ols)           # Model matrix X (defined up front; also used below)
> rdf = nrow(X) - ncol(X)         # Residual degrees of freedom
> s.sq = as.vector((t(ols$residuals) %*% ols$residuals) / rdf)
>                                 # s square (OLS estimate of sigma square)
> (sigma = sqrt(s.sq))            # Residual standard error
[1] 2.950507
> summary(ols)

Call:
lm(formula = mpg ~ disp + drat + wt, data = mtcars)
...
Residual standard error: 2.951 on 28 degrees of freedom

Variance - Covariance matrix of the estimated coefficients, $\hat \beta$: $$\mathrm{Var}\left[\hat \beta \mid X \right] =\sigma^2 \left(X^\top X\right)^{-1}$$ estimated as in page 8 of this online document as $$\hat{\mathrm{Var}}\left[\hat \beta \mid X \right] =s^2 \left(X^\top X\right)^{-1}$$

> XtX = t(X) %*% X                # X transpose X
> Sigma = solve(XtX) * s.sq       # Variance - covariance matrix
> all.equal(Sigma, vcov(ols))     # Same as built-in formula
[1] TRUE
> sqrt(diag(Sigma))               # Calculated Std. Errors of coef's
(Intercept)        disp        drat          wt
7.099791769 0.009578313 1.455050731 1.217156605
> summary(ols)[[4]][,2]           # Output of lm() function
(Intercept)        disp        drat          wt
7.099791769 0.009578313 1.455050731 1.217156605
18,849
Variance-covariance matrix of the errors in linear regression
With linear regression we are fitting a model $Y = X\beta +\varepsilon$. $Y$ is the dependent variable, and the $X$'s are the predictor (explanatory) variables. We use the data provided to us (the training set, or sample) to estimate the population $\beta$'s. The $X$'s are not considered random variables. The $Y$'s are random because of the error component.
18,850
Can bootstrap be used to replace non-parametric tests?
The bootstrap works without needing assumptions like normality, but it can be highly variable when the sample size is small and the population is not normal. So it can be better in the sense of the assumptions holding, but it is not better in all ways. The bootstrap samples with replacement, permutation tests sample without replacement. The Mann-Whitney and other nonparametric tests are actually special cases of the permutation test. I actually prefer the permutation test here because you can specify a meaningful test statistic. The decision on which test to use should be based on the question being answered and knowledge about the science leading to the data. The Central Limit Theorem tells us that we can still get very good approximations from t-tests even when the population is not normal. How good the approximations are depends on the shape of the population distribution (not the sample) and the sample size. There are many cases where a t-test is still reasonable for smaller samples (and some cases where it is not good enough in very large samples).
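To illustrate the permutation idea with a test statistic you choose yourself (here the difference in means), a few lines suffice. The data are simulated and the function name is mine:

```python
import numpy as np

def permutation_test(x, y, n_perm=10_000, seed=0):
    """Two-sample permutation test for a difference in means (two-sided)."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([x, y])
    observed = x.mean() - y.mean()
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # relabel: sampling WITHOUT replacement
        stat = pooled[:len(x)].mean() - pooled[len(x):].mean()
        if abs(stat) >= abs(observed):
            count += 1
    return (count + 1) / (n_perm + 1)       # add-one correction keeps p > 0

rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=40)
y = rng.normal(1.5, 1.0, size=40)           # true means differ by 1.5
p_value = permutation_test(x, y)            # should be small
```

Replacing the shuffle with a draw with replacement from the pooled sample would turn this into a bootstrap version of the same comparison.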
18,851
What is an unbiased estimate of population R-square?
Evaluation of analytic adjustments to R-square @ttnphns referred me to the Yin and Fan (2001) article that compares different analytic methods of estimating $R^2$. As per my question they discriminate between two types of estimators. They use the following terminology: $\rho^2$: Estimator of the squared population multiple correlation coefficient $\rho_c^2$: Estimator of the squared population cross-validity coefficient Their results are summarised in the abstract: The authors conducted a Monte Carlo experiment to investigate the effectiveness of the analytical formulas for estimating $R^2$ shrinkage, with 4 fully crossed factors (squared population multiple correlation coefficient, number of predictors, sample size, and degree of multicollinearity) and 500 replications in each cell. The results indicated that the most widely used Wherry formula (in both SAS and SPSS) is probably not the most effective analytical formula for estimating $\rho^2$. Instead, the Pratt formula and the Browne formula outperformed other analytical formulas in estimating $\rho^2$ and $\rho_c^2$, respectively. Thus, the article implies that the Pratt formula (p.209) is a good choice for estimating $\rho^2$: $$\hat{R}^2=1 - \frac{(N-3)(1 - R^2)}{(N-p-1)} \left[ 1 + \frac{2(1-R^2)}{N-p-2.3} \right]$$ where N is the sample size, and p is the number of predictors. Empirical estimates of adjustments to R-square Kromrey and Hines (1995) review empirical estimates of $R^2$ (e.g., cross-validation approaches). They show that such algorithms are inappropriate for estimating $\rho^2$. This makes sense given that such algorithms seem to be designed to estimate $\rho_c^2$. However, after reading this, I still wasn't sure whether some form of appropriately corrected empirical estimate might still perform better than analytic estimates in estimating $\rho^2$. References Kromrey, J. D., & Hines, C. V. (1995). Use of empirical estimates of shrinkage in multiple regression: a caution. 
Educational and Psychological Measurement, 55(6), 901-925. Yin, P., & Fan, X. (2001). Estimating $R^2$ shrinkage in multiple regression: A comparison of different analytical methods. The Journal of Experimental Education, 69(2), 203-224.
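As a convenience, the Pratt formula above can be wrapped in a small function (the function name is mine):

```python
def pratt_r2(r2, n, p):
    """Pratt formula for estimating the squared population multiple
    correlation rho^2, as reported in Yin & Fan (2001), p. 209."""
    return 1 - (n - 3) * (1 - r2) / (n - p - 1) * (
        1 + 2 * (1 - r2) / (n - p - 2.3)
    )

estimate = pratt_r2(r2=0.30, n=50, p=4)   # shrinks the sample value of 0.30
```

As expected for a shrinkage formula, the estimate lies below the sample $R^2$ in small samples and approaches it as $N$ grows.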
18,852
What is an unbiased estimate of population R-square?
We made some progress regarding this. Thus, here is an updated answer. Assumptions All existing comparisons for this (and thus also the results summarized by Jeromy), as well as all claims I make, depend on all regression assumptions being met and the predictors being multivariate normally distributed. For a more formal treatment, see Shieh (2007) or Karch (2020). 1. Population $R^2$ As Jeromy notes, this is commonly called the squared population multiple correlation coefficient $\rho^2$. It is defined as the amount of variance explained by the true regression model. The unbiased estimator was derived theoretically in Olkin & Pratt (1958) and is correspondingly known as the Olkin-Pratt estimator. Until recently, this estimator was not available in any software, as it is nontrivial to compute. However, I showed how to do that (Karch, 2020) and provide an R package for extracting it from a fitted regression model. Note that unbiasedness might not be what you want. Bayesians especially tend to get angry at the Olkin-Pratt estimator, as it can return negative values, which of course have a posterior probability of $0$. At the same time, sometimes returning negative values is needed for unbiasedness. If you consider other optimality criteria, most notably lowest MSE, the results change dramatically; see Karch (2020), Which is better: r-squared or adjusted r-squared?, and Would the real adjusted R-squared formula please step forward?. 2. Out of sample $R^2$ First, a warning: the squared population cross-validity coefficient $\rho_c^2$ is not equivalent to the description in the question the r-square that would be obtained if the regression equation obtained from the sample (i.e., $\hat{\beta}$) were applied to an infinite amount of data external to the sample but from the same data generating process. If we call the value described in the quote $\rho_c(\hat{\beta})$, then $\rho_c^2$ is actually the expectation $E[\rho_c(\hat{\beta})]$ (Shieh, 2007). After consulting the latest paper on the issue (Shieh, 2007), it seems that no unbiased estimator for this exists yet.
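For the curious, the Olkin-Pratt estimator can be sketched in a few lines via the hypergeometric series. The closed form used below, $1 - \frac{N-3}{N-p-1}(1-R^2)\,{}_2F_1\!\left(1,1;\frac{N-p+1}{2};1-R^2\right)$, is the one commonly reported for this estimator (e.g. in Karch, 2020), but treat this sketch as an assumption and prefer the author's R package in practice:

```python
def hyp2f1_11(c, z, tol=1e-12, max_terms=100_000):
    """2F1(1, 1; c; z) by summing its power series (converges for |z| < 1)."""
    term, total, k = 1.0, 1.0, 0
    while abs(term) > tol and k < max_terms:
        term *= (k + 1) * z / (c + k)   # ratio between consecutive series terms
        total += term
        k += 1
    return total

def olkin_pratt_r2(r2, n, p):
    """Sketch of the Olkin & Pratt (1958) estimate of the population R-square."""
    z = 1.0 - r2
    return 1 - (n - 3) / (n - p - 1) * z * hyp2f1_11((n - p + 1) / 2, z)

estimate = olkin_pratt_r2(r2=0.30, n=50, p=4)
```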
18,853
Confidence intervals for empirical CDF
Yes, there are other types of confidence intervals (CI). Some of the most popular are based on the Dvoretzky–Kiefer–Wolfowitz inequality, which states that $$P\left[\sup_{x}\vert \hat{F}_n(x)-F(x)\vert>\epsilon\right]\leq 2\exp(-2n\epsilon^2).$$ Then, if you want to construct a band of confidence level $1-\alpha$, you just have to equate $\alpha=2\exp(-2n\epsilon^2)$, which leads to $\epsilon = \sqrt{\dfrac{1}{2n}\log\left(\dfrac{2}{\alpha}\right)}$. Consequently, a confidence band for $F(x)$ is $L(x)=\max\{\hat{F}_n(x)-\epsilon,0\}$ and $U(x)=\min\{\hat{F}_n(x)+\epsilon,1\}$. You may want to work out the details and adapt this to $P[X>x]=1-F(x)$ (since you tagged this as self-study). This presentation provides other details that might be of interest.
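In code, the band is just the ECDF shifted up and down by $\epsilon$ and clipped to $[0,1]$. A minimal sketch with simulated data (Python here; the function name is mine):

```python
import numpy as np

def dkw_band(x, alpha=0.05):
    """DKW confidence band for the ECDF, evaluated at the sorted sample."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    eps = np.sqrt(np.log(2.0 / alpha) / (2.0 * n))
    ecdf = np.arange(1, n + 1) / n
    return x, np.clip(ecdf - eps, 0.0, 1.0), np.clip(ecdf + eps, 0.0, 1.0)

rng = np.random.default_rng(0)
x, lower, upper = dkw_band(rng.normal(size=200))
# For the survival function P[X > x] = 1 - F(x), the band follows by
# taking (1 - upper, 1 - lower), as the question asks.
```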
18,854
How to tell if residuals are autocorrelated from a graphic
Not only can you look at a plot, I think it's generally a better option; hypothesis testing in this situation answers the wrong question. The usual plot to look at would be an autocorrelation function (ACF) of the residuals. The autocorrelation function is the correlation of the residuals (as a time series) with their own lags. Here, for example, is the ACF of residuals from a small example from Montgomery et al. Some of the sample correlations (for example at lags 1, 2 and 8) are not particularly small (and so may substantively affect things), but they also can't be distinguished from the effect of noise (the sample is very small). Edit: Here's a plot to illustrate the difference between an uncorrelated and a highly correlated series (in fact, a nonstationary one). The upper plot is white noise (independent). The lower one is a random walk (whose differences are the original series) - it has very strong autocorrelation.
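The white-noise versus random-walk contrast can be reproduced numerically. The lag-$k$ sample autocorrelation used below is the standard estimator $r_k = \sum_t (x_t-\bar x)(x_{t+k}-\bar x) \big/ \sum_t (x_t-\bar x)^2$ (simulated data; the function name is mine):

```python
import numpy as np

def sample_acf(x, max_lag):
    """Sample autocorrelations r_k for k = 1..max_lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    denom = (x * x).sum()
    return np.array([(x[k:] * x[:-k]).sum() / denom
                     for k in range(1, max_lag + 1)])

rng = np.random.default_rng(0)
n = 1000
white_noise = rng.normal(size=n)
random_walk = np.cumsum(rng.normal(size=n))   # nonstationary, strongly correlated

acf_wn = sample_acf(white_noise, 10)   # all small, inside roughly +/- 2/sqrt(n)
acf_rw = sample_acf(random_walk, 10)   # all close to 1
```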
18,855
How to tell if residuals are autocorrelated from a graphic
It's not unusual if 5% or fewer of the autocorrelation values fall outside the intervals, as that could be due to sampling variation. One practice is to produce the autocorrelation plot for the first 20 lags and check whether more than one value falls outside the allowed intervals.
18,856
Is there an optimal bandwidth for a kernel density estimator of derivatives?
The optimal bandwidth for derivative estimation will be different from the bandwidth for density estimation. In general, every feature of a density has its own optimal bandwidth selector. If your objective is to minimize mean integrated squared error (which is the usual criterion), there is nothing subjective about it: it is a matter of deriving the value that minimizes the criterion. The equations are given in Section 2.10 of Hansen (2009). The tricky part is that the optimal bandwidth is a function of the density itself, so this solution is not directly useful. There are a number of methods around to try to deal with that problem. These usually approximate some functionals of the density using normal approximations. (Note, there is no assumption that the density itself is normal. The assumption is that some functionals of the density can be obtained assuming normality.) Where the approximations are imposed determines how good the bandwidth selector is. The crudest approach is called the "normal reference rule", which imposes the approximation at a high level. The end of Section 2.10 in Hansen (2009) gives the formula using this approach. This approach is implemented in the hns() function from the ks package on CRAN. That's probably the best you will get if you don't want to write your own code. So you can estimate the derivative of a density as follows (using ks):

library(ks)
h <- hns(x, deriv.order=1)
den <- kdde(x, h=h, deriv.order=1)

A better approach, usually known as a "direct plug in" selector, imposes the approximation at a lower level. For straight density estimation, this is the Sheather-Jones method, implemented in R using density(x, bw="SJ"). However, I don't think there is a similar facility available in any R package for derivative estimation. Rather than use straight kernel estimation, you may be better off with a local polynomial estimator. This can be done using the locpoly() function from the KernSmooth package in R. Again, there is no optimal bandwidth selection implemented, but the bias will be smaller than for kernel estimators. e.g.,

den2 <- locpoly(x, bandwidth=?, drv=1)   # Need to guess a sensible bandwidth
Is there an optimal bandwidth for a kernel density estimator of derivatives?
The optimal bandwidth for derivative estimation will be different from the bandwidth for density estimation. In general, every feature of a density has its own optimal bandwidth selector. If your obje
Is there an optimal bandwidth for a kernel density estimator of derivatives? The optimal bandwidth for derivative estimation will be different from the bandwidth for density estimation. In general, every feature of a density has its own optimal bandwidth selector. If your objective is to minimize mean integrated squared error (which is the usual criterion) there is nothing subjective about it. It is a matter of deriving the value that minimizes the criterion. The equations are given in Section 2.10 of Hansen (2009). The tricky part is that the optimal bandwidth is a function of the density itself, so this solution is not directly useful. There are a number of methods around to try to deal with that problem. These usually approximate some functionals of the density using normal approximations. (Note, there is no assumption that the density itself is normal. The assumption is that some functionals of the density can be obtained assuming normality.) Where the approximations are imposed determines how good the bandwidth selector is. The crudest approach is called the "normal reference rule" which imposes the approximation at a high level. The end of Section 2.10 in Hansen (2009) gives the formula using this approach. This approach is implemented in the hns() function from the ks package on CRAN. That's probably the best you will get if you don't want to write your own code. So you can estimate the derivative of a density as follows (using ks): library(ks) h <- hns(x,deriv.order=1) den <- kdde(x, h=h, deriv.order=1) A better approach, usually known as a "direct plug in" selector, imposes the approximation at a lower level. For straight density estimation, this is the Sheather-Jones method, implemented in R using density(x,bw="SJ"). However, I don't think there is a similar facility available in any R package for derivative estimation. Rather than use straight kernel estimation, you may be better off with a local polynomial estimator. 
This can be done using the locpoly() function from the ks package in R. Again, there is no optimal bandwidth selection implemented, but the bias will be smaller than for kernel estimators. e.g., den2 <- locpoly(x, bandwidth=?, drv=1) # Need to guess a sensible bandwidth
18,857
Variance partitioning and longitudinal changes in correlation with binary data
Let $y_{ij}, {\boldsymbol x}_{ij}$ denote the response and predictor vector (respectively) of student $i$ in school $j$. (1) For binary data, I think the standard way to do variance decompositions analogous to those done for continuous data is what the authors call Method D (I'll comment on the other methods below) in your link - envisioning the binary data as arising from an underlying continuous variable that is governed by a linear model and decomposing the variance on that latent scale. The reason is that logistic models (and other GLMs) naturally arise this way. To see this, define $y^{\star}_{ij}$ such that it is governed by a linear mixed model: $$ y^{\star}_{ij} = \alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j + \varepsilon_{ij} $$ where $\alpha, {\boldsymbol \beta}$ are regression coefficients, $\eta_j \sim N(0,\sigma^2)$ is the school-level random effect and $\varepsilon_{ij}$ is the residual term, which has a standard logistic distribution. Now let $$ y_{ij} = \begin{cases} 1 & \text{if} \ \ \ y^{\star}_{ij} \geq 0\\ \\ 0 &\text{if} \ \ \ y^{\star}_{ij}<0 \end{cases} $$ and let $p_{ij} = P(y_{ij} = 1|{\boldsymbol x}_{ij},\eta_j)$. Now, simply using the logistic CDF, we have $$p_{ij} = 1-P(y^{\star}_{ij}<0|{\boldsymbol x}_{ij},\eta_j) = \frac{ \exp \{\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j \} }{1+ \exp \{\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j \}}$$ and taking the logit transform of both sides, you have $$ \log \left( \frac{ p_{ij} }{1 - p_{ij}} \right) = \alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j $$ which is exactly the logistic mixed effects model. So, the logistic model is equivalent to the latent variable model specified above. 
One important note: The scale of $\varepsilon_{ij}$ is not identified, since if you were to scale it down by a constant $s$, it would simply change the above to $$ \frac{ \exp \{(\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j)/s \} }{1+ \exp \{(\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_j)/s \}}$$ therefore the coefficients and random effects would simply be scaled up by the corresponding amount. So, $s=1$ is used, which implies ${\rm var}(\varepsilon_{ij}) = \pi^2/3$. Now, if you use this model, then the quantity $$ \frac{ \hat{\sigma}^{2}_{\eta} }{\hat{\sigma}^{2}_{\eta} + \pi^2/3 } $$ estimates the intraclass correlation of the underlying latent variables. Another important note: If $\varepsilon_{ij}$ is specified as, instead, having a standard normal distribution, then you have the mixed effects probit model. In that case $$ \frac{ \hat{\sigma}^{2}_{\eta} }{\hat{\sigma}^{2}_{\eta} + 1 } $$ estimates the tetrachoric correlation between two randomly selected pupils in the same school, which was shown by Pearson (around 1900, I think) to be statistically identified when the underlying continuous data are normally distributed (this work actually showed these correlations were identified beyond the binary case, in the multiple-category case, where they are termed polychoric correlations). For this reason, it may be preferable (and would be my recommendation) to use a probit model when the primary interest is in estimating the (tetrachoric) intraclass correlation of binary data. Regarding the other methods mentioned in the paper you linked: (A) I've never seen the linearization method, but one drawback I can see is that there's no indication of the approximation error incurred by it. In addition, if you're going to linearize the model (through a potentially crude approximation), why not just use a linear model in the first place (e.g. option (C), which I'll get to in a minute)? 
It would also be more complicated to present, since the ICC would depend on ${\boldsymbol x}_{ij}$. (B) The simulation method is intuitively appealing to a statistician, since it would give you an estimated variance decomposition on the original scale of the data but, depending on the audience, it may (i) be complicated to describe in your "methods" section and (ii) turn off a reviewer who was looking for something "more standard". (C) Pretending the data are continuous is probably not a great idea, although it won't perform terribly if most of the probabilities are not too close to 0 or 1. But doing this would almost certainly raise a red flag to a reviewer, so I'd stay away. Now finally, (2) If the fixed effects are very different across years, then you're right to think that it could be difficult to compare the random effect variances across years, since they are potentially on different scales (this is related to the non-identifiability-of-scaling issue mentioned above). If you want to keep the fixed effects the same over time (although, if you see them changing a lot over time, you may not want to do that) but look at the change in the random effect variance, you can explore this using random slopes and dummy variables. For example, if you wanted to see if the ICCs were different in different years, you could let $I_k = 1$ if the observation was made in year $k$ and 0 otherwise, and then model your linear predictor as $$\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_{1j} I_1 + \eta_{2j} I_2 + \eta_{3j} I_3 + \eta_{4j} I_4 + \eta_{5j} I_5+ \eta_{6j} I_6$$ this will give you a different ICC each year but the same fixed effects. It may be tempting to just use a random slope on time, making your linear predictor $$\alpha + {\boldsymbol x}_{ij} {\boldsymbol \beta} + \eta_{1j} + \eta_{2j} t $$ but I don't recommend this, since that will only allow your associations to increase over time, not decrease.
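To make the two latent-scale ICC formulas above concrete: the computation is just a variance ratio, with the residual variance fixed by the link ($\pi^2/3$ for logit, $1$ for probit). A small sketch (the function name is mine):

```python
import math

def latent_icc(sigma2_school, link="logit"):
    """ICC on the latent scale: sigma^2 / (sigma^2 + residual variance),
    where the residual variance is pi^2/3 (logistic) or 1 (probit)."""
    resid = math.pi ** 2 / 3 if link == "logit" else 1.0
    return sigma2_school / (sigma2_school + resid)

# e.g. an estimated school-level variance of 0.5:
print(round(latent_icc(0.5, "logit"), 3))   # -> 0.132, logistic latent ICC
print(round(latent_icc(0.5, "probit"), 3))  # -> 0.333, tetrachoric ICC
```

Note how the same $\hat\sigma^2_\eta$ yields different ICCs under the two links, which is one more reason to state clearly which latent scale you are reporting on.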
18,858
What's a good approach to teaching R in a computer lab?
I'd argue for a completely different approach. I've seen R tutorials that were taught from two different perspectives: a building-blocks approach, in which users are introduced to R's fundamental concepts, and a shock-and-awe approach, in which users are shown R's amazing capabilities but left with relatively little understanding of how to do anything. The latter definitely resonates more strongly with the pupils, but neither one seems very effective at actually producing users. Instead, I would take a common and relatively simple task in SPSS and walk through converting it to R, with a little bit of feigned naivete on your part - e.g., following Xi'an's excellent suggestion to look up some desired functions with ?? rather than just recalling the right function from memory. Your newbies will almost certainly be converting existing processes as they learn R, not writing them from scratch - so why not show them exactly how you'd go about that? A good example could consist of just loading data, performing some descriptives, and popping out some basic plots. lm() can be very, very simple and produces results they'll understand and can compare to SPSS output, so that might also be good to cover. For homework, get them to take a stab at converting one of their simple processes or loading and exploring a dataset with which they're very familiar. Give them some one-on-one time to figure out where things are going wrong, then cover those in the next session with more example conversions. Concepts from your list will inevitably come up (my bet: factors vs. character vectors, for vs. apply) - and then you'll have a real-world motivation for covering them. If they don't come up (attach), then they're not really needed yet - if that means your newbies write a little non-idiomatic code early on (for instead of apply), I don't see the harm. 
This way, your students can progress in much the same way foreign-language students do (or at least, the way I did): crude translation of simple expressions prompts the desire for more complex expressions, which causes desire for a deeper understanding of grammar, which eventually leads to idiomatic expression. Don't jump to "grammar" too soon, and don't worry too much about teaching them things they aren't asking about because they'll probably just forget it anyway. Gentle pointers about idiomatic expression are great (for vs apply), but the main thing is to get them generating output and exploring on their own.
18,859
What's a good approach to teaching R in a computer lab?
OK, here's my own answer so far on what I think would get people started and motivate them to learn some more (I am trying to wean them off SPSS, which literally cannot do some of what we need to do, eg complex survey analysis, at least without buying more modules which I refuse to do). At the end of the first session you should be able to: Basics Use the interface to do straightforward calculations (use R as a calculator) Start, save and load a script window and use it efficiently Create and remove objects in your workspace See which folder is your working folder Understand how the P:/R/yourid folder works and what saving a workspace on exit does Load an image of a workspace including XXX (our commonly used data) List the objects in memory List the names of columns (variables) in a data frame Print an object to the screen Attach and detach a data frame Know what is meant by: object, function, argument (to a function), workspace, vector, data frame, matrix, numeric, factor Know how to look up help on a function Use ?? 
to find a list of relevant functions Where to go on the web and our local books and LAN for more resources understand enough of R basics to participate in lab sessions on specific statistical techniques Data manipulation Create a vector of numbers using the : operator Do a table of counts for one variable Do a crosstab of counts for two variables Create a new object (eg one of the tables above) for further manipulation Transpose a matrix or table Create a vector of means of a continuous variable by a factor using tapply() Bind several vectors together using cbind() or data.frame() Create a subset of a matrix using the [] Create a simple transformation eg logarithm or square root Statistics Calculate the correlation of two continuous variables Graphics Create a histogram of a continuous variable Create a graphics window and divide it into 2 or 4 parts Create a density line plot of a continuous variable Create a scatterplot of two continuous variables Add a straight line to a scatterplot (vertical, horizontal or a-b) Create labels for axes and titles At the end of three sessions and doing a range of exercises in between you should also be able to: Basics Import data in SPSS or .csv format Remove all the objects in your workspace to start fresh use a library of packages Save a workspace image and understand basic principles of R and memory Generate random variables Use c() to create a vector Have a good feel for where to go to learn new methods and techniques Data manipulation Use aggregate() on a real data set eg visitor arrivals numbers by month and country The ==, != and %in% operators; logical vectors; and using them to subset data ifelse() and using it to create new variables max, min and similar functions and how they work with vectors Create a vector or matrix to store numerous results Use a loop to repeat a similar function many times Use apply() to apply a function to each column or row of a matrix Create an ordered factor Use cut() to recode a numeric variable 
Statistics Chi square test for a contingency table Robust versions of correlations Fit a linear model to two continuous variables, placing the results in an object and using anova(), summary() and plot() to look at the results understand enough about models and how they work in R to be ready to apply your skills to a wider range of model types Use boot() to perform bootstrap on a basic function like cor(), mean(), or var() Use sample() on a real life data set Graphics Create a lattice density line plot of a continuous variable given different levels of a factor qqnorm build a scatter plot with different colour and character points showing different levels of a factor; add points or lines to an existing scatter plot add a legend dotcharts errbar() using a loop to draw multiple charts on a page
18,860
What's a good approach to teaching R in a computer lab?
To Peter's list I would add: subset data frames: subset by observation (e.g. all responses above 3), subset by variable. use ifelse statements (this was a huge learning curve for me, I kept trying to use a plain if statement), particularly nested ifelse. summarise data into a smaller data frame by using the aggregate command. learning to use the == operator. using <- rather than =. rename variables. basic vectorization traps, such as the fact that max(A,B) in SAS does not do what max(A,B) does in R, if A is a variable in a data frame and B is a single value (R's pmax() is the elementwise analogue). To do the equivalent of the SAS code (and probably the SPSS code), I use an ifelse statement. use with instead of attach. :) More thoughts: They probably use COMPUTE a lot in SPSS, so covering how to do that in R would be good. Also, how to RECODE variables in R. When I was using SPSS I think most of my "non analysis" work was using those two commands.
18,861
How to test whether a sample of data fits the family of Gamma distribution?
I think the question asks for a precise statistical test, not a histogram comparison. When using the Kolmogorov-Smirnov test with estimated parameters, the distribution of the test statistic under the null depends on the tested distribution, as opposed to the case with no estimated parameters. For instance, using (in R) x <- rnorm(100) ks.test(x, "pnorm", mean=mean(x), sd=sd(x)) leads to One-sample Kolmogorov-Smirnov test data: x D = 0.0701, p-value = 0.7096 alternative hypothesis: two-sided while we get > ks.test(x, "pnorm") One-sample Kolmogorov-Smirnov test data: x D = 0.1294, p-value = 0.07022 alternative hypothesis: two-sided for the same sample x. The significance level or the p-value thus has to be determined by Monte Carlo simulation under the null, producing the distribution of the Kolmogorov-Smirnov statistic from samples simulated under the estimated distribution (with a slight approximation in the result, given that the observed sample comes from another distribution, even under the null).
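The Monte Carlo scheme described above is a parametric bootstrap (for the normal case, the Lilliefors idea): simulate from the fitted distribution, re-estimate the parameters in every replicate, and compare the observed KS distance to that null distribution. A self-contained Python sketch for the normal case (function names are mine):

```python
import math
import random
import statistics

def norm_cdf(x, mu, sd):
    return 0.5 * (1 + math.erf((x - mu) / (sd * math.sqrt(2))))

def ks_stat_estimated(sample):
    """KS distance between the empirical CDF and a normal whose
    parameters are re-estimated from the sample itself."""
    mu, sd = statistics.mean(sample), statistics.stdev(sample)
    xs, n = sorted(sample), len(sample)
    d = 0.0
    for i, x in enumerate(xs, start=1):
        f = norm_cdf(x, mu, sd)
        d = max(d, abs(i / n - f), abs(f - (i - 1) / n))
    return d

def mc_pvalue(sample, nsim=500, seed=1):
    """Null distribution via parametric bootstrap: simulate from the
    *fitted* normal and re-estimate parameters each time (the crucial step)."""
    rng = random.Random(seed)
    mu, sd, n = statistics.mean(sample), statistics.stdev(sample), len(sample)
    d_obs = ks_stat_estimated(sample)
    hits = sum(
        ks_stat_estimated([rng.gauss(mu, sd) for _ in range(n)]) >= d_obs
        for _ in range(nsim)
    )
    return hits / nsim

rng = random.Random(42)
x = [rng.gauss(0, 1) for _ in range(50)]
print(mc_pvalue(x))  # should usually not be small for genuinely normal data
```

The same recipe applies to the gamma question: fit the gamma, simulate gamma samples, refit in each replicate, and compare KS distances.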
18,862
How to test whether a sample of data fits the family of Gamma distribution?
Compute MLEs of the parameters assuming a gamma distribution for your data and compare the theoretical density with the histogram of your data. If the two are very different the gamma distribution is a poor approximation of your data. For a formal test you could compute, for example, the Kolmogorov-Smirnov test statistic comparing the best fitting gamma distribution with the empirical distribution and test for significance.
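Gamma MLEs need iterative optimisation (the shape equation involves the digamma function), so as a simpler, cruder stand-in for the fitting step here is a method-of-moments version in Python: for the gamma, $\hat k = \bar x^2/s^2$ and $\hat\theta = s^2/\bar x$ in closed form.

```python
import random
import statistics

# Simulate data we know is gamma(shape k=2, scale theta=3), then recover
# the parameters by matching the first two moments:
# mean = k*theta, variance = k*theta^2.
rng = random.Random(7)
x = [rng.gammavariate(2.0, 3.0) for _ in range(5000)]

m, v = statistics.mean(x), statistics.variance(x)
shape_hat = m * m / v
scale_hat = v / m
print(round(shape_hat, 2), round(scale_hat, 2))  # close to 2 and 3
```

With the fitted parameters in hand you can overlay the implied density on the histogram, or feed them into the KS comparison described in the previous answer.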
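A minimal sketch of this recipe (fit by MLE, then compare against the best-fitting gamma), here in Python with SciPy rather than R for illustration. The data are made up; note, as the other answer points out, that the naive KS p-value is optimistic because the parameters were estimated from the same sample:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.gamma(shape=2.0, scale=3.0, size=500)   # made-up data for illustration

# MLE of the two gamma parameters (location pinned at 0)
a_hat, loc0, scale_hat = stats.gamma.fit(x, floc=0)

# KS statistic against the best-fitting gamma; the reported p-value is
# optimistic because the parameters were estimated from the same sample
d_stat, p_naive = stats.kstest(x, "gamma", args=(a_hat, loc0, scale_hat))
```

A plot of `stats.gamma.pdf` over a histogram of `x` gives the informal visual check the answer describes.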
18,863
How to fit a Weibull distribution to input data containing zeroes?
(As others have pointed out, a Weibull distribution is not likely to be an appropriate approximation when the data are integers only. The following is intended just to help you determine what that previous researcher did, rightly or wrongly.) There are several alternative methods that are not affected by zeros in the data, such as using various method-of-moments estimators. These typically require numerical solution of equations involving the gamma function, because the moments of the Weibull distribution are given in terms of this function. I'm not familiar with R, but here's a Sage program that illustrates one of the simpler methods -- maybe it can be adapted to R? (You can read about this and other such methods in, e.g., "The Weibull distribution: a handbook" by Horst Rinne, p. 455ff -- however, there is a typo in his eq.12.4b, as the '-1' is redundant). """ Blischke-Scheuer method-of-moments estimation of (a,b) for the Weibull distribution F(t) = 1 - exp(-(t/a)^b) """ x = [23,19,37,38,40,36,172,48,113,90,54,104,90,54,157, 51,77,78,144,34,29,45,16,15,37,218,170,44,121] xbar = mean(x) varx = variance(x) var("b"); f(b) = gamma(1+2/b)/gamma(1+1/b)^2 - 1 - varx/xbar^2 bhat = find_root(f, 0.01, 100) ahat = xbar/gamma(1+1/bhat) print "Estimates: (ahat, bhat) = ", (ahat, bhat) This produced the output Estimates: (ahat, bhat) = (81.316784310814455, 1.3811394719075942) If the above data are modified (just for illustration) by replacing the three smallest values by $0$, i.e. x = [23,0,37,38,40,36,172,48,113,90,54,104,90,54,157, 51,77,78,144,34,29,45,0,0,37,218,170,44,121] then the same procedure produces the output Estimates: (ahat, bhat) = (78.479354097488923, 1.2938352346035282) EDIT: I just installed R to give it a try. 
At the risk of making this answer over-long, for anyone interested here's my R-code for the Blischke-Scheuer method: fit_weibull <- function(x) { xbar <- mean(x) varx <- var(x) f <- function(b){return(gamma(1+2/b)/gamma(1+1/b)^2 - 1 - varx/xbar^2)} bhat <- uniroot(f,c(0.02,50))$root ahat <- xbar/gamma(1+1/bhat) return(c(ahat,bhat)) } This reproduces (to five significant digits) the two Sage examples above: x <- c(23,19,37,38,40,36,172,48,113,90,54,104,90,54,157, 51,77,78,144,34,29,45,16,15,37,218,170,44,121) fit_weibull(x) [1] 81.316840 1.381145 x <- c(23,0,37,38,40,36,172,48,113,90,54,104,90,54,157, 51,77,78,144,34,29,45,0,0,37,218,170,44,121) fit_weibull(x) [1] 78.479180 1.293821
18,864
How to fit a Weibull distribution to input data containing zeroes?
You could also try fitting a three-parameter Weibull, where the third parameter is a location parameter, let us say $\theta$. This amounts to estimating the constant that you ought to add to the data to get you the best fit to the Weibull. You might do this using a profile likelihood approach by putting a "wrapper" around fitdistr, where the wrapper takes a value of $\theta$ and the data, adds $\theta$ to the data, calls the fitdistr function, and returns the associated log-likelihood: foo <- function(theta, x) { if (theta <= -min(x)) return(Inf); f <- fitdistr(x+theta, 'weibull') -2*f$loglik } Then minimize this function using one-dimensional optimization: bar <- optimize(foo, lower=-min(x)+0.001, upper=-min(x)+10, x=x) where I have just made up the "+10" based on nothing at all. For the data with the three smallest values replaced with zeroes, we get: > bar $minimum [1] 2.878442 $objective [1] 306.2792 > fitdistr(x+bar$minimum, 'weibull') shape scale 1.2836432 81.1678283 ( 0.1918654) (12.3101211) > bar$minimum is the MLE of $\theta$, and the fitdistr outputs are the MLEs of the Weibull parameters, jointly with $\theta$. As you can see, they are pretty close to the method-of-moments estimators @r.e.s. demonstrated above.
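The same profile-likelihood idea can be sketched in Python with SciPy (used here purely for illustration, since the thread works in R); `stats.weibull_min` with `floc=0` plays the role of `fitdistr(..., 'weibull')`, and the one-dimensional search bounds mirror the arbitrary "+10" of the R wrapper:

```python
import numpy as np
from scipy import stats, optimize

x = np.array([23, 0, 37, 38, 40, 36, 172, 48, 113, 90, 54, 104, 90, 54, 157,
              51, 77, 78, 144, 34, 29, 45, 0, 0, 37, 218, 170, 44, 121],
             dtype=float)

def neg2_profile_loglik(theta):
    """-2 * Weibull log-likelihood of the shifted data x + theta."""
    if theta <= -x.min():
        return np.inf
    c, loc, scale = stats.weibull_min.fit(x + theta, floc=0)
    return -2.0 * stats.weibull_min.logpdf(x + theta, c, loc, scale).sum()

# one-dimensional search over the shift, as in the R optimize() call
res = optimize.minimize_scalar(neg2_profile_loglik,
                               bounds=(1e-3, 10.0), method="bounded")
theta_hat = res.x
c_hat, _, scale_hat = stats.weibull_min.fit(x + theta_hat, floc=0)
```

With the data above this lands near the R results (shift about 2.9, shape about 1.28, scale about 81), up to optimizer differences.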
18,865
How to fit a Weibull distribution to input data containing zeroes?
It should fail, and you should be grateful that it failed. Your observations showed that failures occurred at the very moment you started observing them. If this is a real process, coming from real (and not simulated) data, you need to somehow account for the reason why you're getting zeros. I've seen survival studies where 0 times show up as a consequence of one of several things: The data are actually truncated: objects were at risk and failed before the study started and you want to pretend you had observed them all along. The instruments are poorly calibrated: you don't have enough measurement precision for the study and so failures occurring near the start time were coded as exactly zero. The thing coded as a zero is not a zero. They're people or objects that were excluded from the analysis one way or another. The zero just shows up in the data as a consequence of merging, sorting, or otherwise recoding missing values. So for case 1: you need to use proper censoring methods, even if that means retrospectively pulling records. Case 2 means you can use the EM algorithm because you have a precision issue. Bayesian methods work similarly here as well. Case 3 means you just need to exclude the values that were supposed to be missing.
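For case 2, one hedged sketch (Python/SciPy for illustration, with a made-up measurement resolution `delta = 0.5`) treats each recorded zero as interval-censored on (0, delta] and maximizes the corresponding likelihood directly, rather than running a full EM loop:

```python
import numpy as np
from scipy import stats, optimize

x = np.array([23, 0, 37, 38, 40, 36, 172, 48, 113, 90, 54, 104, 90, 54, 157,
              51, 77, 78, 144, 34, 29, 45, 0, 0, 37, 218, 170, 44, 121],
             dtype=float)
delta = 0.5                       # assumed instrument resolution (hypothetical)
pos, n_zero = x[x > 0], int((x == 0).sum())

def neg_loglik(params):
    c, scale = params
    if c <= 0 or scale <= 0:
        return np.inf
    # exact observations contribute the density; each censored zero
    # contributes the probability of falling in (0, delta]
    ll = stats.weibull_min.logpdf(pos, c, scale=scale).sum()
    ll += n_zero * stats.weibull_min.logcdf(delta, c, scale=scale)
    return -ll

res = optimize.minimize(neg_loglik, x0=[1.0, pos.mean()],
                        method="Nelder-Mead")
c_hat, scale_hat = res.x
```

The censoring term pulls the fitted shape well below the value obtained by pretending the zeros were exact, which is the point: how the zeros arose changes the estimates.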
18,866
How to fit a Weibull distribution to input data containing zeroes?
I agree with cardinal's answer above. However, it is also quite common to add a constant to avoid zeros. Another value commonly used is 0.5, but any positive constant might have been used. You might try a range of values to see if you can identify the exact value used by the previous researcher. Then you could be confident that you are able to reproduce his results, before going on a search for a better distribution.
18,867
How to fit a Weibull distribution to input data containing zeroes?
[Assuming Weibull is appropriate] Johnson Kotz and Balakrishnan's book has a lot of ways to estimate Weibull parameters. Some of these do not depend on the data not including zeroes (e.g. using the mean and standard deviation, or using certain percentiles). Johnson, N. L., Kotz, S., and Balakrishnan, N. (1994). Continuous Univariate Distributions. New York: Wiley, roughly on page 632.
18,868
Constrained Optimization library for equality and inequality constraints
Both packages, alabama and Rsolnp, contain "[i]mplementations of the augmented lagrange multiplier method for general nonlinear optimization" --- as the optimization task view says --- and are quite reliable and robust. They can handle equality and inequality constraints defined as (nonlinear) functions again. I have worked with both packages. Sometimes, constraints are a bit easier to formulate with Rsolnp, whereas alabama appears to be a bit faster at times. There is also the package Rdonlp2 that relies on an external software library well known in the optimization community. Unfortunately, its license status is a bit uncertain at the moment.
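For readers outside R, the same kind of problem (a nonlinear objective with one equality and one inequality constraint, both given as functions) can be sketched with SciPy's SLSQP solver; the toy objective and constraints below are made up for illustration:

```python
from scipy.optimize import minimize

# toy problem: minimize (x - 1)^2 + (y - 2)^2
# subject to   x + y = 2        (equality)
# and          x >= 0.5         (inequality, written as x - 0.5 >= 0)
def objective(v):
    x, y = v
    return (x - 1.0) ** 2 + (y - 2.0) ** 2

constraints = [
    {"type": "eq",   "fun": lambda v: v[0] + v[1] - 2.0},
    {"type": "ineq", "fun": lambda v: v[0] - 0.5},
]

res = minimize(objective, x0=[0.0, 0.0], method="SLSQP",
               constraints=constraints)
# on the line y = 2 - x the analytic minimizer is x = 0.5, y = 1.5
```

The constraint dictionaries mirror how alabama/Rsolnp take equality and inequality constraints as plain functions of the decision vector.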
18,869
Is the mean squared error used to assess relative superiority of one estimator over another?
If you have two competing estimators $\hat \theta_1$ and $\hat \theta_2$, whether or not $$ {\rm MSE}(\hat \theta_1) < {\rm MSE}(\hat \theta_2) $$ tells you that $\hat \theta_1$ is the better estimator depends entirely on your definition of "best". For example, if you are comparing unbiased estimators and by "better" you mean has lower variance then, yes, this would imply that $\hat \theta_1$ is better. $\rm MSE$ is a popular criterion because of its connection with Least Squares and the Gaussian log-likelihood but, like many statistical criteria, one should be cautioned against using $\rm MSE$ blindly as a measure of estimator quality without paying attention to the application. There are certain situations where choosing an estimator to minimize ${\rm MSE}$ may not be a particularly sensible thing to do. Two scenarios come to mind: If there are very large outliers in a data set then they can affect MSE drastically and thus the estimator that minimizes the MSE can be unduly influenced by such outliers. In such situations, the fact that an estimator minimizes the MSE doesn't really tell you much since, if you removed the outlier(s), you can get a wildly different estimate. In that sense, the MSE is not "robust" to outliers. In the context of regression, this fact is what motivated the Huber M-Estimator (that I discuss in this answer), which minimizes a different criterion function (that is a mixture between squared error and absolute error) when there are long-tailed errors. If you are estimating a bounded parameter, comparing $\rm MSE$s may not be appropriate since it penalizes over- and underestimation differently in that case. For example, suppose you're estimating a variance, $\sigma^2$. Then, if you consciously underestimate the quantity your $\rm MSE$ can be at most $\sigma^4$, while overestimation can produce an $\rm MSE$ that far exceeds $\sigma^4$, perhaps even by an unbounded amount. 
To make these drawbacks clearer, I'll give a concrete example of when, because of these issues, the $\rm MSE$ may not be an appropriate measure of estimator quality. Suppose you have a sample $X_1, ..., X_n$ from a $t$ distribution with $\nu>2$ degrees of freedom and we are trying to estimate the variance, which is $\nu/(\nu-2)$. Consider two competing estimators: $$\hat \theta_{1}: {\rm the \ unbiased \ sample \ variance} $$and $$\hat \theta_{2} = 0,{\rm \ regardless \ of \ the \ data}$$ Clearly $\rm MSE(\hat \theta_{2}) = \frac{\nu^2}{(\nu-2)^2}$ and it is a fact that $$ {\rm MSE}(\hat \theta_{1}) = \begin{cases} \infty &\mbox{if } \nu \leq 4 \\ \frac{\nu^2}{(\nu-2)^2} \left( \frac{2}{n-1}+\frac{6}{n(\nu-4)} \right) & \mbox{if } \nu>4 . \end{cases} $$ which can be derived using the fact discussed in this thread and the properties of the $t$-distribution. Thus the naive estimator outperforms in terms of $\rm MSE$ regardless of the sample size whenever $\nu < 4$, which is rather disconcerting. It also outperforms when $\left( \frac{2}{n-1}+\frac{6}{n(\nu-4)} \right) > 1$ but this is only relevant for very small sample sizes. The above happens because of the long-tailed nature of the $t$ distribution with small degrees of freedom, which makes $\hat \theta_{2}$ prone to very large values and the $\rm MSE$ penalizes heavily for the overestimation, while $\hat \theta_1$ does not have this problem. The bottom line here is that $\rm MSE$ is not an appropriate measure of estimator performance in this scenario. This is clear because the estimator that dominates in terms of $\rm MSE$ is a ridiculous one (particularly since there is no chance that it is correct if there is any variability in the observed data). 
Perhaps a more appropriate approach (as pointed out by Casella and Berger) would be to choose the variance estimator, $\hat \theta$ that minimizes Stein's Loss: $$ S(\hat \theta) = \frac{ \hat \theta}{\nu/(\nu-2)} - 1 - \log \left( \frac{ \hat \theta}{\nu/(\nu-2)} \right) $$ which penalizes underestimation equally to overestimation. It also brings us back to sanity since $S(\hat \theta_1)=\infty$ :)
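The MSE formulas above are easy to tabulate; this small Python sketch just evaluates them to show the crossover, with no simulation involved:

```python
def mse_naive(nu):
    """MSE of the constant estimator 0 for the true variance nu/(nu-2)."""
    return (nu / (nu - 2)) ** 2

def mse_sample_var(nu, n):
    """MSE of the unbiased sample variance under t_nu (infinite for nu <= 4)."""
    if nu <= 4:
        return float("inf")
    return (nu / (nu - 2)) ** 2 * (2 / (n - 1) + 6 / (n * (nu - 4)))

# for nu <= 4 the naive estimator wins at every sample size ...
always_worse = mse_sample_var(3, 10_000) > mse_naive(3)
# ... and even for nu = 5 it wins when the sample is small enough,
# since 2/(n-1) + 6/(n*(nu-4)) = 0.5 + 1.2 = 1.7 > 1 at n = 5
small_n_worse = mse_sample_var(5, 5) > mse_naive(5)
large_n_better = mse_sample_var(5, 100) < mse_naive(5)
```

The comparison is purely between the closed-form risks, which is why the "ridiculous" constant estimator dominates whenever the bracketed factor exceeds 1.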
18,870
Is the mean squared error used to assess relative superiority of one estimator over another?
MSE corresponds to the risk (expected loss) for the squared error loss function $L(\alpha_i) = (\alpha_i - \alpha)^2$. The squared error loss function is very popular but only one choice of many. The procedure you describe is correct under squared error loss; the question is whether that's appropriate in your problem or not.
18,871
Is the mean squared error used to assess relative superiority of one estimator over another?
Because the function $f(x) = x^2$ is differentiable, it makes finding the minimum MSE easier from both a theoretical and numerical standpoint. For example, in ordinary least squares you can solve explicitly for the fitted slope and intercept. From a numerical standpoint, you have more efficient solvers when you have a derivative as well. Mean square error typically overweights outliers in my opinion. This is why it is often more robust to use the mean absolute error, i.e. use $f(x) = |x|$ as your error function. However, since it is non-differentiable it makes the solutions more difficult to work with. MSE is probably a good choice if the error terms are normally distributed. If they have fatter tails, a more robust choice such as absolute value is preferable.
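The outlier point is easy to demonstrate numerically: the mean (the minimizer of total squared error) moves far more than the median (the minimizer of total absolute error) when a single large value is added. A small Python illustration with made-up data:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # made-up data
x_out = np.append(x, 100.0)                   # ... plus one gross outlier

# the mean minimizes total squared error, the median total absolute error
mean_shift = abs(x_out.mean() - x.mean())             # jumps by about 16.2
median_shift = abs(np.median(x_out) - np.median(x))   # moves only 0.5
```

One contaminating point drags the squared-error minimizer across most of the data range while the absolute-error minimizer barely moves, which is the robustness trade-off described above.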
18,872
Is the mean squared error used to assess relative superiority of one estimator over another?
Casella & Berger, Statistical Inference, 2nd edition, page 332, state that MSE penalizes equally for overestimation and underestimation, which is fine in the location case. In the scale case, however, 0 is a natural lower bound, so the estimation problem is not symmetric. Use of MSE in this case tends to be forgiving of underestimation. You might want to check which estimator satisfies UMVUE properties, which means using the Cramer-Rao lower bound. Page 341.
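The asymmetry is easy to see numerically: for a true variance $\sigma^2$, the squared-error loss of any underestimate is at most $\sigma^4$ (attained at 0), while overestimates can incur arbitrarily large loss. A tiny Python illustration with a made-up $\sigma^2$:

```python
import numpy as np

sigma2 = 4.0                                   # true variance (made up)
under = np.linspace(0.0, sigma2, 1001)         # every possible underestimate
worst_under_loss = ((under - sigma2) ** 2).max()   # = sigma2**2, i.e. sigma^4
over_loss = (25.0 * sigma2 - sigma2) ** 2      # one large overestimate
```

Since the worst underestimation loss is capped at $\sigma^4$ but overestimation loss is unbounded, an MSE-minimizing scale estimator is pushed toward underestimation, as the answer notes.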
18,873
What are the pros and cons of learning about a distribution algorithmically (simulations) versus mathematically?
This is an important question that I have given some thought over the years in my own teaching, and not only regarding distributions but also many other probabilistic and mathematical concepts. I don't know of any research that actually targets this question, so the following is based on experience, reflection and discussions with colleagues.

First, it is important to realize that what motivates students to understand a fundamentally mathematical concept, such as a distribution and its mathematical properties, may depend on a lot of things and vary from student to student. Among math students in general I find that mathematically precise statements are appreciated and too much beating around the bush can be confusing and frustrating (hey, get to the point, man). That is not to say that you shouldn't use, for example, computer simulations. On the contrary, they can be very illustrative of the mathematical concepts, and I know of many examples where computational illustrations of key mathematical concepts could help the understanding, but where the teaching is still old-fashioned math oriented. It is important, though, for math students that the precise math gets through.

However, your question suggests that you are not so much interested in math students. If the students have some kind of computational emphasis, computer simulations and algorithms are really good for quickly getting an intuition about what a distribution is and what kind of properties it can have. The students need to have good tools for programming and visualizing, and I use R. This implies that you need to teach some R (or another preferred language), but if this is part of the course anyway, that is not really a big deal. If the students are not expected to work rigorously with the math afterwards, I feel comfortable if they get most of their understanding from algorithms and simulations. I teach bioinformatics students like that.

Then for the students who are neither computationally oriented nor math students, it may be better to have a range of real and relevant data sets that illustrate how different kinds of distributions occur in their field. If you teach survival distributions to medical doctors, say, the best way to get their attention is to have a range of real survival data. To me, it is an open question whether a subsequent mathematical treatment or a simulation-based treatment is best. If you haven't done any programming before, the practical problems of doing so can easily overshadow the expected gain in understanding. The students may end up learning how to write if-then-else statements but fail to relate this to the real-life distributions.

As a general remark, I find that one of the really important points to investigate with simulations is how distributions transform, in particular in relation to test statistics. It is quite a challenge to understand that this single number you computed, the $t$-test statistic, say, from your entire data set has anything to do with a distribution, even if you understand the math quite well. As a curious side effect of having to deal with multiple testing for microarray data, it has actually become much easier to show the students how the distribution of the test statistic pops up in real-life situations.
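The last remark, that the sampling distribution of a test statistic "pops up" under simulation, is easy to demonstrate. A minimal sketch (Python here, though the answer mentions R; everything below is illustrative): simulate many samples under $H_0$ and compute the $t$ statistic of each one, and the familiar distribution and its 5% rejection rate appear.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 10, 100_000

# Many samples under H0: mu = 0, and the t statistic of each one.
x = rng.normal(size=(reps, n))
t = x.mean(axis=1) / (x.std(axis=1, ddof=1) / np.sqrt(n))

# Empirical rejection rate at the nominal 5% level; 2.262 is the
# 97.5% quantile of the t distribution with 9 degrees of freedom.
rate = (np.abs(t) > 2.262).mean()
print(rate)  # close to 0.05
```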
18,874
Root mean square vs average absolute deviation?
In theory, this should be determined by how important different sized errors are to you, or in other words, by your loss function. In the real world, people put ease of use first. So RMS deviations (or the related variances) are easier to combine and easier to calculate in a single pass, while average absolute deviations are more robust to outliers and exist for more distributions. Basic linear regression and many of its offshoots are based on minimising RMS errors. Another point is that the mean will minimise RMS deviations while the median will minimise absolute deviations, and you may prefer one of these.
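That closing claim, mean minimises RMS deviations and median minimises absolute deviations, can be checked numerically. A quick grid-search sketch (illustrative data):

```python
import numpy as np

x = np.array([1.0, 2.0, 2.5, 4.0, 10.0])
grid = np.linspace(0.0, 12.0, 1201)  # candidate constants c, step 0.01

rms = np.array([np.sqrt(np.mean((x - c) ** 2)) for c in grid])
mad = np.array([np.mean(np.abs(x - c)) for c in grid])

# The RMS-optimal constant lands on the mean, the MAD-optimal on the median.
print(grid[rms.argmin()], x.mean())      # both 3.9
print(grid[mad.argmin()], np.median(x))  # both 2.5
```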
18,875
1-step-ahead predictions with dynlm R package
Congratulations, you have found a bug. Prediction for dynlm with new data is broken if lagged variables are used. To see why, look at the output of

predict(model)
predict(model, newdata = data)

The results should be the same, but they are not. Without the newdata argument, the predict function basically grabs the model element from the dynlm output. With the newdata argument, predict tries to form a new model matrix from newdata. Since this involves parsing the formula supplied to dynlm, and the formula has the function L, which is defined only internally in the function dynlm, an incorrect model matrix is formed. If you try to debug, you will see that the lagged dependent variable is not being lagged when the newdata argument is supplied.

What you can do is lag the dependent variable yourself and include it in newdata. Here is code illustrating this approach. I use set.seed so it is easily reproducible.

library(dynlm)
set.seed(1)
y <- arima.sim(model = list(ar = c(.9)), n = 10)  # Create AR(1) dependent variable
A <- rnorm(10)                                    # Create independent variables
B <- rnorm(10)
C <- rnorm(10)
y <- y + .5 * A + .2 * B - .3 * C                 # Add relationship to independent variables
data <- cbind(y, A, B, C)

# Fit linear model
model <- dynlm(y ~ A + B + C + L(y, 1), data = data)

Here is the buggy behaviour:

> predict(model)
       2        3        4        5        6        7        8        9       10
3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738
> predict(model, newdata = data)
        1         2         3         4         5         6         7         8         9        10
2.1628335 3.7063579 2.9781417 2.1374301 3.2582376 1.9534558 1.3670995 2.4547626 0.8448223 1.8762437

Form the newdata:

# Forecast fix.
A <- c(A, rnorm(1))  # Assume we already have 1-step forecasts for A, B, C
B <- c(B, rnorm(1))
C <- c(C, rnorm(1))
newdata <- ts(cbind(A, B, C), start = start(y), freq = frequency(y))
newdata <- cbind(lag(y, -1), newdata)
colnames(newdata) <- c("y", "A", "B", "C")

Compare the forecast with the model fit:

> predict(model)
       2        3        4        5        6        7        8        9       10
3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738
> predict(model, newdata = newdata)
       1        2        3        4        5        6        7        8        9       10       11
      NA 3.500667 2.411196 2.627915 2.813815 2.468595 1.733852 2.114553 1.423225 1.470738 1.102367

As you can see, for the historical data the forecasts coincide, and the last element contains the 1-step-ahead forecast.
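The idea behind the workaround, building the lagged dependent variable by hand and supplying it as an ordinary regressor, is not specific to dynlm. Here is a hypothetical Python sketch of the same scheme with plain least squares (the coefficients, sample size and noise level are made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=n + 1)  # exogenous regressor, incl. one "future" value
y = np.zeros(n)
for t in range(1, n):
    y[t] = 0.9 * y[t - 1] + 0.5 * A[t] + rng.normal(scale=0.1)

# Build the lag by hand and regress y_t on [1, A_t, y_{t-1}]:
# the lagged y is supplied explicitly, mirroring the fix above.
X = np.column_stack([np.ones(n - 1), A[1:n], y[:-1]])
beta, *_ = np.linalg.lstsq(X, y[1:], rcond=None)

# 1-step-ahead forecast from the last observed y and the next A.
y_hat = beta @ np.array([1.0, A[n], y[-1]])
print(beta)   # roughly [0, 0.5, 0.9]
print(y_hat)
```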
18,876
1-step-ahead predictions with dynlm R package
Following @md-azimul-haque's request, I dug through my 4-year-old source code and found the following appropriately named function. Not sure if this is what @md-azimul-haque is looking for?

# pass in training data, test data,
# it will step through one by one
# need to give dependent var name, so that it can make this into a timeseries
predictDyn <- function(model, train, test, dependentvarname) {
    Ntrain <- nrow(train)
    Ntest <- nrow(test)
    # can't rbind ts's apparently, so convert to numeric first
    train[, dependentvarname] <- as.numeric(train[, dependentvarname])
    test[, dependentvarname] <- NA
    testtraindata <- rbind(train, test)
    testtraindata[, dependentvarname] <- ts(as.numeric(testtraindata[, dependentvarname]))
    for (i in 1:Ntest) {
        cat("predicting i", i, "of", Ntest, "\n")
        result <- predict(model, newdata = testtraindata, subset = 1:(Ntrain + i - 1))
        testtraindata[Ntrain + i, dependentvarname] <- result[Ntrain + i + 1 - start(result)][1]
    }
    testtraindata <- testtraindata[(Ntrain + 1):(Ntrain + Ntest), dependentvarname]
    names(testtraindata) <- 1:Ntest
    return(testtraindata)
}
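The core of the loop, feeding each 1-step forecast back in as the lagged value for the next step, is language-agnostic. A minimal Python sketch for a model of the form $y_t = b_0 + b_1 A_t + b_2 y_{t-1}$ (names and model form hypothetical):

```python
def predict_dyn(beta, y_last, A_future):
    """Iterate 1-step-ahead forecasts of y_t = b0 + b1*A_t + b2*y_{t-1},
    feeding each forecast back in as the next lagged value, which is
    what the R function above does one test row at a time."""
    b0, b1, b2 = beta
    preds, y_prev = [], y_last
    for a in A_future:
        y_prev = b0 + b1 * a + b2 * y_prev
        preds.append(y_prev)
    return preds

# A pure-AR check: with beta = (0, 0, 0.5) each step halves the last value.
out = predict_dyn((0.0, 0.0, 0.5), 8.0, [0.0, 0.0, 0.0])
print(out)  # [4.0, 2.0, 1.0]
```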
18,877
Mixed Models: How to derive Henderson's mixed-model equations?
One approach is to form the log-likelihood, differentiate it with respect to the random effects $\mathbf{u}$ and set this equal to zero, then repeat, differentiating with respect to the fixed effects $\boldsymbol{\beta}$. With the usual normality assumptions we have: $$ \begin{align*} \mathbf{y|u} &\sim \mathcal{N}\mathbf{(X\beta + Zu, R)} \\ \mathbf{u} &\sim \mathcal{N}(\mathbf{0, G}) \end{align*} $$ where $\mathbf{y}$ is the response vector, $\mathbf{u}$ and $\boldsymbol{\beta}$ are the random-effects and fixed-effects coefficient vectors, and $\mathbf{X}$ and $\mathbf{Z}$ are the model matrices for the fixed effects and random effects respectively. The log-likelihood is then: $$ -2\log L(\boldsymbol{\beta},\mathbf{u}) = \log|\mathbf{R}|+(\mathbf{y - X\boldsymbol{\beta} - Zu})'\mathbf{R}^{-1}(\mathbf{y - X\boldsymbol{\beta} - Zu}) +\log|\mathbf{G}|+\mathbf{u'G^{-1}u} $$ Differentiating with respect to the random and fixed effects: $$ \begin{align*} \frac{\partial \log L}{\partial \mathbf{u}} &= \mathbf{Z'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) - \mathbf{G^{-1}u} \\ \frac{\partial \log L}{\partial \boldsymbol{\beta}} &= \mathbf{X'R^{-1}}(\mathbf{y - X\boldsymbol{\beta} - Zu}) \end{align*} $$ Setting both of these equal to zero and re-arranging slightly, we obtain Henderson's mixed model equations: $$ \begin{align*} \mathbf{Z'R^{-1}}\mathbf{y} &= \mathbf{Z'R^{-1}X\boldsymbol{\beta}} + (\mathbf{Z'R^{-1}Z+G^{-1}})\mathbf{u} \\ \mathbf{X'R^{-1}}\mathbf{y} &= \mathbf{X'R^{-1}X\boldsymbol{\beta}} + \mathbf{X'R^{-1}Zu} \end{align*} $$
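As a numerical sanity check (a sketch, not part of the derivation): the solution of these equations coincides with the GLS estimator of $\boldsymbol{\beta}$ and the BLUP $\mathbf{u} = \mathbf{GZ'V^{-1}}(\mathbf{y - X\boldsymbol{\beta}})$, where $\mathbf{V = ZGZ' + R}$, which is the classical result. A small random example (dimensions and covariances arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, q = 8, 2, 3
X = rng.normal(size=(n, p))
Z = rng.normal(size=(n, q))
R = 1.5 * np.eye(n)
G = 0.8 * np.eye(q)
y = rng.normal(size=n)

Ri, Gi = np.linalg.inv(R), np.linalg.inv(G)

# Henderson's mixed model equations as one block system in (beta, u).
C = np.block([[X.T @ Ri @ X, X.T @ Ri @ Z],
              [Z.T @ Ri @ X, Z.T @ Ri @ Z + Gi]])
rhs = np.concatenate([X.T @ Ri @ y, Z.T @ Ri @ y])
sol = np.linalg.solve(C, rhs)
beta_mme, u_mme = sol[:p], sol[p:]

# Equivalent GLS / BLUP formulas via V = Z G Z' + R.
V = Z @ G @ Z.T + R
Vi = np.linalg.inv(V)
beta_gls = np.linalg.solve(X.T @ Vi @ X, X.T @ Vi @ y)
u_blup = G @ Z.T @ Vi @ (y - X @ beta_gls)

print(np.allclose(beta_mme, beta_gls), np.allclose(u_mme, u_blup))  # True True
```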
18,878
Mixed Models: How to derive Henderson's mixed-model equations?
For a very simple derivation, without making any assumption on normality, see my paper A. Neumaier and E. Groeneveld, Restricted maximum likelihood estimation of covariances in sparse linear models, Genet. Sel. Evol. 30 (1998), 3-26. Essentially, the mixed model $$y=X\beta+Zu+\epsilon,~~ Cov(u)=\sigma^2 G,~~ Cov(\epsilon)=\sigma^2 D,$$ where $u$ and $\epsilon $ have zero mean and wlog $G=LL^T$ and $D=MM^T$, is equivalent to the assertion that with $x=\pmatrix{\beta \cr u}$ and $P=\pmatrix{M & 0\cr 0 &L}$, $E=\pmatrix{I \cr 0}$, $A=\pmatrix{X & Z \cr 0 & I}$, the random vector $P^{-1}(Ey-Ax)$ has zero mean and covariance matrix $\sigma^2 I$. Thus the best linear unbiased predictor is given by the solution of the normal equations for the overdetermined linear system $P^{-1}Ax=P^{-1}Ey$. This gives Henderson's mixed model equations.
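A small numerical check of this equivalence (my own sketch, not from the paper): least squares on the whitened stacked system $P^{-1}Ax = P^{-1}Ey$ has normal equations $A'(PP')^{-1}Ax = A'(PP')^{-1}Ey$, and since $PP' = \mathrm{blockdiag}(D, G)$, these are exactly Henderson's mixed model equations.

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, q = 8, 2, 3
X, Z = rng.normal(size=(n, p)), rng.normal(size=(n, q))
y = rng.normal(size=n)
D, G = 1.5 * np.eye(n), 0.8 * np.eye(q)    # Cov(eps) = s2*D, Cov(u) = s2*G
M, L = np.linalg.cholesky(D), np.linalg.cholesky(G)

# Stacked system P^{-1} A x = P^{-1} E y with
# A = [[X, Z], [0, I]], E = [[I], [0]], P = blockdiag(M, L).
A = np.block([[X, Z], [np.zeros((q, p)), np.eye(q)]])
Ey = np.concatenate([y, np.zeros(q)])
Pinv = np.linalg.inv(np.block([[M, np.zeros((n, q))],
                               [np.zeros((q, n)), L]]))
x_ls, *_ = np.linalg.lstsq(Pinv @ A, Pinv @ Ey, rcond=None)

# Henderson's MME on the same small example.
Di, Gi = np.linalg.inv(D), np.linalg.inv(G)
C = np.block([[X.T @ Di @ X, X.T @ Di @ Z],
              [Z.T @ Di @ X, Z.T @ Di @ Z + Gi]])
rhs = np.concatenate([X.T @ Di @ y, Z.T @ Di @ y])
x_mme = np.linalg.solve(C, rhs)

print(np.allclose(x_ls, x_mme))  # True
```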
18,879
Is there any theoretical problem with averaging regression coefficients to build a model?
Given that OLS minimizes the MSE of the residuals amongst all unbiased linear estimators (by the Gauss-Markov theorem) , and that a weighted average of unbiased linear estimators (e.g., the estimated linear functions from each of your $k$ folds) is itself an unbiased linear estimator, it must be that OLS applied to the entire data set will outperform the weighted average of the $k$ linear regressions unless, by chance, the two give identical results. As to overfitting - linear models are not prone to overfitting in the same way that, for example, Gradient Boosting Machines are. The enforcement of linearity sees to that. If you have a very small number of outliers that pull your OLS regression line well away from where it should be, your approach may slightly - only slightly - ameliorate the damage, but there are far superior approaches to dealing with that problem in the context of a very small number of outliers, e.g., robust linear regression, or simply plotting the data, identifying, and then removing the outliers (assuming that they are indeed not representative of the data generating process whose parameters you are interested in estimating.)
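A quick simulation sketch of the mechanics (purely illustrative; sizes, coefficients and fold count are arbitrary): both the full-sample fit and the average of per-fold fits are unbiased, and by Gauss-Markov the full-sample fit cannot have larger variance.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 1000, 3, 5
beta_true = np.array([1.0, -2.0, 0.5])
X = rng.normal(size=(n, p))
y = X @ beta_true + rng.normal(size=n)

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

beta_full = ols(X, y)                       # OLS on the whole data set
folds = np.array_split(np.arange(n), k)     # k disjoint folds
beta_avg = np.mean([ols(X[f], y[f]) for f in folds], axis=0)

# Both are unbiased estimators of beta_true; the full-sample fit
# has (weakly) smaller variance over repeated samples.
print(beta_full, beta_avg)
```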
18,880
Is there any theoretical problem with averaging regression coefficients to build a model?
What about running a bootstrap? Create 100-1000 replicate samples with a 100% sampling rate using unrestricted random sampling (sampling with replacement). Run the models by replicate and get the median for each regression coefficient. Or try the mean. Also take a look at the distribution of each coefficient to see if signs change and at what cumulative distribution values.
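A minimal sketch of that recipe (Python, with made-up data; the idea carries over to any package): resample rows with replacement, refit, and summarise the bootstrap distribution of each coefficient.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 2.0 + 3.0 * x + rng.normal(size=n)   # true intercept 2, slope 3
X = np.column_stack([np.ones(n), x])

B = 500
boot = np.empty((B, 2))
for b in range(B):
    idx = rng.integers(0, n, size=n)     # sample rows with replacement
    boot[b] = np.linalg.lstsq(X[idx], y[idx], rcond=None)[0]

coef_median = np.median(boot, axis=0)    # median of each coefficient
sign_flips = (boot[:, 1] < 0).mean()     # how often the slope changes sign
print(coef_median, sign_flips)
```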
18,881
Mutual Information as probability
The measure you are describing is called the Information Quality Ratio [IQR] (Wijaya, Sarno and Zulaika, 2017). IQR is the mutual information $I(X,Y)$ divided by the "total uncertainty" (joint entropy) $H(X,Y)$. As described by Wijaya, Sarno and Zulaika (2017), the range of IQR is $[0,1]$:

The biggest value (IQR=1) can be reached if DWT can perfectly reconstruct a signal without losing of information. Otherwise, the lowest value (IQR=0) means MWT is not compatible with an original signal. In the other words, a reconstructed signal with particular MWT cannot keep essential information and totally different with original signal characteristics.

You can interpret it as the probability that a signal will be perfectly reconstructed without losing information. Notice that this interpretation is closer to the subjectivist interpretation of probability than to the traditional, frequentist interpretation. It is a probability for a binary event (reconstructing the information vs not), where IQR=1 means that we believe the reconstructed information to be trustworthy, and IQR=0 means the opposite. It shares all the properties of probabilities of binary events. Moreover, entropies share a number of other properties with probabilities (e.g. the definition of conditional entropies, independence, etc.). So it looks like a probability and quacks like it.

Wijaya, D.R., Sarno, R., & Zulaika, E. (2017). Information Quality Ratio as a novel metric for mother wavelet selection. Chemometrics and Intelligent Laboratory Systems, 160, 59-71.
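For a discrete joint distribution the ratio is straightforward to compute. A small sketch (my own, not from the paper) that also verifies the two endpoints of the $[0,1]$ range:

```python
import numpy as np

def iqr(joint):
    """Information Quality Ratio I(X;Y)/H(X,Y) from a joint pmf (2-D array)."""
    p = joint / joint.sum()
    px, py = p.sum(axis=1), p.sum(axis=0)
    H_xy = -np.sum(p[p > 0] * np.log2(p[p > 0]))       # joint entropy
    H_x = -np.sum(px[px > 0] * np.log2(px[px > 0]))
    H_y = -np.sum(py[py > 0] * np.log2(py[py > 0]))
    I = H_x + H_y - H_xy                                # mutual information
    return I / H_xy

print(iqr(np.eye(2)))        # Y fully determined by X -> 1.0
print(iqr(np.ones((2, 2))))  # X, Y independent        -> 0.0
```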
18,882
Mutual Information as probability
Here is the definition of a probability space. Let us use the notation there. IQR is a function of a tuple $(\Omega,\mathscr F,P,X,Y)$ (the first three components form the probability space the two random variables are defined on). A probability measure has to be a set function that satisfies all the conditions of the definition listed in Tim's answer. One would have to specify $\Theta:=(\Omega,\mathscr F,P,X,Y)$ as some subset of a set $\tilde\Omega$. Moreover, the set of $\Theta$'s has to form a field of subsets of $\tilde\Omega$, and $\text{IQR}(\Omega,\mathscr F,P,X,Y)$ has to satisfy all three properties listed in the definition of a probability measure in Tim's answer. Until one constructs such an object, it is wrong to say IQR is a probability measure. I for one do not see the utility of such a complicated probability measure (not of the IQR function itself, but of it as a probability measure). IQR in the paper cited in Tim's answer is not called or used as a probability but as a metric (the former is one kind of the latter, but the latter is not one kind of the former). On the other hand, there is a trivial construction that allows any number in $[0,1]$ to be a probability. Specifically, in our case, consider any given $\Theta$. Pick a two-element set as the sample space $\tilde\Omega:=\{a,b\}$, let the field be $\tilde{\mathscr F}:=2^{\tilde\Omega}$ and set the probability measure $\tilde P(a):=\text{IQR}(\Theta)$. We have a class of probability spaces indexed by $\Theta$.
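The two-point construction in the last paragraph can be written out mechanically. An illustrative sketch, where `p` stands for $\text{IQR}(\Theta)$:

```python
from itertools import combinations

def trivial_measure(p):
    """Two-point construction: sample space {a, b}, field = all subsets,
    P({a}) = p.  For any p in [0, 1] this is a bona fide probability
    measure, so any number in [0, 1] (e.g. an IQR value) is *a* probability."""
    assert 0.0 <= p <= 1.0
    atoms = {"a": p, "b": 1.0 - p}
    events = [frozenset(s) for r in range(3)
              for s in combinations(["a", "b"], r)]
    P = {e: sum(atoms[w] for w in e) for e in events}
    # check the axioms: P(empty) = 0, P(Omega) = 1, finite additivity
    assert P[frozenset()] == 0
    assert abs(P[frozenset({"a", "b"})] - 1.0) < 1e-12
    assert abs(P[frozenset({"a"})] + P[frozenset({"b"})]
               - P[frozenset({"a", "b"})]) < 1e-12
    return P

P = trivial_measure(0.3)
print(P)
```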
Mutual Information as probability
Going back in history a bit, the role of $\frac{I(X,Y)}{H(X,Y)}$ as a measure of probability can be seen, in part, in the 1961 article by Rajski: A Metric Space of Discrete Probability Distributions. This article outlines the development of the Rajski distance $(D_R)$:

$$D_R = 1 - \frac{I(X,Y)}{H(X,Y)}$$
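Under the same discrete setup as above, the Rajski distance can be computed directly from the joint distribution. A hedged sketch (my own implementation, entropies in bits):

```python
import numpy as np

def rajski_distance(pxy):
    """D_R = 1 - I(X;Y) / H(X,Y) for a discrete joint distribution pxy."""
    pxy = np.asarray(pxy, dtype=float)
    px = pxy.sum(axis=1, keepdims=True)   # marginal of X
    py = pxy.sum(axis=0, keepdims=True)   # marginal of Y
    nz = pxy > 0
    h_xy = -np.sum(pxy[nz] * np.log2(pxy[nz]))               # joint entropy
    mi = np.sum(pxy[nz] * np.log2(pxy[nz] / (px * py)[nz]))  # mutual information
    return 1.0 - mi / h_xy

print(rajski_distance([[0.5, 0.0], [0.0, 0.5]]))      # 0.0: identical information
print(rajski_distance([[0.25, 0.25], [0.25, 0.25]]))  # 1.0: independent variables
```

Distance 0 corresponds to IQR = 1 (perfect dependence) and distance 1 to IQR = 0 (independence), as the formula implies.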
Quadratic weighted kappa
The Kappa coefficient is a chance-adjusted index of agreement. In machine learning it can be used to quantify the amount of agreement between an algorithm's predictions and some trusted labels of the same objects.

Kappa starts with accuracy - the proportion of all objects that both the algorithm and the trusted labels assigned to the same category or class. However, it then attempts to adjust for the probability of the algorithm and trusted labels assigning items to the same category "by chance." It does this by assuming that the algorithm and the trusted labels each have a predetermined quota for the proportion of objects to assign to each category.

The original kappa coefficient assumed nominal categories, but this was later extended to non-nominal categories through "weighting." The idea behind weighting is that some categories are more similar than others, and thus some mismatching pairs of categories deserve varying degrees of "partial credit." Quadratic weights are one popular way of determining how much partial credit to assign to each mismatched pair of categories; there are other weighting schemes.

I have more information about all of these concepts, including MATLAB functions, on my website: mreliability.jmgirard.com

See also: Cohen, J. (1968). Weighted kappa: Nominal scale agreement with provision for scaled disagreement or partial credit. Psychological Bulletin, 70(4), 213–220.

Update: See my agreement package or Gwet's irrCAC package for R functions.
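A minimal sketch of quadratic weighted kappa (my own implementation, not the MATLAB/R code mentioned above), for integer-coded ordinal categories $0, \ldots, n-1$:

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, n_categories):
    """Cohen's kappa with quadratic weights for ordinal categories 0..n-1."""
    a = np.asarray(rater_a)
    b = np.asarray(rater_b)
    # observed confusion matrix, normalized to proportions
    O = np.zeros((n_categories, n_categories))
    for i, j in zip(a, b):
        O[i, j] += 1
    O /= O.sum()
    # expected matrix under chance agreement (product of the raters' marginals)
    E = np.outer(O.sum(axis=1), O.sum(axis=0))
    # quadratic disagreement weights: 0 on the diagonal, growing with distance
    idx = np.arange(n_categories)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n_categories - 1) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()

labels = [0, 1, 2, 2, 1]
preds  = [0, 1, 2, 2, 1]
print(quadratic_weighted_kappa(labels, preds, 3))  # 1.0 for perfect agreement
```

Note how the quadratic weights give near-misses (adjacent categories) most of the credit, while distant mismatches are penalized heavily.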
Quadratic weighted kappa
Here is a well-explained example of the Quadratic Weighted Kappa score: http://kagglesolutions.com/r/evaluation-metrics--quadratic-weighted-kappa
Is every non stationary series convertible to a stationary series through differencing
No. As a counterexample, let $X$ be any random variable and let the time series have the value $\exp(t X)$ at time $t$. The $k^\text{th}$ difference at time $i=0, 1, 2, \ldots$ is a linear combination $$\Delta^k(i) = \sum_{j=0}^k w_j \exp((i+j)X) = \exp(iX) \sum_{j=0}^k w_j \exp(jX) = \exp(iX) \Delta^k(0)$$ for coefficients $w_j$ (which can be computed but whose values are irrelevant for this discussion). Unless $X$ is constant, the left and right sides have different distributions, proving the $k^\text{th}$ difference is not stationary. Therefore no amount of differencing will make this time series stationary.
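A quick simulation (my own, not part of the original answer) illustrates the counterexample. Taking $X$ uniform on $(0,1)$, the variance of the first difference of $\exp(tX)$ keeps growing with $t$, so the differenced series cannot be stationary:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=200_000)   # one draw of X per simulated path
t = np.arange(5)
paths = np.exp(np.outer(X, t))            # row i is the path exp(t * X_i)
diffs = np.diff(paths, axis=1)            # first difference of every path

# If differencing produced a stationary series these variances would agree;
# instead they grow with t (the same happens for higher-order differences).
print(diffs.var(axis=0))
```

The variances explode with $t$, in line with the factor $\exp(iX)$ in the identity above.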
Is every non stationary series convertible to a stationary series through differencing
The answer by whuber is correct; there are lots of time-series that cannot be made stationary by differencing. Notwithstanding that this answers your question in a strict sense, it might also be worth noting that within the broad class of ARIMA models with white noise, differencing can turn them into ARMA models, and the latter are (asymptotically) stationary when the remaining roots of the auto-regressive characteristic polynomial are inside the unit circle. If you specify an appropriate starting distribution for the observable series that is equal to the stationary distribution, you get a strictly stationary time-series process. So as a general rule, no, not every time-series is convertible to a stationary series by differencing. However, if you restrict your scope to the broad class of time-series models in the ARIMA class with white noise and appropriately specified starting distribution (and other AR roots inside the unit circle) then yes, differencing can be used to get stationarity.
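A small simulation sketch (my own) of the simplest case in this class: a random walk, i.e. ARIMA(0,1,0), is non-stationary, but its first difference is just the white noise that generated it:

```python
import numpy as np

rng = np.random.default_rng(1)
eps = rng.normal(size=10_000)     # white noise with unit variance
walk = np.cumsum(eps)             # ARIMA(0,1,0): a random walk, non-stationary
diff = np.diff(walk)              # first differencing recovers the white noise

# The level of the walk wanders without bound, while its first difference
# behaves like a stationary series with variance close to 1.
print(walk.var(), diff.var())
```

This is exactly the restricted sense in which differencing "works": within the ARIMA class, it removes the unit roots and leaves an (asymptotically) stationary ARMA process.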
What is the difference between "count proportions" and "continuous proportions"?
Perhaps an example would help. Suppose you observe a number of people and count how many of them are women. The resulting proportion is what is called a count proportion; it takes on values between zero and one, but only $n+1$ of them, where $n$ is the total number you observed. Now suppose you buy a sausage from your local supermarket and observe on the label that it is 80% pork: that is an example of a continuous proportion, which could take on any value between 0 and 100 percent. The distinction matters in modelling: in the first case it is meaningful to predict the probability of a random person being a woman (logistic regression), but in the second case that is not a sensible question and something else (often beta regression) would be preferred.
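A toy sketch of the distinction (with hypothetical numbers): a count proportion has support on only the $n+1$ values $0, 1/n, \ldots, 1$, while a continuous proportion can land anywhere in the interval:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20

# Count proportion: number of women among n observed people -- only the
# n + 1 values 0, 1/n, ..., 1 are possible
count_prop = rng.binomial(n, 0.4, size=5) / n

# Continuous proportion: e.g. the pork content of a sausage -- any value
# in (0, 1) is possible
cont_prop = rng.beta(8, 2, size=5)

print(count_prop)   # all multiples of 1/20
print(cont_prop)    # arbitrary reals in (0, 1)
```

The binomial/logistic machinery applies to the first kind, while a continuous distribution on $(0,1)$ such as the beta is the natural model for the second.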
How can a smaller learning rate hurt the performance of a gbm?
Yes, you're right that a lower learning rate should find a better optimum than a higher learning rate. But you should tune the hyper-parameters using grid search to find the best combination of the learning rate along with the other hyper-parameters. The GBM algorithm uses multiple hyper-parameters in addition to the learning rate (shrinkage); these are:

Number of trees
Interaction depth
Minimum observations in a node
Bag fraction (fraction of randomly selected observations)

The grid search needs to check all of these in order to determine the optimal set of parameters. For example, on some data-sets I've tuned with GBM, I've observed that accuracy varies widely as each hyper-parameter is changed.

I haven't run GBM on your sample data-set, but I'll refer to a similar tuning exercise for another data-set. Refer to this graph on a classification problem with highly imbalanced classes. Although the accuracy is highest for a lower learning rate, e.g. for a max. tree depth of 16, the Kappa metric is 0.425 at learning rate 0.2, which is better than 0.415 at a learning rate of 0.35. But when you look at learning rate 0.25 vs. 0.26, there is a sharp but small increase in Kappa for max tree depths of 14, 15 and 16, whereas it continues decreasing for tree depths 12 and 13.

Hence, I would suggest you try the grid search. Additionally, as you mentioned, this situation could also have been aggravated by the smaller sample size of the data-set.
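As a hedged sketch of such a grid search, using scikit-learn's gradient boosting (not the same implementation as R's gbm, but with analogous hyper-parameters) on synthetic data; the tiny grid is only illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Tune the learning rate jointly with the other hyper-parameters,
# never in isolation (grid kept tiny here so it runs quickly).
grid = {
    "learning_rate": [0.01, 0.1],
    "n_estimators": [50, 200],      # number of trees
    "max_depth": [1, 3],            # interaction depth
    "subsample": [0.5, 1.0],        # bag fraction
}
search = GridSearchCV(GradientBoostingClassifier(random_state=0),
                      grid, cv=3, scoring="accuracy")
search.fit(X, y)
print(search.best_params_, round(search.best_score_, 3))
```

Whether a small or a large learning rate wins depends on the other parameters in the grid, which is exactly the point: the best combination, not the best single value, is what matters.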
How can a smaller learning rate hurt the performance of a gbm?
Sandeep S. Sandhu has provided a great answer. As for your case, I think your model has not converged yet at those small learning rates. In my experience, when using a learning rate as small as 0.001 on a gradient boosting tree, you need about 100,000 boosting stages (or trees) to reach the minimum. So if you increase the boosting rounds to ten times more, you should be able to see the smaller learning rate perform better than the large one.

You can also check the website by Laurae++ for a great description of each parameter of Lightgbm/XGBoost (https://sites.google.com/view/lauraepp/parameters and click "Learning Rate"). Here is the most important quote about the learning rate:

Beliefs

Once your learning rate is fixed, do not change it. It is not a good practice to consider the learning rate as a hyperparameter to tune. Learning rate should be tuned according to your training speed and performance tradeoff. Do not let an optimizer tune it. One must not expect to see an overfitting learning rate of 0.0202048.

Details

Each iteration is supposed to provide an improvement to the training loss. Such improvement is multiplied with the learning rate in order to perform smaller updates. Smaller updates allow to overfit slower the data, but requires more iterations for training. For instance, doing 5 iterations at a learning rate of 0.1 approximately would require doing 5000 iterations at a learning rate of 0.001, which might be obnoxious for large datasets. Typically, we use a learning rate of 0.05 or lower for training, while a learning rate of 0.10 or larger is used for tinkering the hyperparameters.
Why does the central limit theorem work with a single sample?
The CLT (at least in some of its various forms) tells us that in the limit as $n\to\infty$ the distribution of a single standardized sample mean ($\frac{\bar{X}-\mu}{\sigma/\sqrt{n}}$) converges to a normal distribution (under some conditions). The CLT does not tell us what happens at $n=50$ or $n=50{,}000$.

But in attempting to motivate the CLT, particularly when no proof of the CLT is offered, some people rely on the sampling distribution of $\bar{X}$ for finite samples and show that as larger samples are taken the sampling distribution gets closer to the normal. Strictly speaking this isn't demonstrating the CLT; it's nearer to demonstrating the Berry-Esseen theorem, since it demonstrates something about the rate at which the approach to normality comes in -- but that in turn would lead us to the CLT, so it serves well enough as motivation (and in fact, often something like the Berry-Esseen result comes closer to what people actually want to use in finite samples anyway, so that motivation may in some sense be more useful in practice than the central limit theorem itself).

the distribution of these sample means would be normal.

Well, no, they would be non-normal, but in practice they would be very close to normal (heights are somewhat skew, but not very skew). [Note again that the CLT really tells us nothing about the behavior of sample means for $n=50$; this is what I was getting at with my earlier discussion of Berry-Esseen, which does deal with how far from a normal cdf the distribution function of standardized means can be for finite samples.]

The real world case I am thinking about is doing statistics on a dataset of 50,000 twitter users. That dataset obviously isn't repeated samples, it is just one big sample of 50,000.

For many distributions, a sample mean of 50,000 items would have very close to a normal distribution -- but it's not guaranteed, even at $n=50{,}000$, that you will have very close to a normal distribution (if the distribution of the individual items is sufficiently skewed, for example, then the distribution of sample means may still be skew enough to make a normal approximation untenable). The Berry-Esseen theorem would lead us to anticipate that exactly that problem might occur -- and demonstrably, it does. It's easy to give examples to which the CLT applies but for which $n=50{,}000$ is not nearly a large enough sample for the standardized sample mean to be close to normal.
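A simulation sketch (my own, with an arbitrarily chosen lognormal population) of this last point: a sufficiently skewed distribution leaves the sample mean visibly non-normal even at $n = 50{,}000$:

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    """Sample skewness: the mean of the cubed standardized values."""
    z = (x - x.mean()) / x.std()
    return (z ** 3).mean()

# Heavily right-skewed population with finite variance, so the CLT does apply
# in the limit -- yet at n = 50,000 the sample mean is still strongly skewed.
n = 50_000
means = np.array([rng.lognormal(mean=0.0, sigma=4.0, size=n).mean()
                  for _ in range(300)])
print(skewness(means))   # clearly positive, i.e. far from the 0 of a normal
```

The 300 simulated sample means are themselves dominated by rare huge observations, exactly the failure mode the Berry-Esseen bound warns about.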
Is ANOVA relying on the method of moments and not on the maximum likelihood?
I first encountered the ANOVA when I was a Master's student at Oxford in 1978. Modern approaches, by teaching continuous and categorical variables together in the multiple regression model, make it difficult for younger statisticians to understand what is going on. So it can be helpful to go back to simpler times.

In its original form, the ANOVA is an exercise in arithmetic whereby you break up the total sum of squares into pieces associated with treatments, blocks, interactions, whatever. In a balanced setting, sums of squares with an intuitive meaning (like SSB and SST) add up to the adjusted total sum of squares. All of this works thanks to Cochran's theorem. Using Cochran, you can work out the expected values of these terms under the usual null hypotheses, and the F statistics flow from there. As a bonus, once you start thinking about Cochran and sums of squares, it makes sense to go on slicing and dicing your treatment sums of squares using orthogonal contrasts. Every entry in the ANOVA table should have an interpretation of interest to the statistician and yield a testable hypothesis.

I recently wrote an answer where the difference between MOM and ML methods arose. The question turned on estimating random effects models. At this point, the traditional ANOVA approach totally parts company with maximum likelihood estimation, and the estimates of the effects are no longer the same. When the design is unbalanced, you don't get the same F statistics either.

Back in the day, when statisticians wanted to compute random effects from split-plot or repeated measures designs, the random effects variance was computed from the mean squares of the ANOVA table. So if you have a plot with variance $\sigma^2_p$ and the residual variance is $\sigma^2$, you might have that the expected value of the mean square ("expected mean square", EMS) for plots is $\sigma^2 + n\sigma_p^2$, with $n$ the number of splits in the plot. You set the mean square equal to its expectation and solve for $\hat{\sigma}_p^2$. The ANOVA yields a method of moments estimator for the random effect variance. Now, we tend to solve such problems with mixed effects models, and the variance components are obtained through maximum likelihood estimation or REML.

The ANOVA as such is not a method of moments procedure. It turns on splitting the sum of squares (or more generally, a quadratic form of the response) into components that yield meaningful hypotheses. It depends strongly on normality, since we want the sums of squares to have chi-squared distributions for the F tests to work. The maximum likelihood framework is more general and applies to situations like generalized linear models where sums of squares do not apply. Some software (like R) invites confusion by attaching the name "anova" to likelihood ratio tests with asymptotic chi-squared distributions. One can justify use of the term "anova", but strictly speaking, the theory behind it is different.
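The method-of-moments step described here is a one-liner. This sketch (with made-up mean squares) solves the EMS equations $E[\mathrm{MS}_{\text{plot}}] = \sigma^2 + n\sigma_p^2$ and $E[\mathrm{MS}_{\text{resid}}] = \sigma^2$ for $\hat{\sigma}_p^2$:

```python
# Method-of-moments estimate of the plot variance from an ANOVA table:
# setting each mean square equal to its expectation and solving gives
# sigma_p^2 = (MS_plot - MS_residual) / n.
def mom_plot_variance(ms_plot, ms_residual, n_splits):
    return (ms_plot - ms_residual) / n_splits

# hypothetical mean squares from a split-plot ANOVA with 4 splits per plot
print(mom_plot_variance(ms_plot=26.0, ms_residual=6.0, n_splits=4))  # -> 5.0
```

Note this estimator can even come out negative when MS_plot < MS_residual, one of the practical reasons REML/ML estimation of variance components took over.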
Is ANOVA relying on the method of moments and not on the maximum likelihood?
I first encountered the ANOVA when I was a Master's student at Oxford in 1978. Modern approaches, by teaching continuous and categorical variables together in the multiple regression model, make it difficult for younger statisticians to understand what is going on. So it can be helpful to go back to simpler times.

In its original form, the ANOVA is an exercise in arithmetic whereby you break up the total sum of squares into pieces associated with treatments, blocks, interactions, whatever. In a balanced setting, sums of squares with an intuitive meaning (like SSB and SST) add up to the adjusted total sum of squares. All of this works thanks to Cochran's Theorem. Using Cochran, you can work out the expected values of these terms under the usual null hypotheses, and the F statistics flow from there. As a bonus, once you start thinking about Cochran and sums of squares, it makes sense to go on slicing and dicing your treatment sums of squares using orthogonal contrasts. Every entry in the ANOVA table should have an interpretation of interest to the statistician and yield a testable hypothesis.

I recently wrote an answer where the difference between MOM and ML methods arose. The question turned on estimating random effects models. At this point, the traditional ANOVA approach totally parts company with maximum likelihood estimation, and the estimates of the effects are no longer the same. When the design is unbalanced, you don't get the same F statistics either. Back in the day, when statisticians wanted to compute random effects from split-plot or repeated measures designs, the random effects variance was computed from the mean squares of the ANOVA table. So if you have a plot with variance $\sigma^2_p$ and the residual variance is $\sigma^2$, you might have that the expected value of the mean square ("expected mean square", EMS) for plots is $\sigma^2 + n\sigma_p^2$, with $n$ the number of splits in the plot. You set the mean square equal to its expectation and solve for $\hat{\sigma}_p^2$. The ANOVA yields a method of moments estimator for the random effect variance. Now, we tend to solve such problems with mixed effects models, and the variance components are obtained through maximum likelihood estimation or REML.

The ANOVA as such is not a method of moments procedure. It turns on splitting the sum of squares (or more generally, a quadratic form of the response) into components that yield meaningful hypotheses. It depends strongly on normality, since we want the sums of squares to have chi-squared distributions for the F tests to work. The maximum likelihood framework is more general and applies to situations like generalized linear models where sums of squares do not apply. Some software (like R) invites confusion by attaching the name "anova" to likelihood ratio tests with asymptotic chi-squared distributions. One can justify use of the term "anova", but strictly speaking, the theory behind it is different.
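The EMS recipe described above can be checked numerically. Here is a minimal sketch (my own illustration, not part of the original answer) for a balanced one-way random-effects layout: simulate plots, compute the ANOVA mean squares, and solve the moment equation E[MSB] = σ² + nσ_p² for the plot variance.

```python
import numpy as np

rng = np.random.default_rng(0)
k, n = 200, 5                      # number of plots, splits per plot
sigma_p, sigma = 2.0, 1.0          # true plot SD and residual SD

# simulate y_ij = mu + b_i + e_ij
b = rng.normal(0, sigma_p, size=(k, 1))
y = 10.0 + b + rng.normal(0, sigma, size=(k, n))

grand = y.mean()
plot_means = y.mean(axis=1)

# ANOVA mean squares for a balanced one-way layout
msb = n * ((plot_means - grand) ** 2).sum() / (k - 1)         # between plots
mse = ((y - plot_means[:, None]) ** 2).sum() / (k * (n - 1))  # within plots

# method of moments: set MSB equal to its expectation and solve for sigma_p^2
sigma_p2_hat = (msb - mse) / n
print(sigma_p2_hat)  # should be near sigma_p**2 = 4
```

With REML (e.g. `lmer` in R) the estimate agrees in the balanced case but diverges once the design is unbalanced, which is exactly the point made above.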
How are filters and activation maps connected in Convolutional Neural Networks?
The second convolutional neural network (CNN) architecture you posted comes from this paper. In the paper the authors give a description of what happens between layers S2 and C3. Their explanation is not very clear though. I'd say that this CNN architecture is not 'standard', and it can be quite confusing as a first example for CNNs.

First of all, a clarification is needed on how feature maps are produced and what their relationship with filters is. A feature map is the result of the convolution of a filter with an input image or with a feature map from the previous layer. Let's take the layers INPUT and C1 as an example. In the most common case, to get 6 feature maps of size $28 \times 28$ in layer C1 you need 6 filters of size $5 \times 5$ (the result of a 'valid' convolution of an image of size $M \times M$ with a filter of size $N \times N$, assuming $M \geq N$, has size $(M-N+1) \times (M-N+1)$). You could, however, produce 6 feature maps by combining feature maps produced by more or fewer than 6 filters (e.g. by summing them up). In the paper, nothing of the sort is implied though for layer C1.

What happens between layer S2 and layer C3 is the following. There are 16 feature maps in layer C3 produced from 6 feature maps in layer S2. The number of filters in layer C3 is indeed not obvious. In fact, from the architecture diagram alone, one cannot judge what the exact number of filters that produce those 16 feature maps is. The authors of the paper provide the following table (page 8):

With the table they provide the following explanation (bottom of page 7): Layer C3 is a convolutional layer with 16 feature maps. Each unit in each feature map is connected to several $5 \times 5$ neighborhoods at identical locations in a subset of S2's feature maps.

In the table the authors show that every feature map in layer C3 is produced by combining 3 or more feature maps (page 8): The first six C3 feature maps take inputs from every contiguous subset of three feature maps in S2. The next six take input from every contiguous subset of four. The next three take input from some discontinuous subsets of four. Finally, the last one takes input from all S2 feature maps.

Now, how many filters are there in layer C3? Unfortunately, they do not explain this. The two simplest possibilities would be:

1. There is one filter per S2 feature map per C3 feature map, i.e. there is no filter sharing between S2 feature maps associated with the same C3 feature map.
2. There is one filter per C3 feature map, which is shared across the (3 or more) feature maps of layer S2 that are combined.

In both cases, to 'combine' would mean that the results of convolution per S2 feature map group would need to be combined to produce C3 feature maps. The authors do not specify how this is done, but addition is a common choice (see for example the animated gif near the middle of this page).

The authors give some additional information though, which can help us decipher the architecture. They say that 'layer C3 has 1,516 trainable parameters' (page 8). We can use this information to decide between cases (1) and (2) above. In case (1) we have $(6 \times 3) + (9 \times 4) + (1 \times 6) = 60$ filters. The filter size is $(14-10+1) \times (14-10+1) = 5 \times 5$. The number of trainable parameters in this case would be $5 \times 5 \times 60 = 1,500$ trainable parameters. If we assume one bias unit per C3 feature map, we get $1,500 + 16 = 1,516$ parameters, which is what the authors say. For completeness, in case (2) we would have $(5 \times 5 \times 16) + 16 = 416$ parameters, which is not the case.

Therefore, if we look again at Table I above, there are 10 distinct C3 filters associated with each S2 feature map (thus 60 distinct filters in total). The authors explain this type of choice: Different feature maps [in layer C3] are forced to extract different (hopefully complementary) features because they get different sets of inputs.

I hope this clarifies the situation.
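The 1,516-parameter bookkeeping for case (1) is easy to verify. A quick sketch of the arithmetic, following the connection pattern in Table I (6 maps fed by 3 inputs each, 9 fed by 4, and 1 fed by 6):

```python
# connection pattern from Table I: (number of C3 maps, S2 inputs per map)
pattern = [(6, 3), (9, 4), (1, 6)]

n_filters = sum(maps * inputs for maps, inputs in pattern)
filter_size = 5 * 5          # 'valid' conv: input 14, output 10, so 14 - 10 + 1 = 5
n_c3_maps = sum(maps for maps, _ in pattern)

weights = n_filters * filter_size
params = weights + n_c3_maps  # one bias per C3 feature map
print(n_filters, params)  # 60 filters, 1516 trainable parameters
```

The total matches the paper's figure exactly, which is why case (1), no filter sharing, must be the intended reading.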
How are filters and activation maps connected in Convolutional Neural Networks?
You are indeed correct that the value before the @ indicates the number of filters, and not the number of feature maps (although for the first convolutional layers these values coincide). Regarding your last question: yes, it does make sense to have every feature map at layer l connected to every filter at layer l+1. The sole reason for this is that it greatly increases the expressive power of the network, as it has more ways (paths) to combine the feature maps, which thus allows it to better distinguish whatever is on the input image. Lastly, I don't know if you are practicing your neural network skills by implementing them yourself, but if you just want to apply convolutional networks to a specific task then there are already several excellent neural network libraries, such as Theano, Brainstorm, and Caffe.
Capturing initial patterns when using truncated backpropagation through time (RNN/LSTM)
It's true that limiting your gradient propagation to 30 time steps will prevent it from learning everything possible in your dataset. However, it depends strongly on your dataset whether that will prevent it from learning important things about the features in your model!

Limiting the gradient during training is more like limiting the window over which your model can assimilate input features and hidden state with high confidence. Because at test time you apply your model to the entire input sequence, it will still be able to incorporate information about all of the input features into its hidden state. It might not know exactly how to preserve that information until it makes its final prediction for the sentence, but there might be some (admittedly weaker) connections that it would still be able to make.

Think first about a contrived example. Suppose your network is to generate a 1 if there is a 1 anywhere in its input, and a 0 otherwise. Say you train the network on sequences of length 20 and limit the gradient to 10 steps. If the training dataset never contains a 1 in the final 10 steps of an input, then the network is going to have a problem with test inputs of any configuration. However, if the training set has some examples like [1 0 0 ... 0 0 0] and others like [0 0 0 ... 1 0 0], then the network will be able to pick up on the "presence of a 1" feature anywhere in its input.

Back to sentiment analysis then. Let's say during training your model encounters a long negative sentence like "I hate this because ... around and around" with, say, 50 words in the ellipsis. By limiting the gradient propagation to 30 time steps, the model will not connect the "I hate this because" to the output label, so it won't pick up on "I", "hate", or "this" from this training example. But it will pick up on the words that are within 30 time steps from the end of the sentence. If your training set contains other examples that contain those same words, possibly along with "hate", then it has a chance of picking up on the link between "hate" and the negative sentiment label. Also, if you have shorter training examples, say, "We hate this because it's terrible!", then your model will be able to connect the "hate" and "this" features to the target label. If you have enough of these training examples, then the model ought to be able to learn the connection effectively.

At test time, let's say you present the model with another long sentence like "I hate this because ... on the gecko!" The model's input will start out with "I hate this", which will be passed into the hidden state of the model in some form. This hidden state is used to influence future hidden states of the model, so even though there might be 50 words before the end of the sentence, the hidden state from those initial words has a theoretical chance of influencing the output, even though it was never trained on samples that contained such a large distance between the "I hate this" and the end of the sentence.
Capturing initial patterns when using truncated backpropagation through time (RNN/LSTM)
@Imjohns3 is right: if you process long sequences (size N) and limit backpropagation to the last K steps, the network won't learn patterns at the beginning. I have worked with long texts and use the approach where I compute the loss and do backpropagation after every K steps. Let's assume that my sequence has N=1000 tokens: my RNN processes the first K=100, then I try to make a prediction (compute the loss) and backpropagate. Next, while maintaining the RNN state, I break the gradient chain (in PyTorch: detach) and start another K=100 steps. A good example of this technique you can find here: https://github.com/ksopyla/pytorch_neural_networks/blob/master/RNN/lstm_imdb_tbptt.py
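The chunking loop described above can be sketched without any deep-learning library. The toy sketch below is my own illustration (a real implementation would use an autodiff framework and call detach on the hidden state): walk a length-N sequence in windows of K, carry the hidden state across windows, and treat the carried-in state as a constant for gradient purposes.

```python
import numpy as np

def tbptt_chunks(seq, k):
    """Yield (start, chunk) windows of length at most k."""
    for start in range(0, len(seq), k):
        yield start, seq[start:start + k]

rng = np.random.default_rng(1)
N, K, H = 1000, 100, 8
seq = rng.normal(size=(N, 4))          # toy token embeddings
W_h = 0.5 * np.eye(H)                  # toy recurrent weights
W_x = 0.1 * rng.normal(size=(4, H))

h = np.zeros(H)
n_updates = 0
for start, chunk in tbptt_chunks(seq, K):
    h_in = h.copy()                    # "detached" carry-in: no gradient flows past here
    for x in chunk:                    # forward pass through K steps
        h_in = np.tanh(h_in @ W_h + x @ W_x)
    # ...compute the loss on this chunk and backpropagate through K steps only...
    h = h_in                           # carry the state (but not gradients) forward
    n_updates += 1

print(n_updates)  # 1000 tokens in 100-step windows -> 10 parameter updates
```

The key point is the copy before each window: the state survives across windows, so information from early tokens can still reach later predictions, but gradients stop at the window boundary.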
How to deal with "non-integer" warning from negative binomial GLM?
The negative binomial is a distribution for count data, so you really want your response variable to be counts (that is, non-negative whole numbers). That said, it is appropriate to account for "different sampling efforts" (I don't know exactly what you are referring to, but I get the gist of it). However, you should not try to do that by dividing your counts by another number. Instead, you need to use that other number as an offset. There is a nice discussion on CV of what an offset is here: When to use an offset in a Poisson regression? My guess is that your model should be something like: mst.nb = glm.nb(Larvae+Nymphs+Adults ~ B.type+Month+Season + offset(log(num.hosts)), data=MI.df)
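The reason the offset works is that the model then describes a rate: log E[count] = Xβ + log(effort), so effort enters with a fixed coefficient of 1 rather than the counts being divided by it (which would destroy their count distribution). A toy sketch (my own illustration with made-up numbers, not the poster's data) showing that a log-effort offset recovers the underlying rate from counts collected with unequal effort:

```python
import numpy as np

rng = np.random.default_rng(42)
rate = 2.0                               # true ticks per host
effort = rng.integers(1, 50, size=5000)  # hosts examined per sample (unequal effort)
counts = rng.poisson(rate * effort)      # observed counts scale with effort

# With log(effort) as an offset, the intercept-only MLE of the rate is
# total counts / total effort -- not the mean of counts/effort per sample:
rate_hat = counts.sum() / effort.sum()
print(rate_hat)  # close to 2.0
```

The same logic carries over to glm.nb with offset(log(num.hosts)): the fitted coefficients then describe the per-host rate while the response stays a genuine count.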
How to deal with "non-integer" warning from negative binomial GLM?
It's a warning, not a fatal error. glm.nb() is expecting counts as your outcome variable, which are integers. Your data are not integers: 251.529. R is saying "Hmmm... you might want to check this out and make sure it's OK, because it might not look right to me." If my memory is correct, SPSS doesn't give such a warning. If you're sure that you're using the right model, even though you don't have integers, ignore it and keep going.
How to deal with "non-integer" warning from negative binomial GLM?
I'm an ecological parasitologist. The way you should handle this is by cbind-ing the hosts that were parasitised and the ones that were not, and then using a binomial distribution. Let's say you want to look at parasitised larvae: you would have the number of larvae that were healthy, and the number that were parasitised. For example, given Lh and Lp:

parasitizedL=cbind(Lp, Lh)
hist(parasitizedL)

I'm guessing you can just use a regular binomial distribution with glm(), and might not need a negative binomial model.

PLarvae1=glm(parasitizedL~B.type+Month+Season, family=binomial, data=MI.df)

Then do stepwise model reduction to see which of your factors significantly affect parasitism: see this link. However, it looks like you need to have random effects to account for repeated sampling, so likely your random effect will be (1|Season/Month), but it's hard to tell without knowing your data.
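The cbind(successes, failures) form is just grouped binomial data: each row contributes Lp "events" out of Lp + Lh "trials", and the GLM models the log-odds of parasitism. A small sketch with hypothetical counts (illustrative numbers, not the poster's data) of the group-level quantities the binomial GLM estimates:

```python
import math

# hypothetical counts: (parasitised, healthy) larvae per habitat type
groups = {"forest": (30, 70), "meadow": (10, 90)}

for name, (lp, lh) in groups.items():
    p = lp / (lp + lh)                 # parasitism proportion
    logit = math.log(p / (1 - p))      # the scale a binomial GLM works on
    print(name, round(p, 2), round(logit, 2))

# odds ratio forest vs meadow, which a habitat coefficient would capture
odds_ratio = (30 / 70) / (10 / 90)
print(round(odds_ratio, 3))
```

A one-factor binomial GLM fitted to these two rows would return exactly these proportions; the point of the full model (and of the random effects) is to do this while sharing information across months and seasons.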
Does the "No Free Lunch Theorem" apply to general statistical tests?
I don't know of a proof but I'll bet this applies quite generally. An example is an experiment with 2 subjects in each of 2 treatment groups. The Wilcoxon test cannot possibly be significant at the 0.05 level, but the t-test can. You could say that its power comes more than half from its assumptions and not just from the data. To your original problem, it is not appropriate to proceed as if the observations per subject are independent. To take things into account after the fact is certainly not good statistical practice except in very special circumstances (e.g., cluster sandwich estimators).
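The 2-vs-2 claim is easy to check by enumeration: with ranks 1–4 split into two groups of two, there are only C(4,2) = 6 equally likely rank assignments under the null, so even the most extreme arrangement has two-sided p = 2/6 ≈ 0.33, and the Wilcoxon rank-sum test can never reach 0.05. A quick sketch:

```python
from itertools import combinations

ranks = [1, 2, 3, 4]
# rank sums for every possible assignment of two observations to group 1
sums = [sum(c) for c in combinations(ranks, 2)]   # 6 equally likely outcomes

def two_sided_p(w, sums):
    # probability of a rank sum at least as extreme as w, doubled and capped at 1
    lo = sum(s <= w for s in sums)
    hi = sum(s >= w for s in sums)
    return min(1.0, 2 * min(lo, hi) / len(sums))

min_p = min(two_sided_p(w, sums) for w in set(sums))
print(min_p)  # 1/3: the smallest attainable p-value, so never below 0.05
```

The t-test, by contrast, has a continuous null distribution even with 2 + 2 observations, so it can produce arbitrarily small p-values; that extra power is bought entirely with the normality assumption.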
Does the "No Free Lunch Theorem" apply to general statistical tests?
I don't know of a proof but I'll bet this applies quite generally. An example is an experiment with 2 subjects in each of 2 treatment groups. The Wilcoxon test cannot possibly be significant at the
Does the "No Free Lunch Theorem" apply to general statistical tests? I don't know of a proof but I'll bet this applies quite generally. An example is an experiment with 2 subjects in each of 2 treatment groups. The Wilcoxon test cannot possibly be significant at the 0.05 level, but the t-test can. You could say that its power comes more than half from its assumptions and not just from the data. To your original problem, it is not appropriate to proceed as if the observations per subject are independent. To take things into account after the fact is certainly not good statistical practice except in very special circumstances (e.g., cluster sandwich estimators).
Does the "No Free Lunch Theorem" apply to general statistical tests? I don't know of a proof but I'll bet this applies quite generally. An example is an experiment with 2 subjects in each of 2 treatment groups. The Wilcoxon test cannot possibly be significant at the