Questions about specifying linear mixed models in R for repeated measures data with additional nesting structure
I will answer each of your queries in turn.
Is the syntax correctly specifying the clustering and random effects?
The model you've fit here is, in mathematical terms, the model
$$ Y_{ijk} = {\bf X}_{ijk} {\boldsymbol \beta} + \eta_{i} + \theta_{ij} + \varepsilon_{ijk}$$
where
$Y_{ijk}$ is the reaction time for observation $k$ during session $j$ on individual $i$.
${\bf X}_{ijk}$ is the predictor vector for observation $k$ during session $j$ on individual $i$ (in the model you've written up, this is composed of all main effects and all interactions).
$\eta_i$ is the person $i$ random effect that induces correlation between observations made on the same person. $\theta_{ij}$ is the random effect for individual $i$'s session $j$ and $\varepsilon_{ijk}$ is the leftover error term.
${\boldsymbol \beta}$ is the regression coefficient vector.
As noted on pages 14-15 here, this model is correct for specifying that sessions are nested within individuals, which is the case from your description.
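To make the variance-component structure concrete, here is a small pure-Python simulation (illustrative component sizes, not the questioner's data) of the random-intercept model above, checking that the marginal variance of $Y$ decomposes into the person, session-within-person, and residual components:

```python
import random
import statistics

# Simulate Y_ijk = eta_i + theta_ij + eps_ijk (fixed effects set to zero
# for simplicity): 500 individuals, 4 sessions each, 5 observations per session.
random.seed(1)
s_eta, s_theta, s_eps = 1.0, 0.5, 0.8   # assumed SDs of the three components

y = []
for i in range(500):
    eta = random.gauss(0, s_eta)               # person-level random effect
    for j in range(4):
        theta = random.gauss(0, s_theta)       # session-within-person effect
        for k in range(5):
            y.append(eta + theta + random.gauss(0, s_eps))

# The marginal variance of Y is approximately the sum of the three
# component variances: 1.0 + 0.25 + 0.64 = 1.89.
total = statistics.variance(y)
expected = s_eta**2 + s_theta**2 + s_eps**2
```

Observations sharing the same `eta` (or the same `eta + theta`) are correlated, which is exactly what the nested random effects are there to capture.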
Beyond syntax, is this model appropriate for the above within-subject design?
I think this model is reasonable, as it does respect the nesting structure in the data and I do think that individual and session are reasonably envisioned as random effects, as this model asserts. You should look at the relationships between the predictors and the response with scatterplots, etc. to ensure that the linear predictor (${\bf X}_{ijk} {\boldsymbol \beta}$) is correctly specified. The other standard regression diagnostics should possibly be examined as well.
Should the full model specify all interactions of fixed effects, or only the ones that I am really interested in?
I think starting with such a heavily saturated model may not be a great idea, unless it makes sense substantively. As I said in a comment, this will tend to overfit your particular data set and may make your results less generalizable. Regarding model selection, if you do start with the completely saturated model and do backwards selection (which some people on this site, with good reason, object to) then you have to make sure to respect the hierarchy in the model. That is, if you eliminate a lower level interaction from the model, then you should also delete all higher level interactions involving that variable. For more discussion on that, see the linked thread.
I have not included the STIM factor in the model, which characterizes the specific stimulus type used in a trial, but which I am not interested in estimating in any way - should I specify that as a random factor given it has 123 levels and very few data points per stimulus type?
Admittedly not knowing anything about the application (so take this with a grain of salt), that sounds like a fixed effect, not a random effect. That is, the treatment type sounds like a variable that would correspond to a fixed shift in the mean response, not something that would induce correlation between subjects who had the same stimulus type. But, the fact that it's a 123 level factor makes it cumbersome to enter into the model. I suppose I'd want to know how large of an effect you'd expect this to have. Regardless of the size of the effect, it will not induce bias in your slope estimates since this is a linear model, but leaving it out may make your standard errors larger than they would otherwise be.
Can I do a t test if I have little to no variance in one group?
If you assume that the variances are the same for each group you can get a pooled variance estimate and work with it in constructing t tests for pairwise differences. But that would not be a good assumption unless all the variances were small and the one with all identical values was just a chance occurrence. If you cannot do that then you have no way to estimate the variance for that one group and cannot do the analysis of variance or any t test involving that group as one of the pairs being compared.
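As a sketch of the pooling idea (made-up numbers), the pooled two-sample t statistic can be computed directly; note that when one group is constant, the pooled variance estimate is driven entirely by the other group:

```python
import math
import statistics

def pooled_t(a, b):
    """Two-sample t statistic using a pooled variance estimate."""
    na, nb = len(a), len(b)
    # Pooled variance: weighted average of the two sample variances.
    sp2 = ((na - 1) * statistics.variance(a)
           + (nb - 1) * statistics.variance(b)) / (na + nb - 2)
    se = math.sqrt(sp2 * (1 / na + 1 / nb))
    return (statistics.mean(a) - statistics.mean(b)) / se

group1 = [4.1, 5.0, 4.6, 5.3, 4.8]
group2 = [5.0] * 5       # zero variance: every value identical
t = pooled_t(group1, group2)   # t ≈ -1.19
```

The pooled estimate is defensible only under the equal-variance assumption the answer describes; without it, the constant group contributes no usable information about its own variance.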
Can I do a t test if I have little to no variance in one group?
Here are a few observations to add to the existing answers.
I think it's important to think through conceptually why you are getting a group with zero variance.
Floor and ceiling effects
In my experience in psychology, this example comes up most often when there is a floor or ceiling on a scale, and you have some groups that fall in the middle of the scale and others who fall on the extreme. For example, if your dependent variable is proportion of items correct out of five questions, then you might find that your "smart" group gets 100% correct or that your "clinical" group gets 0% correct.
In this case:
You might want to fall back on ordinal non-parametric tests if you have no variance in one of your groups.
Although it may not help you after the fact, you may also want to think conceptually about whether a different measure that did not have floor or ceiling effects would have been better to use. In some cases it won't matter. For example, the point of the analysis may have been to show that one group could perform a task and another could not. In other cases, you may want to model individual differences in all groups, in which case you may need a scale that does not suffer from floor or ceiling effects.
Very small group size
Another case where you can get no group variance is where you have a group with a really small sample size (e.g., $n\lt5$), usually in combination with a dependent variable that is fairly discrete.
In this case, you may be more inclined to put the lack of variance down to chance, and proceed with a standard t-test.
Can I do a t test if I have little to no variance in one group?
A couple of years ago, I would have fully subscribed to @Michael Chernick's answer.
However, I realized recently that some implementations of the t-test are extremely robust to inequality of variances. In particular, in R the function t.test has a default parameter var.equal=FALSE, which means that it does not simply rely on a pooled estimate of the variance. Instead, it uses the Welch-Satterthwaite approximate degrees of freedom, which compensates for unequal variances.
Let's see an example.
set.seed(123)
x <- rnorm(100)
y <- rnorm(100, sd=0.00001)
# x and y have 0 mean, but very different variance.
t.test(x,y)
Welch Two Sample t-test
data: x and y
t = 0.9904, df = 99, p-value = 0.3244
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
-0.09071549 0.27152946
sample estimates:
mean of x mean of y
9.040591e-02 -1.075468e-06
You can see that R claims to perform Welch's t-test and not Student's t-test. Here the degrees of freedom are reported as 99, even though each sample has size 100, so the function essentially tests the first sample against the fixed value 0.
You can verify yourself that this implementation gives correct (i.e. uniform) p-values for two samples with very different variances.
Now, this was for a two-sample t-test. My own experience with ANOVA is that it is much more sensitive to inequality of variances. In that case, I fully agree with @Michael Chernick.
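The Welch-Satterthwaite degrees of freedom that R reports can be reproduced by hand. Below is a pure-Python stand-in for the R snippet above (Python's `random.seed(123)` does not reproduce R's `set.seed(123)`, so the exact numbers differ, but the degrees of freedom behave the same way):

```python
import math
import random
import statistics

random.seed(123)
x = [random.gauss(0, 1) for _ in range(100)]
y = [random.gauss(0, 1e-5) for _ in range(100)]

vx, vy = statistics.variance(x), statistics.variance(y)
nx, ny = len(x), len(y)

# Welch-Satterthwaite approximate degrees of freedom
num = (vx / nx + vy / ny) ** 2
den = (vx / nx) ** 2 / (nx - 1) + (vy / ny) ** 2 / (ny - 1)
df = num / den

# Welch t statistic (no pooled variance)
t = (statistics.mean(x) - statistics.mean(y)) / math.sqrt(vx / nx + vy / ny)
# With one variance essentially zero, df collapses to about nx - 1 = 99,
# matching the df = 99 in the R output.
```

This makes explicit why the test degenerates into comparing the first sample against a fixed value: the second group contributes almost nothing to either the standard error or the degrees of freedom.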
Can I do a t test if I have little to no variance in one group?
Under certain circumstances it may be possible to calculate an upper bound on what the variance for the population could be, and then use that variance in something such as a t-test with unequal variances.
For example, if you asked 10 randomly chosen students in a school of 100 students what is their favorite day in March and they all answered the 15th, you know that the largest variance you could possibly have for the student population is the variance for 10 values of 15, 45 values of 1, and 45 values of 31, which is 204.6364.
A larger variance should make detecting a difference more difficult, so a t-test using this upper bound on the variance would be conservative in detecting a difference. That means you could trust a significant difference resulting from a t-test using the upper bound on the variance, but if you did not find a significant difference, you wouldn't know much, because the test might still have detected a difference under some of the smaller variances that are possible.
Of course there may not be many situations where you can actually figure this out, but it might be possible.
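The arithmetic in the example above can be checked directly: the most spread-out population consistent with the observed sample puts the remaining 90 students at the extremes of March.

```python
import statistics

# 10 sampled students all answered "the 15th"; the worst case for spread
# places the other 90 students at the extremes of March: 45 at the 1st
# and 45 at the 31st.
days = [15] * 10 + [1] * 45 + [31] * 45
upper_bound = statistics.variance(days)   # sample variance, n - 1 denominator
print(round(upper_bound, 4))              # 204.6364
```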
Estimating demand elasticity econometrically
Unfortunately, the problem you will run into here is one of endogeneity. The observed p and q are market equilibrium prices and quantities, not "samples" of points along a demand curve. While occasionally you may get lucky, usually if you naively regress quantity on price, you will end up with a positive coefficient. According to economic theory, this doesn't make sense (unless you've actually stumbled upon a Giffen good), but what one typically observes are different levels of demand, and prices adjusting accordingly.
Therefore, to estimate accurate demand coefficients on price, it is typical to either estimate a system of equations (much as one solves a simple supply-demand problem in intro-micro), or to use instrumental variables. In either case, for identification, it requires some variables that affect only the supply side of the equation.
And finally, if you've solved all that such that you can overcome the endogeneity issues with your regression, remember that while taking logarithms is often a convenient way to do regressions, it implies a fundamental assumption about the model and the error terms, and if that doesn't fit the actual process generating the data, the "elasticity" you get will have the right units, but won't be the right number.
Estimating demand elasticity econometrically
I don't think you can interpret $\alpha$ as the price elasticity of demand unless you are willing to assume that supply is perfectly price inelastic. Take a look at some really good IO lecture notes if you want to proceed this way. You will need to instrument the price variable using supply-side variables that do not affect demand directly. This will give you variation in supply that leaves the demand curve unchanged. Input prices might be good candidates.
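As an illustration of why instrumenting matters (a toy model with made-up coefficients, not an estimate for any real market), the simulation below generates equilibrium prices and quantities from explicit supply and demand curves. Naive OLS of quantity on price is badly biased toward zero because demand shocks move both variables, while the simple instrumental-variables (Wald) estimator using the supply-side cost shifter recovers the true demand slope of -1:

```python
import random

random.seed(7)
n = 20000
p, q, z = [], [], []
for _ in range(n):
    zi = random.gauss(0, 1)        # supply-side cost shifter (the instrument)
    u = random.gauss(0, 2)         # demand shock
    v = random.gauss(0, 0.5)       # supply shock
    # Demand: q = 10 - 1.0*p + u ; Supply: q = 2 + 1.0*p - 2*z + v
    # Solving the two equations gives the market equilibrium:
    pi = 4 + zi + (u - v) / 2      # equilibrium price
    qi = 6 - zi + (u + v) / 2      # equilibrium quantity
    z.append(zi); p.append(pi); q.append(qi)

def cov(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    return sum((x - ma) * (y - mb) for x, y in zip(a, b)) / (len(a) - 1)

ols_slope = cov(p, q) / cov(p, p)   # biased: shocks move both p and q
iv_slope = cov(z, q) / cov(z, p)    # IV (Wald) estimate; close to -1
```

The instrument works because `z` shifts supply but enters demand only through price, which is exactly the exclusion restriction described above.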
Time series analysis in Python
The model you have there is called an Autoregressive Distributed Lag (ARDL) Model. To be specific,
\begin{equation}
y_t=ay_{t-1}+by_{t-2}+...+cy_{t-m}+dx_t+ex_{t-1}+...+fx_{t-n}
\end{equation}
can be called an ARDL(m,n) model and we can write the model in slightly more compact form as:
\begin{equation}
y_{t} = \delta + \sum_{i=1}^{m} \alpha_{i} y_{t-i} + \sum_{j=0}^{n} \beta_{j} x_{t-j} + u_{t}
\end{equation}
where $u_{t} \sim IID(0, \sigma^{2})~ \forall~ t$ and in this case $\delta = 0$.
The values of m and n do not have to be the same. That is, the lag length of the autoregressive term does not have to be equal to the lag length of the distributed lag term. Note also that it is possible to include a second (or more) distributed lag terms (for example, $z_{t-k}$).
There are different ways of choosing the lag lengths and for a treatment of this issue, I refer you to Chapter 17 of Damodar Gujarati and Dawn Porter's Basic Econometrics (5th ed).
To build a model like this in python, it might be worth checking out statsmodels.tsa as well as the other packages mentioned in the other answers.
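As a sketch of what fitting such a model involves (hypothetical coefficients, plain Python rather than statsmodels), one can simulate an ARDL(1,1) process with $\delta = 0$, build the lagged design matrix, and estimate the coefficients by ordinary least squares via the $3 \times 3$ normal equations:

```python
import random

# Simulated ARDL(1,1): y_t = a*y_{t-1} + d*x_t + e*x_{t-1} + u_t
random.seed(42)
a_true, d_true, e_true = 0.6, 0.8, 0.3
n = 5000
x, y = [0.0], [0.0]
for t in range(1, n):
    x.append(0.5 * x[-1] + random.gauss(0, 1))       # exogenous AR(1) regressor
    y.append(a_true * y[-1] + d_true * x[-1]
             + e_true * x[-2] + random.gauss(0, 0.5))

# Design-matrix rows [y_{t-1}, x_t, x_{t-1}] with target y_t
rows = [(y[t - 1], x[t], x[t - 1]) for t in range(1, n)]
target = y[1:]

# Normal equations: (X'X) beta = X'y, solved with Cramer's rule.
A = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
b = [sum(r[i] * yt for r, yt in zip(rows, target)) for i in range(3)]

def det3(m):
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

D = det3(A)
coefs = []
for i in range(3):
    Ai = [row[:] for row in A]
    for r in range(3):
        Ai[r][i] = b[r]
    coefs.append(det3(Ai) / D)
# coefs recovers approximately (0.6, 0.8, 0.3)
```

In practice a library routine (e.g. an OLS fitter) would replace the hand-rolled solver; the point here is that an ARDL model is just a regression on lagged copies of the series.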
Time series analysis in Python
This answer should likely be a comment because I am not addressing the first two questions, but it is too long...
You can do a lot of statistical work in Python these days, and with projects like statsmodels and pandas it is getting better and better. For time series analysis I think the best choice currently is using the PyIMSL package, which contains a good selection of functions all written in C for speed (and free for non-commercial use). Documentation can be found here. (Full disclosure, I used to work for Rogue Wave Software).
Now then, even though I use Python for most of my analytical work, for time series modeling I have turned to using the excellent forecast package in R by Rob Hyndman. It is hard to beat, especially for exploratory work.
Time series analysis in Python
There is also Tidal Analysis on SourceForge. Check this as well. Sometimes a free application has the one thing you need but misses a few other things that you really want.
http://sourceforge.net/projects/tappy/?source=directory
Time series analysis in Python
Well, first I suggest you search Google for a Python package to manipulate time series, like this one: http://statsmodels.sourceforge.net/.
On the other hand, if you MUST use Python (instead of R, for example), you can try an optimization approach to find the best model parameters, using the prediction error (MSE or RMSE) as the objective function.
On the other hand, if you MUST use python (instead of R | Time series analysis in Python
Well, at first I suggest you to search on Google a Python package to manipulate time-series, like this one http://statsmodels.sourceforge.net/.
On the other hand, if you MUST use python (instead of R, for example) you can try an optimization approach for find the best model parameters using as objective function the prediction error (MSE or RMSE). | Time series analysis in Python
Well, at first I suggest you to search on Google a Python package to manipulate time-series, like this one http://statsmodels.sourceforge.net/.
On the other hand, if you MUST use python (instead of R |
Does transpose commute through expectation?
The short answer is "yes", $E(x^T) = E(x)^T=\mu^T$. Your full expression will be:
$E[(x-\mu)(x-\mu)^T]=E(xx^T)-\mu E(x^T)-E(x)\mu^T+\mu\mu^T = E(xx^T)-\mu\mu^T$
The expectation operator doesn't care about the shape of the vector or matrix it operates on.
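A quick numeric sanity check of the identity, replacing expectations with sample moments over simulated 2-vectors (the identity holds exactly for sample moments too, since it is algebraic, so the two matrices agree up to floating-point rounding):

```python
import random

random.seed(0)
n = 20000
xs = []
for _ in range(n):
    a = random.gauss(1, 1)
    xs.append((a, 0.5 * a + random.gauss(-2, 1)))   # correlated components

mu = [sum(v[i] for v in xs) / n for i in range(2)]

# Left-hand side: average of (x - mu)(x - mu)^T
centered = [[sum((v[i] - mu[i]) * (v[j] - mu[j]) for v in xs) / n
             for j in range(2)] for i in range(2)]
# Right-hand side: average of x x^T minus mu mu^T
raw_minus = [[sum(v[i] * v[j] for v in xs) / n - mu[i] * mu[j]
              for j in range(2)] for i in range(2)]
# centered and raw_minus match entrywise
```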
$E[(x−μ)(x−μ)^T)]=E(xx^T)−μE(x^T)-E(x)\mu^T+\mu\mu^T = E(xx^T)-\mu\mu^T$
The expectation operator doesn't care about | Does transpose commute through expectation?
The short answer is "yes", $E(x^T) = E(x)^T=\mu^T$. Your full expression will be:
$E[(x−μ)(x−μ)^T)]=E(xx^T)−μE(x^T)-E(x)\mu^T+\mu\mu^T = E(xx^T)-\mu\mu^T$
The expectation operator doesn't care about the shape of the vector or matrix it operates on. | Does transpose commute through expectation?
The short answer is "yes", $E(x^T) = E(x)^T=\mu^T$. Your full expression will be:
$E[(x−μ)(x−μ)^T)]=E(xx^T)−μE(x^T)-E(x)\mu^T+\mu\mu^T = E(xx^T)-\mu\mu^T$
The expectation operator doesn't care about |
28,013 | Does transpose commute through expectation? | As the previous answer confirmed, the answer is yes. Actually, even complex transposing commutes through expectation, meaning:
$$\mathbb{E}[x^H] = \mathbb{E}[x]^H $$
28,014 | How to sum two variables that are on different scales? | A common practice is to standardize the two variables, $A,B$, to place them on the same scale by subtracting the sample mean and dividing by the sample standard deviation. Once you've done this, both variables will be on the same scale in the sense that they each have a sample mean of 0 and sample standard deviation of 1. Thus, they can be added without one variable having an undue influence due simply to scale.
That is, calculate
$$ \frac{ A - \overline{A} }{ {\rm SD}(A) }, \ \ \frac{ B - \overline{B} }{ {\rm SD}(B) } $$
where $\overline{A}, {\rm SD}(A)$ denote the sample mean and standard deviation of $A$, and similarly for $B$. The standardized versions of the variables are interpreted as the number of standard deviations above/below the mean a particular observation is.
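A minimal sketch of this standardization in Python (standard library only; the example numbers are made up for illustration):

```python
from statistics import fmean, stdev

def standardize(values):
    # z-scores: subtract the sample mean, divide by the sample SD
    m, s = fmean(values), stdev(values)
    return [(v - m) / s for v in values]

# Two variables on very different scales...
a = standardize([10.0, 20.0, 30.0])
b = standardize([1000.0, 3000.0, 5000.0])
# ...can now be summed without one dominating purely by scale.
total = [x + y for x, y in zip(a, b)]
```

After standardizing, both variables have mean 0 and standard deviation 1, so the sum weights them equally.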
28,015 | Children's statistical education in different countries? | Statistics education in the US is in flux, in no small part because we now expect even grade school students (ages 5-12) to become proficient not only with fundamental concepts of statistical thinking, but also with techniques of data summary and presentation that many of their teachers do not even know!
For an authoritative overview of efforts being made at both the K-12 and college levels, see the GAISE reports on the ASA Website. At a high level, these documents expect that all students graduating from U.S. high schools (age 18) will:
formulate questions that can be addressed with data and collect, organize, and display relevant data to answer them;
select and use appropriate statistical methods to analyze data;
develop and evaluate inferences and predictions that are based on data; and
understand and apply basic concepts of probability.
Notable, in my opinion, is an insistence that by virtue of "variability in data," there is an important "difference between statistics and mathematics." The aim is to "develop statistical thinking" in students as opposed to teaching techniques or algorithms alone.
For a college level approach, a good resource is CAUSEweb (Consortium for the Advancement of Undergraduate Statistics Education).
28,016 | Children's statistical education in different countries? | Good question.
For my answer, I'll talk about Ireland.
In Senior Cycle (16-18 years) students study very basic statistics: mean, histograms, standard deviation. Basic probability is covered (completely separately), as is calculus up to the level of integration by parts. Matrices (only 2*2) are an option on the Higher level paper, as is more statistics.
That being said, less than 20% of the school population take the higher course, so the other 80% do basic statistics, some differentiation and very basic probability.
28,017 | Children's statistical education in different countries? | To continue talking about Ireland, we have introduced a new course for teaching mathematics to secondary school (high school) students. It is known as "Project Maths". From the Project Maths website, it says:
Project Maths is an exciting, dynamic development in Irish education.
It makes maths relevant to the everyday lives of teenagers, and helps
them understand it better using interesting and practical techniques.
Students are empowered in developing essential problem-solving skills
for higher education and the workplace.
In terms of statistics (for students taking higher level), the topics covered are:
Collecting data
Quantitative data
Qualitative data
Surveys
Samples
Averages
Frequency distribution for discrete (countable) data
Mean, mode, median for discrete or continuous grouped frequency distributions
Variability of data
Standard deviation of a frequency distribution
Histograms
Frequency Curves
Distributions and shapes of histograms
Stem and leaf diagrams
Scatter plots (scatter graphs)
Correlation and causality
The normal curve and the standard deviation as a ruler
Shifting data (transforming data)
Standardizing scores
Hypothesis testing
Margin of error and confidence intervals for population proportions
Sampling theory (distribution of the sample mean)
Pitfalls and misuses of statistics
It is correctly pointed out by @richiemorrisroe that probability is treated as a separate chapter from statistics, but there is some overlap. Some of the topics in probability (again, for students taking higher level) are:
Normal distribution and probability
Finding areas under normal curves
Probability distributions (including the notion of r.v. and expected value)
Some people have spoken out against Project Maths (for example, see here), but talking in terms of the sections on statistics and probability, I think it's a rather good course.
Since I have come across university lectures (both undergrad and postgrad!) that cover some of the above topics in no greater depth than the material we are presenting to our secondary school (high school) students, my reading of the situation is that we must be doing something right in this area.
In my own experience, I am not a high-school teacher, but I do help friends of family with their mathematics - particularly leaving certificate higher level. For me, it's enjoyable to teach statistics and probability to these young students and I do believe that they are learning material that is both useful and of a significantly high standard.
If you seek any further information, please visit the Project Maths website or watch this video by the National Council for Curriculum and Assessment.
28,018 | How to calculate Standard Error of Odds Ratios? | You can calculate/approximate the standard errors via the p-values. First, convert the two-sided p-values into one-sided p-values by dividing them by 2. So you get $p = .0115$ and $p = .007$. Then convert these p-values to the corresponding z-values. For $p = .0115$, this is $z = -2.273$ and for $p = .007$, this is $z = -2.457$ (they are negative, since the odds ratios are below 1). These z-values are actually the test statistics calculated by taking the log of the odds ratios divided by the corresponding standard errors (i.e., $z = log(OR) / SE$). So, it follows that $SE = log(OR) / z$, which yields $SE = 0.071$ for the first and $SE = .038$ for the second study.
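The back-calculation just described can be sketched in a few lines of Python using only the standard library (the helper name is an invention for illustration; `NormalDist().inv_cdf` plays the role of R's `qnorm`):

```python
from math import log
from statistics import NormalDist

def se_from_p(odds_ratio, p_one_sided):
    # z-value corresponding to the one-sided p-value
    # (negative here, since both odds ratios are below 1)
    z = NormalDist().inv_cdf(p_one_sided)
    # z = log(OR) / SE  =>  SE = log(OR) / z
    return log(odds_ratio) / z

se1 = se_from_p(0.85, 0.0115)  # ~0.071
se2 = se_from_p(0.91, 0.007)   # ~0.038
```

These reproduce the standard errors used in the meta-analysis below.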
Now you have everything to do a meta-analysis. I'll illustrate how you can do the computations with R, using the metafor package:
library(metafor)
yi <- log(c(.85, .91)) ### the log odds ratios
sei <- c(0.071, .038) ### the corresponding standard errors
res <- rma(yi=yi, sei=sei) ### fit a random-effects model to these data
res
Random-Effects Model (k = 2; tau^2 estimator: REML)
tau^2 (estimate of total amount of heterogeneity): 0 (SE = 0.0046)
tau (sqrt of the estimate of total heterogeneity): 0
I^2 (% of total variability due to heterogeneity): 0.00%
H^2 (total variability / within-study variance): 1.00
Test for Heterogeneity:
Q(df = 1) = 0.7174, p-val = 0.3970
Model Results:
estimate se zval pval ci.lb ci.ub
-0.1095 0.0335 -3.2683 0.0011 -0.1752 -0.0438 **
Note that the meta-analysis is done using the log odds ratios. So, $-0.1095$ is the estimated pooled log odds ratio based on these two studies. Let's convert this back to an odds ratio:
predict(res, transf=exp, digits=2)
pred se ci.lb ci.ub cr.lb cr.ub
0.90 NA 0.84 0.96 0.84 0.96
So, the pooled odds ratio is .90 with 95% CI: .84 to .96.
28,019 | Can a probability distribution have infinite standard deviation? | To answer your question title: Yes, a probability distribution can have infinite standard deviation (see below).
Your example is a special case of the Cauchy distribution whose mean or variance does not exist. Set the location parameter to 0 and the scale to 1 for the Cauchy to get to your pdf.
28,020 | Can a probability distribution have infinite standard deviation? | The Cauchy distribution doesn't have a mean or variance, in that the integral doesn't converge to anything in $[-\infty,\infty]$. However, a distribution like $f(x)=\frac{2}{x^3}$ on $[1,\infty)$ has a mean, but the standard deviation is infinite.
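To spell out why (a short check using the density just given):
$$E[X] = \int_1^\infty x\cdot\frac{2}{x^3}\,dx = \int_1^\infty \frac{2}{x^2}\,dx = 2, \qquad E[X^2] = \int_1^\infty x^2\cdot\frac{2}{x^3}\,dx = \int_1^\infty \frac{2}{x}\,dx = \infty,$$
so the mean exists but the second moment, and hence the variance and standard deviation, do not.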
28,021 | What is AIC? Looking for a formal but intuitive answer | Let $f$ be your true distribution, and $g$ the family from which you are trying to fit your data. Then $\theta$, the maximum likelihood estimator of parameters of $g$, is a random variable. You could formulate model selection as finding the distribution family $g$ that minimizes the expected KL divergence between $f$ and $g(\theta)$, which can be written as
$$\text{Entropy}(f)-E_x E_y[\log(g(x|\theta(y)))]$$
Since you are minimizing over $g$, the Entropy($f$) term doesn't matter and you look for $g$ that maximizes $E_x E_y[\log(g(x|\theta(y)))]$.
Let $L(\theta(y)|y)$ be the likelihood of data $y$ according to $g(\theta)$. You could estimate $E_x E_y[\log(g(x|\theta(y)))]$ as $\log(L(\theta(y)|y))$ but that estimator is biased.
Akaike showed that when $f$ belongs to family $g$ with dimension $k$, the following estimator is asymptotically unbiased
$$\log(L(\theta(y)|y))-k$$
Burnham has more details in this paper; the blog post by Enes Makalic also has further explanation and references.
28,022 | What is AIC? Looking for a formal but intuitive answer | Basically one needs a loss function in order to optimize anything. AIC provides the loss function which, when minimized, gives an "optimal"* model which fits the given data. The AIC loss function (2k-2*log(L)) tries to formulate the bias-variance trade-off that every statistical modeler faces when fitting a model to a finite set of data.
In other words, while fitting a model, if you increase the number of parameters you will improve the log likelihood but will run into the danger of over-fitting. The AIC penalizes increasing the number of parameters; thus minimizing the AIC selects the model where the improvement in log likelihood is not worth the penalty for increasing the number of parameters.
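As a minimal illustration of the 2k-2*log(L) formula, here is a hedged Python sketch (standard library only) that computes the AIC of a normal model fit by maximum likelihood; the `gaussian_aic` helper and the toy data are inventions for illustration, not from the original answer:

```python
from math import log, pi
from statistics import fmean, pvariance

def gaussian_aic(data):
    # Normal model: k = 2 parameters (mean, variance), both fit by MLE.
    n = len(data)
    var = pvariance(data)  # pvariance is the (biased) MLE of the variance
    # Maximized log likelihood of an i.i.d. normal sample:
    # sum of log densities collapses to -n/2 * (log(2*pi*var) + 1) at the MLE.
    loglik = -0.5 * n * (log(2 * pi * var) + 1)
    return 2 * 2 - 2 * loglik

aic = gaussian_aic([1.0, 2.0, 3.0, 4.0])
```

Comparing such values across candidate model families then implements exactly the trade-off described above.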
Note that when I say optimal model it is optimal in the sense that the model minimizes the AIC. There are other criteria (e.g. BIC) which may give other "optimal" models.
I don't have any experience with Stata so cannot help you with the other part of the question.
28,023 | What is AIC? Looking for a formal but intuitive answer | It is a heuristic, and as such, has been subjected to extensive testing. So when to trust it or not is not a simple, clear-cut, always-true decision.
At a rough approximation, it trades off goodness of fit and number of variables ("degrees of freedom"). Much more, as usual, at the Wikipedia article about AIC.
At a rough approximation, it trades off goodness | What is AIC? Looking for a formal but intuitive answer
It is a heuristic, and as such, has been subjected to extensive testing. So when to trust it or not is not simple clear-cut and always-true decision.
At a rough approximation, it trades off goodness of fit and number of variables ("degrees of freedom"). Much more, as usual, at the Wikipedia article about AIC. | What is AIC? Looking for a formal but intuitive answer
It is a heuristic, and as such, has been subjected to extensive testing. So when to trust it or not is not simple clear-cut and always-true decision.
At a rough approximation, it trades off goodness |
28,024 | Distribution of argmax of beta-distributed random variables | When the $x_i$ are independent for $1\le i \le d$ with distribution functions $F_i$ and density functions $f_i,$ respectively, the chance that $x_j$ is the largest is (by the very definition of the distribution functions)
$$\begin{aligned}
\Pr(x_j=\max(x_i,i\in\mathcal I)) &= \Pr(x_1 \le x_j, x_2 \le x_j, \ldots, x_d\le x_j) \\
&= E\left[F_1(x_j)F_2(x_j)\cdots F_{j-1}(x_j)(1) F_{j+1}(x_j) \cdots F_d(x_j)\right] \\
&= \int_{\mathbb{R}}\left[F_1(x_j)\cdots F_{j-1}(x_j)\ F_{j+1}(x_j) \cdots F_d(x_j)\right]f_j(x_j)\,\mathrm{d}x_j.
\end{aligned} $$
Provided none of the $\alpha_i$ and $\beta_i$ are really tiny, this is straightforward to obtain through numerical integration, as shown in the R function beta.argmax below. (When there is some possibility of tiny values, fussier code will be needed because the regions of highest density may have density values that overflow double precision arithmetic. As a practical matter, "tiny" means closer to $0$ than to $1.$)
As an example of its use, I generated $d=8$ values for $\alpha_i$ and $\beta_i.$
d <- 8
set.seed(17)
alpha <- rexp(d) + 0.1
beta <- rexp(d) + 0.1
I then computed the probability distribution and double-checked it with a simulation of 100,000 iterations:
p <- beta.argmax(alpha, beta, stop.on.error=FALSE) # The calculation
x <- matrix(rbeta(d * 1e5, alpha, beta), nrow=d) # The simulated x_j
p.hat <- tabulate(apply(x, 2, which.max), nbins=d) # Summary of the argmaxes
(signif(rbind(Calculated=p, Simulated=p.hat/sum(p.hat)), 3)) # Comparison
chisq.test(p.hat, p=p) # Formal comparison
The output is
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
Calculated 0.0247 0.0218 0.00230 0.124 0.451 0.0318 0.0341 0.311
Simulated 0.0245 0.0217 0.00225 0.125 0.451 0.0312 0.0346 0.311
Chi-squared test for given probabilities
data: p.hat
X-squared = 2.468, df = 7, p-value = 0.9295
The agreement between calculation and simulation, shown in the first array, is excellent, as confirmed by the chi-squared test that follows it.
I did other tests with $d$ as large as $200,$ keeping all $\alpha_i$ and $\beta_i$ above $0.5,$ and the results have been consistent with the calculations. For even larger values of $d$ the results worsen, indicating numerical problems. (I tested up to $d=500.$) These were cured (at some cost in computation time, which reached one minute) by improving the error tolerances in the numerical integration.
Here is the code.
beta.argmax <- function(alpha, beta, ...) {
lower <- min(qbeta(1e-9, alpha, beta))
upper <- max(qbeta(1-1e-9, alpha, beta))
p <- rep(NA_real_, length(alpha))
for (i in seq_along(p)) {
ff <- function(x) dbeta(x, alpha[i], beta[i], log=TRUE)
f <- Vectorize(function(x) sum(pbeta(x, alpha[-i], beta[-i], log.p=TRUE)))
h <- function(x) exp(ff(x) + f(x))
p[i] <- integrate(h, lower, upper, ...)$value
}
cat(sum(p), "\n") # Optional check: see how close to 1.000000 the sum is
p / sum(p)
}
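For readers without R, the same simulation double-check can be sketched with nothing but Python's standard library (`random.betavariate`); the shape parameters below are arbitrary illustrative choices, not the ones generated above:

```python
import random

random.seed(17)
alpha = [2.0, 2.0, 5.0]  # Beta shape parameters, one pair per component
beta = [5.0, 2.0, 2.0]
n_sims = 20000

counts = [0] * len(alpha)
for _ in range(n_sims):
    draws = [random.betavariate(a, b) for a, b in zip(alpha, beta)]
    counts[max(range(len(draws)), key=draws.__getitem__)] += 1

# Empirical Pr(x_j is the argmax); compare against the numerical integration.
p_hat = [c / n_sims for c in counts]
```

With these parameters the third component, Beta(5, 2), has by far the largest mean, so it should win the argmax most often.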
28,025 | In the most basic sense, what is marginal likelihood? | In Bayesian statistics, the marginal likelihood
$$m(x) = \int_\Theta f(x|\theta)\pi(\theta)\,\text d\theta$$
where
$x$ is the sample
$f(x|\theta)$ is the sampling density, which is proportional to the model likelihood
$\pi(\theta)$ is the prior density
is a misnomer in that
it is not a likelihood function [as a function of the parameter], since the parameter is integrated out (i.e., the likelihood function is averaged against the prior measure),
it is a density in the observations, the predictive density of the sample,
it is not defined up to a multiplicative constant,
it does not solely depend on sufficient statistics
Other names for $m(x)$ are evidence, prior predictive, partition function. It has however several important roles:
this is the normalising constant of the posterior distribution$$\pi(\theta|x) = \dfrac{f(x|\theta)\pi(\theta)}{m(x)}$$
in model comparison, this is the contribution of the data to the posterior probability of the associated model and the numerator or denominator in the Bayes factor.
it is a measure of goodness-of-fit (of a model to the data $x$), in that $-2\log m(x)$ is asymptotically the BIC (Bayesian information criterion) of Schwarz (1978).
See also
Normalizing constant in Bayes theorem
Normalizing constant irrelevant in Bayes theorem?
Intuition of Bayesian normalizing constant
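In conjugate settings the integral defining $m(x)$ has a closed form, which makes for a handy sanity check. The sketch below (my own illustration, not part of the answer above) takes a binomial observation of $k$ successes out of $n$ with a $\text{Beta}(a, b)$ prior, for which $m(k) = \binom{n}{k} B(a+k,\, b+n-k)/B(a, b)$, and compares the closed form with a direct numerical integration of $f(x|\theta)\pi(\theta)$:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    # log of the Beta function B(a, b)
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def marginal_likelihood(k, n, a, b):
    # Closed form: m(k) = C(n, k) * B(a + k, b + n - k) / B(a, b)
    return comb(n, k) * exp(log_beta(a + k, b + n - k) - log_beta(a, b))

def marginal_likelihood_numeric(k, n, a, b, steps=100_000):
    # Midpoint-rule approximation of the integral of f(k | theta) * pi(theta)
    total = 0.0
    for i in range(steps):
        t = (i + 0.5) / steps
        prior = t ** (a - 1) * (1 - t) ** (b - 1) / exp(log_beta(a, b))
        lik = comb(n, k) * t ** k * (1 - t) ** (n - k)
        total += lik * prior / steps
    return total

m_exact = marginal_likelihood(k=7, n=10, a=2.0, b=2.0)
m_num = marginal_likelihood_numeric(k=7, n=10, a=2.0, b=2.0)
print(m_exact, m_num)  # the two agree closely (about 0.112 here)
```

Since $m(x)$ is a genuine density in the observations, summing `marginal_likelihood(k, 10, 2.0, 2.0)` over $k = 0, \dots, 10$ returns 1, illustrating the "predictive density of the sample" point above.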
28,026 | In the most basic sense, what is marginal likelihood? | Although the OP clearly refers to the Bayesian framework, it may be
worth mentioning that the expression marginal likelihood has long been
used outside of the Bayesian framework, with a different
meaning. This frequentist concept is described in several famous
books, especially in those by D.R. Cox and coauthors. A modern
presentation is to be found in the book Statistical Models by
A.C. Davison (Chap. 12) which inspired this answer. The book
Principles of Statistical Inference by D.R. Cox discusses both the
frequentist and the Bayesian concepts.
Consider a random vector $\mathbf{Y}$ of observations with
distribution depending on a vector of parameters
$\boldsymbol{\theta}$. If $\mathbf{Y}$ splits into two sub-vectors,
say $\mathbf{V}$ and $\mathbf{W}$, the likelihood is
$$
L(\boldsymbol{\theta};\,\mathbf{y}) =
f_{\mathbf{Y}}(\mathbf{y};\,\boldsymbol{\theta}) =
f_{\mathbf{V}}(\mathbf{v}; \boldsymbol{\theta}) \,
f_{\mathbf{W} \vert \mathbf{V}}(\mathbf{w} \, \vert\, \mathbf{v}; \,
\boldsymbol{\theta}). \tag{1}
$$
If instead the couple $[\mathbf{V}, \mathbf{W}]$ is a sufficient
statistic for $\boldsymbol{\theta}$, the same form holds, up to a
multiplicative constant (w.r.t. $\boldsymbol{\theta}$). It may happen
that the information on some specific component of interest in
$\boldsymbol{\theta}$ is mainly conveyed by one of the two factors of
the product on which we can therefore focus.
As an example consider
$\mathbf{Y} \sim_{\text{i.i.d.}} \text{Norm}(\mu,\, \sigma^2)$, so that
$\boldsymbol{\theta} := [\mu,\,\sigma^2]$, and
assume that we are interested in the inference on $\sigma^2$ only.
It can be checked that
$$
f_{\mathbf{Y}}(\mathbf{y}; \, \sigma^2,\,\mu) =
f_{S^2} (s^2; \, \sigma^2) \,
f_{\bar{Y}} (\bar{y}; \, \sigma^2,\, \mu) \,
f_{\mathbf{Y}\vert \bar{Y},S^2}(\mathbf{y} \, \vert \, \bar{y}, \, s^2).
$$
At the right-hand side the third term does not depend on the
parameter: in other words, the
sample mean $\bar{Y}$ and the sample variance $S^2$ are jointly
sufficient statistics for $\boldsymbol{\theta}$.
It can be anticipated that the information on
$\sigma^2$ in the likelihood is conveyed by the first term since the sample mean
$\bar{Y}$ does not tell much about $\sigma^2$. So the inference on $\sigma^2$ can be based on $f_{S^2} (s^2;
\, \sigma^2)$ called the/a marginal likelihood for this specific case.
More generally, the form (1) can have a special importance when the
parameter vector $\boldsymbol{\theta} = [\boldsymbol{\psi},\,
\boldsymbol{\lambda}]$ splits into two parts: a parameter of interest
$\boldsymbol{\psi}$ and a nuisance parameter
$\boldsymbol{\lambda}$. We can call marginal likelihood a function
which only depends on $\boldsymbol{\psi}$ and on the observations
available, and which extracts most of the information on
$\boldsymbol{\psi}$ that can be retrieved from the observations. If
the joint density can be factored as
$$
f_\mathbf{Y}(\mathbf{y}; \boldsymbol{\psi},\, \boldsymbol{\lambda})
= f_{\mathbf{V}}(\mathbf{v}; \boldsymbol{\psi}) \,
f_{\mathbf{W} \vert \mathbf{V}}(\mathbf{w} \, \vert\, \mathbf{v}; \,
\boldsymbol{\psi}, \,\boldsymbol{\lambda}), \tag{M}
$$
then we can ignore the second term at the r.h.s. and use the following
marginal likelihood (for $\boldsymbol{\psi}$)
$L_{\text{M}}(\boldsymbol{\psi};\, \mathbf{y}) :=
f_{\mathbf{V}}(\mathbf{v}; \boldsymbol{\psi})$.
A different form of factorisation that can be used is
$$
f_\mathbf{Y}(\mathbf{y}; \boldsymbol{\psi},\, \boldsymbol{\lambda})
= f_{\mathbf{V}}(\mathbf{v}; \boldsymbol{\psi},\, \boldsymbol{\lambda}) \,
f_{\mathbf{W} \vert \mathbf{V}}(\mathbf{w} \, \vert \, \mathbf{v}; \, \boldsymbol{\psi}).
\tag{C}
$$
In this case we may base the inference on $\boldsymbol{\psi}$ on the
so-called conditional likelihood
$L_{\text{C}}(\boldsymbol{\psi};\, \mathbf{y}) := f_{\mathbf{W} \vert
\mathbf{V}}(\mathbf{w} \, \vert \, \mathbf{v}; \,
\boldsymbol{\psi})$. So, while in (M) the parameter of interest is
isolated in the marginal part of the factorisation, in (C) it is
isolated in the conditional part. A typical example is provided by
regression where the inference is usually conditional on the vector of
covariates.
So in its usual frequentist meaning, a marginal likelihood is a
function of the parameter of interest $\psi$ that can be used as a
likelihood to infer on $\psi$ disregarding the nuisance parameter. A
striking point is that in some cases the inference based on the
marginal likelihood is better than that based on the profile
likelihood, which is mainly asymptotic. This is the case in the
example above, which generalises to linear regression.
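The $\sigma^2$ example above is easy to check numerically. The sketch below (my own code and function names, not taken from the cited books) evaluates the marginal log-likelihood $\log f_{S^2}(s^2;\,\sigma^2)$ using the fact that $(n-1)S^2/\sigma^2 \sim \chi^2_{n-1}$, then maximizes it over a grid. The maximizer coincides with the unbiased divisor-$(n-1)$ sample variance, whereas maximizing the full likelihood jointly in $(\mu, \sigma^2)$ would give the divisor-$n$ estimate:

```python
import math
import random

def marginal_loglik(sigma2, s2, n):
    # log f_{S^2}(s^2; sigma2): (n-1) S^2 / sigma2 follows a chi-square with n-1 df
    m = n - 1
    u = m * s2 / sigma2
    log_chi2 = (m / 2 - 1) * math.log(u) - u / 2 - (m / 2) * math.log(2) - math.lgamma(m / 2)
    return log_chi2 + math.log(m / sigma2)  # Jacobian of the change of variable u -> s^2

random.seed(1)
n = 20
y = [random.gauss(5.0, 2.0) for _ in range(n)]
ybar = sum(y) / n
s2 = sum((v - ybar) ** 2 for v in y) / (n - 1)  # unbiased sample variance

# Grid-maximize the marginal likelihood over sigma2
grid = [0.01 * i for i in range(1, 2001)]
sigma2_hat = max(grid, key=lambda v: marginal_loglik(v, s2, n))
print(sigma2_hat, s2)  # the two agree up to the grid resolution
```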
28,027 | In the most basic sense, what is marginal likelihood? | In my mind the most intuitive role of marginal likelihood is indeed as a normalization factor. I'll elaborate more on this.
Bayes' theorem is: $$ P(\phi|X) = \frac{P(X|\phi)P(\phi)}{P(X)} $$
Now let's explore the numerator components:
$$P(X|\phi)P(\phi)$$ This is the prior multiplied by the likelihood at a specific parameter - how well can we explain the data using this specific $\phi$ parameter.
$$P(X)=\int_{\theta}P(X|\theta)P(\theta)\mathrm{d}\theta$$ The marginal likelihood - How well we can explain the data using all the parameters, weighted by the prior.
The ratio between them gives the proportional share of this specific parameter $\phi$. This makes the numerator, summed over the different parameters, equal to 1, constructing a probability distribution. Thus the posterior, the ratio between the numerator and denominator, represents the probability distribution over the parameters.
If for example a specific parameter $\phi$ is very good at explaining the data compared to the other weights, then this parameter will have higher probability.
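The normalization argument can be made concrete on a discrete grid of parameter values. In the toy example below (my own sketch), each candidate $\phi$ is scored by prior times likelihood, and dividing by their sum, the discrete analogue of the marginal likelihood, turns the scores into a probability distribution:

```python
import math

# Coin-flip data: 7 heads in 10 tosses; a discrete grid of candidate phi values
heads, n = 7, 10
grid = [i / 100 for i in range(1, 100)]

def likelihood(phi):
    return math.comb(n, heads) * phi ** heads * (1 - phi) ** (n - heads)

prior = 1 / len(grid)                              # uniform prior over the grid
numerator = [likelihood(phi) * prior for phi in grid]
marginal = sum(numerator)                          # P(X): fit averaged over all parameters
posterior = [v / marginal for v in numerator]

print(sum(posterior))                              # 1.0: the normalization worked
print(grid[posterior.index(max(posterior))])       # 0.7: best-explaining phi gets the most mass
```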
28,028 | Does the "divide by 4 rule" give the upper bound marginal effect? | I think it's a typo.
The derivative of the logistic curve with respect to $x$ is:
$$
\frac{\beta\mathrm{e}^{\alpha + \beta x}}{\left(1 + \mathrm{e}^{\alpha + \beta x}\right)^{2}}
$$
So for their example where $\alpha = -1.40, \beta = 0.33$ it is:
$$
\frac{0.33\mathrm{e}^{-1.40 + 0.33 x}}{\left(1 + \mathrm{e}^{-1.40 + 0.33 x}\right)^{2}}
$$
Evaluated at the mean $\bar{x}=3.1$ gives:
$$
\frac{0.33\mathrm{e}^{-1.40 + 0.33 \cdot 3.1}}{\left(1 + \mathrm{e}^{-1.40 + 0.33\cdot 3.1}\right)^{2}} = 0.0796367
$$
This result is very close to the maximum slope of $0.33/4 = 0.0825$ which is attained at $x=-\frac{\alpha}{\beta}=4.24$, supporting their claim.
On page 82, they write that $0.33\mathrm{e}^{-0.39}/\left(1+\mathrm{e}^{-0.39}\right)^{2} = 0.13$.
But $0.33\mathrm{e}^{-0.39}/\left(1+\mathrm{e}^{-0.39}\right)^{2}\neq 0.13$. Instead, it's around $0.08$, as shown above.
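The numbers are easy to reproduce. The helper below is a quick sketch (the function name is mine) of the same derivative, evaluated both at the mean and at the maximizing point $x = -\alpha/\beta$:

```python
import math

def logistic_slope(alpha, beta, x):
    # derivative of invlogit(alpha + beta * x) with respect to x
    e = math.exp(alpha + beta * x)
    return beta * e / (1 + e) ** 2

alpha, beta = -1.40, 0.33
at_mean = logistic_slope(alpha, beta, 3.1)            # about 0.0796, not 0.13
at_max = logistic_slope(alpha, beta, -alpha / beta)   # beta / 4 = 0.0825
print(round(at_mean, 7), round(at_max, 4))
```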
28,029 | Does the "divide by 4 rule" give the upper bound marginal effect? | For a continuous variable $x$, the marginal effect of $x$ in a logit model is
$$\Lambda(\alpha + \beta x)\cdot \left[1-\Lambda(\alpha + \beta x)\right]\cdot\beta = p \cdot (1 - p) \cdot \beta,$$ where the inverse logit function $\Lambda$ is
$$\Lambda(z)=\frac{\exp{z}}{1+\exp{z}}.$$
Here $p$ is a probability, so the factor $p\cdot (1-p)$ is maximized when $p=0.5$ at $0.25$, which is where the $\frac{1}{4}$ comes from. Multiplying by the coefficient gives you the upper bound on the marginal effect. Here it is
$$0.25\cdot0.33 =0.0825.$$
Calculating the marginal effect at the mean income yields,
$$\mathbf{invlogit}(-1.40 + 0.33 \cdot 3.1)\cdot \left(1-\mathbf{invlogit}(-1.40 + 0.33 \cdot3.1)\right)\cdot 0.33 = 0.07963666$$
These are pretty close, with the approximate maximum marginal effect bounding the marginal effect at the mean.
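A short sweep over $p$ (my own sketch) confirms that the factor $p \cdot (1-p)$ peaks at $0.25$ when $p = 0.5$, which is exactly where the divide-by-4 bound comes from:

```python
# Find the p in (0, 1) that maximizes p * (1 - p)
beta = 0.33
best_p = max((i / 1000 for i in range(1, 1000)), key=lambda p: p * (1 - p))
upper_bound = best_p * (1 - best_p) * beta
print(best_p, upper_bound)  # 0.5 and the 0.0825 bound on the marginal effect
```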
28,030 | Dummy/baseline models for time series forecasting | I think it makes sense to first compare the model performance to a set of "trivial" models.
This is unspeakably true. This is the point where I upvoted your question.
The excellent free online book Forecasting: Principles and Practice (2nd ed.) by Athanasopoulos & Hyndman gives a number of very simple methods which are often surprisingly hard to beat:
The overall historical average
The random walk or naive forecast, i.e., the last observation
The seasonal random walk or seasonal naive or naive2 forecast, i.e., the observation from one seasonal cycle back
The random walk with a drift term, i.e., extrapolating from the last observation out with the overall average trend between the first and the last observation
These and similar methods are also used as benchmarks in academic forecasting research. If your newfangled method can't consistently beat the historical average, it's probably not all that hot.
I am not aware of any Python implementation, but that should not be overly hard.
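For what it is worth, the four benchmarks take only a few lines each. The sketch below is my own (function names and interface are not from the book):

```python
def historical_average(y, h):
    # forecast h steps ahead with the overall mean of the history
    return [sum(y) / len(y)] * h

def naive(y, h):
    # random walk forecast: repeat the last observation
    return [y[-1]] * h

def seasonal_naive(y, h, m):
    # repeat the last full seasonal cycle of length m
    return [y[len(y) - m + (i % m)] for i in range(h)]

def drift(y, h):
    # extrapolate the average trend between the first and last observation
    slope = (y[-1] - y[0]) / (len(y) - 1)
    return [y[-1] + slope * (i + 1) for i in range(h)]

y = [10, 12, 14, 13, 15, 17, 19, 18]
print(historical_average(y, 2))  # [14.75, 14.75]
print(naive(y, 2))               # [18, 18]
print(seasonal_naive(y, 2, 4))   # [15, 17]
print(drift(y, 2))               # last value plus 8/7 per step
```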
28,031 | Dummy/baseline models for time series forecasting | Adding to the previous answer by Stephan Kolassa: we're developing a Python toolbox for forecasting and have implemented a "naïve forecaster" class for that purpose. So with sktime, you could for example run:
import numpy as np
from sktime.datasets import load_airline
from sktime.forecasting.model_selection import temporal_train_test_split
from sktime.performance_metrics.forecasting import smape_loss
from sktime.forecasting.naive import NaiveForecaster
y = load_airline() # time series data
y_train, y_test = temporal_train_test_split(y)
fh = np.arange(1, len(y_test) + 1) # forecasting horizon
forecaster = NaiveForecaster(strategy="last") # random walk
forecaster.fit(y_train)
y_pred = forecaster.predict(fh)
print(smape_loss(y_test, y_pred))
28,032 | How to interpret Bayesian (posterior predictive) p-value of 0.5? | The model is true if the data are generated according to the model you are doing inference with. In other words, the unobserved parameter is generated by the prior, and then, using that parameter draw, your observed data are generated by the likelihood. This is not the setup where you consider multiple models $M_1, M_2, \ldots$ and have discrete probability distributions describing the model uncertainty.
A posterior predictive "p-value" of .5 means your test statistic $T(y)$ will be exactly equal to the median of the posterior predictive distribution of $T(y^{\text{rep}})$. Generally, this distribution and its median are obtained by looking at simulated data. Roughly speaking, this tells us that predictions (i.e. $T(y^{\text{rep}})$) "look like" our real data $T(y)$. If our model's predictions are "biased" to be too high, then we will get a number greater than $.5$, and if they are generally on the low side, we will get a number less than $.5$.
The posterior predictive distribution is
\begin{align*}
p(y^{\text{rep}} \mid y) &= \int p(y^{\text{rep}},\theta \mid y) d\theta\\
&= \int p(y^{\text{rep}}\mid \theta, y)p(\theta \mid y) d\theta\\
&= \int \underbrace{p(y^{\text{rep}} \mid \theta)}_{\text{model}}\underbrace{p(\theta \mid y)}_{\text{posterior}} d\theta \\
&= E_{\theta \mid y}\left[p(y^{\text{rep}} \mid \theta) \right].
\end{align*}
Then you take this distribution and integrate over the region where
$T(y^{\text{rep}})$ is greater than some calculated nonrandom statistic of the dataset $T(y)$.
$$
P(T(y^{\text{rep}}) > T(y) \mid y) = \int_{\{T(y^{\text{rep}}) : T(y^{\text{rep}}) > T(y) \}} p(y^{\text{rep}} \mid y) dy^{\text{rep}}.
$$
In practice, if computing the above integral is too difficult, this means drawing parameters from the posterior, and then, using these parameters, simulating many $y^{\text{rep}}$s. For each simulated data set (of the same size as your original/real data set), you calculate $T(y^{\text{rep}})$. Then you calculate what percent of these simulated values are above your single $T(y)$ coming from your real data set.
For more information, see this thread: What are posterior predictive checks and what makes them useful?
Because you are assuming there is no model uncertainty, $p(y^{\text{rep}} \mid y)$ is an integral over the parameter space; not the parameter space AND the model space. | How to interpret Bayesian (posterior predictive) p-value of 0.5? | The model is true if the data are generated according to the model you are doing inference with. In other words, the unobserved parameter is generated by the prior, and then, using that parameter draw | How to interpret Bayesian (posterior predictive) p-value of 0.5?
The model is true if the data are generated according to the model you are doing inference with. In other words, the unobserved parameter is generated by the prior, and then, using that parameter draw, your observed data are generated by the likelihood. This is not the setup where you consider multiple models $M_1, M_2, \ldots$ and have discrete probability distributions describing the model uncertainty.
A posterior predictive "p-value" of .5 means your test statistic $T(y)$ will be exactly equal to the median of the posterior predictive distribution of $T(y^{\text{rep}})$. Generally, this distribution and its median are obtained by looking at simulated data. Roughly speaking, this tells us that predictions (i.e. $T(y^{\text{rep}})$) "look like" our real data $T(y)$. If our model's predictions are "biased" to be too high, then we will get a number greater than $.5$, and if they are generally on the low side, we will get a number less than $.5$.
The posterior predictive distribution is
\begin{align*}
p(y^{\text{rep}} \mid y) &= \int p(y^{\text{rep}},\theta \mid y) d\theta\\
&= \int p(y^{\text{rep}}\mid \theta, y)p(\theta \mid y) d\theta\\
&= \int \underbrace{p(y^{\text{rep}} \mid \theta)}_{\text{model}}\underbrace{p(\theta \mid y)}_{\text{posterior}} d\theta \\
&= E_{\theta \mid y}\left[p(y^{\text{rep}} \mid \theta) \right].
\end{align*}
Then you take this distribution and integrate over the region where
$T(y^{\text{rep}})$ is greater than some calculated nonrandom statistic of the dataset $T(y)$.
$$
P(T(y^{\text{rep}}) > T(y) \mid y) = \int_{\{T(y^{\text{rep}}) : T(y^{\text{rep}}) > T(y) \}} p(y^{\text{rep}} \mid y) dy^{\text{rep}}.
$$
In practice, if computing the above integral is too difficult, this means drawing parameters from the posterior, and then, using these parameters, simulating many $y^{\text{rep}}$s. For each simulated data set (of the same size as your original/real data set), you calculate $T(y^{\text{rep}})$. Then you calculate what percent of these simulated values are above your single $T(y)$ coming from your real data set.
For more information, see this thread: What are posterior predictive checks and what makes them useful?
Because you are assuming there is no model uncertainty, $p(y^{\text{rep}} \mid y)$ is an integral over the parameter space; not the parameter space AND the model space.

How to interpret Bayesian (posterior predictive) p-value of 0.5?
I would recommend reading the underlying papers that this paper is derived from, as the terminology doesn't appear to have become standard in the field. The original paper is by Rubin, but Gelman is writing from Meng.
Meng, X. (1994). Posterior Predictive p-Values. The Annals of Statistics, 22(3), 1142-1160.
As to your questions:
I am trying to interpret what is meant when he says 'model is true'. My questions are:
i) Statistically, what is a "true model" as said in the quote above?
ii) What does a value of 0.5 mean in simple words?
So there is some unfortunate language usage as p-values are a Frequentist idea and Bayesian methods do not have p-values. Nonetheless, within the context of the articles beginning with Rubin, we can discuss the idea of a Bayesian p-value in a narrow sense.
As to question one, Bayesian models do not falsify a null hypothesis. In fact, except where some method is intending to mimic Frequentist methods, as in this paper, the phrasing "null hypothesis" is rarely used. Instead, Bayesian methods are generative methods and are usually constructed from a different set of axioms.
The easiest way to approach your question is from Cox's axioms.
Cox, R. T. (1961). The Algebra of Probable Inference. Baltimore, MD: Johns Hopkins University Press.
Cox's first axiom is that the plausibility of a proposition is a real number that varies with the information related to the proposition. Notice the word probability wasn't used, as this also allows us to think in terms of odds or other mechanisms. This varies very strongly from null hypothesis methods. To see an example, consider binary hypotheses $H_1,H_2$, which in Frequentist methods will be denoted $H_0,H_A$. What is the difference?
$H_0$ is conditioned to be true and the p-value tests the probability of observing the sample, given the null is true. It does not test if the null is actually true or false and $H_A$ has no form of probability statement attached to it. So, if $p<.05$, this does not imply that $\Pr(H_A)>.95$.
In the Bayesian framework, each proposition has a probability assigned to it so that if $H_1:\mu\le{k}$ and $H_2:\mu>k$, then it follows that if $\Pr(H_1)=.7327$ then $\Pr(H_2)=.2673$.
The true model is the model that generated the data in nature. This varies from the Frequentist method which depends only on the sampling distribution of the statistic, generally.
As to question two, Gelman is responding to Meng. He was pointing out that, in a broad variety of circumstances, if the null hypothesis is true then the posterior predictive p-value will cluster around .5 when you average over the sample space. He provides a case where this is useful and one where it is a problem. However, the hint as to why comes from the examples, particularly the first.
The first has been constructed so that there is great prior uncertainty, and this use of a nearly uninformative prior propagates through to the predictive distribution in such a way that, almost regardless of your sample, Rubin and Meng's posterior predictive p-values will be near 50%. In this case, it would mean that it would tell you there is a 50% chance the null is true, which is highly undesirable since you would rather be near 100%, or in the case of falsehood, 0%.
The idea of Bayesian posterior p-values is the observation that since you are now treating the sample space as random, rather than the parameter space, the rough interpretation of a Bayesian posterior probability is remarkably close to the Frequentist p-value. It is problematic because the model is not considered a parameter in itself, and has no prior probability as would be the case in a test of many different models. The model, $H_A$ versus $H_B$, is implicit.
This article is a warning of something that should, in a sense, be obvious. Imagine you had fifty million data points and there was no ambiguity as to the location of the parameter, then you would be stunned if the resulting predictive distribution was a bad estimator over the sample space. Now consider a model where the results are ambiguous and the prior was at best weakly informative, then even if the model is true, it would be surprising to get a clear result from the posterior predictive distribution.
He provides an example where data is drawn from a population that has a standard normal distribution. The required sample would have to be 28,000 to get a rejection of the model. In a standard normal population, that will never happen.
The issue is about the propagation of uncertainty, and whether or not Rubin/Meng's idea generates a useful construct when it is needed most, namely when the data is poor, small, weak or ambiguous, as opposed to samples that are stunningly clear. As an out-of-sample test tool, its sampling properties are undesirable in some circumstances, but desirable in others.
In this case, what Gelman is saying is that regardless of the true probability of the model, the out-of-sample validation score provided by the Bayesian posterior predictive p-value will be near 50% when the null is true when the data doesn't clearly point to a solution.
This has led to the criticism that the idea is uncalibrated with respect to the true probabilities. See
Bayarri, M. J. and Berger, J. (2000). P-values for composite null models. Journal of the American Statistical Association, 95, 1127–1142.

How to interpret Bayesian (posterior predictive) p-value of 0.5?
On question (ii), there is much more to be said. Indeed, there are other kinds of "Bayesian" GOF p-values that have a uniform distribution when the model used for inference was used to generate the data, which means that such p-values should not be compared to 0.5 but to 0 and 1. The simplest of these is the sampled posterior GOF p-value. See the following references:
Robins JM, van der Vaart A, Ventura V (2000) Asymptotic distribution of P values in composite null models. J Am Stat Assoc 95: 1143–56.
Johnson VE (2007) Bayesian Model Assessment Using Pivotal Quantities. Bayesian Anal 2: 719–34.
Gosselin F. (2011) A New Calibrated Bayesian Internal Goodness-of-Fit Method: Sampled Posterior p-Values as Simple and General p-Values That Allow Double Use of the Data. Plos One. https://doi.org/10.1371/journal.pone.0014770

How to use Kalman filter in regression?
The standard Kalman filter model is given by:
\begin{align*}
y_t &= \mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, v_t)\\
\boldsymbol{\theta}_t &= \mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t, \qquad \boldsymbol{\omega}_t \sim \mathcal{N}(0, \mathbf{W}_t)
\end{align*}
Say you have a pair of random variables $y_t$ and $\mathbf{F}_t$ - for example, the price of a stock and a set of covariates including the time of the year, prices of other stocks, etc. The Kalman filter assumes that the relationship between $y_t$ and $\mathbf{F}_t$ varies as a function of time. So, while today, the two might be highly correlated, tomorrow, they may not be at all (usually the dynamics are much more gradual).
To fit a Kalman filter, you use a forward filtering, backward smoothing approach. Essentially, you are assuming a prior distribution on your parameter, and based on the discrepancy between your prediction of $y_t$ and its observed value, the prior is updated.
$v_t$ controls the scale of the $y_t$, and $\mathbf{W}_t$ controls the scale of $\boldsymbol{\theta}_t$. This means that there is an inherent identifiability problem; if we don't care about the identifiability or don't know the scale of one of the variables, we can leave it. It won't affect the model fit. If we have a ballpark estimate of one or the other, we can place a prior distribution on it. If the scale is known exactly, we can fix one of the two, and leave the other to be inferred. Note that if the scale is too big, our predictions will be more or less flat. If the scale is too small, the predictions will be very jittery.
Since you're interested in doing regression "on the fly," I'll derive the forward filtering steps here. (Backward smoothing is used when you have observed all the data and want to correct parameter estimates given future data.) $v_t$, $\mathbf{W}_t$, and $\mathbf{G}_t$ must be set by the user. (The first two can be inferred but it makes the update equations a bit more complicated - see Prado and West (2010). $\mathbf{G}_t$ is usually set by knowledge of the process.) There are three distributions: prior state, prior observation (i.e. forecast), posterior state. Derivations of these are given as follows, where $\mathcal{D}_t$ is all observed data ($y_t$ and $\mathbf{F}_t)$ up to and including time $t$.
Prior state
\begin{align*}
\boldsymbol{\theta}_t | \mathcal{D}_{t-1} &\sim \mathcal{N}(\mathbf{a}_t, \mathbf{R}_t)\\
\mathbf{a}_t &= \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \mathbf{G}_t \mathbb{E}[\boldsymbol{\theta}_{t-1} | \mathcal{D}_{t-1}]\\
& = \boxed{\mathbf{G}_t \mathbf{m}_{t-1}}\\
\mathbf{R}_t &= \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\\
&= \mbox{Var}[\mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \mathbf{G}_t \mbox{Var}[\boldsymbol{\theta}_{t-1} | \mathcal{D}_{t-1}] \mathbf{G}_t' + \mbox{Var}[\boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{G}_t \mathbf{C}_{t-1} \mathbf{G}_t' + \mathbf{W}_t}
\end{align*}
Prior observation (i.e. forecast)
\begin{align*}
y_t | \mathcal{D}_{t-1} &\sim \mathcal{N}(f_t, Q_t)\\
f_t &= \mathbb{E}[y_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{F}_t'\mathbf{a}_t}\\
Q_t &= \mbox{Var}[y_t | \mathcal{D}_{t-1}]\\
&= \mbox{Var}[\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \mathbf{F}_t' \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbf{F}_t + \mbox{Var}[\nu_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{F}_t' \mathbf{R}_t \mathbf{F}_t + v_t}
\end{align*}
Posterior state
There are two ways to get the posterior state. The first is by conditioning on the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$. We need to get the covariance between $\boldsymbol{\theta}_t$ and $y_t$ to establish the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$. By definition, the covariance between two random quantities $\mathbf{X}$ and $\mathbf{Y}$ is $\mbox{Cov}[\mathbf{X}, \mathbf{Y}] = \mathbb{E}[\mathbf{X}\mathbf{Y}'] - \mathbb{E}[\mathbf{X}]\mathbb{E}[\mathbf{Y}]'$. Thus:
\begin{align*}
\mbox{Cov}[\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1}] &= \mbox{Cov}[\boldsymbol{\theta}_t, \mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\boldsymbol{\theta}_t (\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t)' | \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[(\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t)' | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\boldsymbol{\theta}_t \nu_t' + \boldsymbol{\theta}_t \boldsymbol{\theta}_t' \mathbf{F}_t| \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] \mathbf{F}_t \\
&= \mathbb{E}[\boldsymbol{\theta}_t \nu_t' | \mathcal{D}_{t-1}] + \mathbb{E}[\boldsymbol{\theta}_t \boldsymbol{\theta}_t' | \mathcal{D}_{t-1}]\mathbf{F}_t - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] \mathbf{F}_t \\
&= \mathbf{0} + (\mathbb{E}[\boldsymbol{\theta}_t \boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}]) \mathbf{F}_t \\
&= \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\mathbf{F}_t\\
&= \mathbf{R}_t \mathbf{F}_t
\end{align*}
Thus, the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$ is given by:
\begin{align*}
\begin{pmatrix}
\boldsymbol{\theta}_t\\
y_t
\end{pmatrix}
\sim \mathcal{N}\left(
\begin{pmatrix}
\mathbf{a}_t\\
f_t
\end{pmatrix}
,
\begin{pmatrix}
\mathbf{R}_t & \mathbf{R}_t \mathbf{F}_t\\
\mathbf{F}_t' \mathbf{R}_t & Q_t
\end{pmatrix}
\right)
\end{align*}
Then, conditioning on $y_t$, we get:
\begin{align*}
\boldsymbol{\theta}_t | y_t, \mathcal{D}_{t-1} &\sim \mathcal{N}(\mathbf{m}_t, \mathbf{C}_t)\\
\mathbf{m}_t &= \mathbf{a}_t + \mathbf{R}_t \mathbf{F}_t Q_t^{-1}(y_t - f_t)\\
&= \boxed{\mathbf{a}_t + \mathbf{R}_t \mathbf{F}_t Q_t^{-1} e_t}\\
\mathbf{C}_t &= \mathbf{R}_t - (\mathbf{R}_t \mathbf{F}_t) Q_t^{-1} (\mathbf{F}_t' \mathbf{R}_t)\\
&= \boxed{\mathbf{R}_t - \mathbf{R}_t \mathbf{F}_t Q_t^{-1} \mathbf{F}_t' \mathbf{R}_t}
\end{align*}
where $e_t = y_t - f_t$ is the one-step forecast error; the vector $\mathbf{R}_t \mathbf{F}_t Q_t^{-1}$ is the Kalman gain.
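Collecting the boxed prior, forecast, and posterior formulas, one forward-filtering step can be sketched in numpy as follows (scalar observation). The dynamic-regression demo at the bottom, including the `true_theta` values and the choices of $v_t$ and $\mathbf{W}_t$, is entirely hypothetical:

```python
import numpy as np

def kf_step(m_prev, C_prev, F_t, G_t, v_t, W_t, y_t):
    """One forward-filtering step for a scalar observation y_t.

    m_prev, C_prev : posterior mean/covariance for theta_{t-1}
    F_t : regression vector; G_t : state transition matrix
    v_t : observation variance; W_t : state innovation covariance
    """
    # Prior state: theta_t | D_{t-1} ~ N(a_t, R_t)
    a_t = G_t @ m_prev
    R_t = G_t @ C_prev @ G_t.T + W_t
    # One-step forecast: y_t | D_{t-1} ~ N(f_t, Q_t)
    f_t = F_t @ a_t
    Q_t = F_t @ R_t @ F_t + v_t
    # Posterior state: theta_t | D_t ~ N(m_t, C_t)
    gain = R_t @ F_t / Q_t               # R_t F_t Q_t^{-1}
    m_t = a_t + gain * (y_t - f_t)       # forecast error e_t = y_t - f_t
    C_t = R_t - np.outer(gain, gain) * Q_t
    return m_t, C_t, f_t, Q_t

# Hypothetical dynamic regression: y_t = F_t' theta_t + noise
rng = np.random.default_rng(1)
m, C = np.zeros(2), 10.0 * np.eye(2)
G, W, v = np.eye(2), 0.01 * np.eye(2), 1.0
true_theta = np.array([0.5, 2.0])
for t in range(200):
    F = np.array([1.0, rng.normal()])            # covariates at time t
    y = F @ true_theta + rng.normal(scale=np.sqrt(v))
    m, C, f, Q = kf_step(m, C, F, G, v, W, y)
print(m)  # filtered coefficient estimates, drifting toward true_theta
```

Note how the posterior mean moves from the prior mean $\mathbf{a}_t$ in the direction of the gain, scaled by the forecast error.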
We can also find the posterior state using Bayes' theorem. Take $\mathbf{G}_t = \mathbf{I}$ for simplicity. Then,
\begin{align*}
p(\boldsymbol{\theta}_t | y_t, \mathbf{F}_t) &\propto p(\mathbf{F}_t, y_t | \boldsymbol{\theta}_t) p(\boldsymbol{\theta}_t)\\
&= p(y_t | \mathbf{F}_t, \boldsymbol{\theta}_t) p(\mathbf{F}_t | \boldsymbol{\theta}_t) p(\boldsymbol{\theta}_t)\\
\end{align*}
Note that $\mathbf{F}_t$ is observed and deterministic while $y_t$ is observed and random. Thus, we can get rid of the $p(\mathbf{F}_t | \boldsymbol{\theta}_t)$.
\begin{align*}
&= \mathcal{N}(y_t | \mathbf{F}_t' \boldsymbol{\theta}_t, v_t) \mathcal{N}(\boldsymbol{\theta}_t | \mathbf{m}_{t-1}, \mathbf{R}_t)\\
\end{align*}
We use $\boldsymbol{\theta}_t$ rather than $\mathbf{m}_{t-1}$ in the expression for the mean of $y_t$ because we are given $\boldsymbol{\theta}_t$ in the distribution of $y_t$. We are not given $\boldsymbol{\theta}_{t-1}$ in the distribution for $\boldsymbol{\theta}_t$, however (it is a marginal distribution), so the expected value of $\boldsymbol{\theta}_t$ is $\mathbf{m}_{t-1}$, not $\boldsymbol{\theta}_{t-1}$.
\begin{align*}
&= \left[(2\pi v_t)^{-1/2}\exp\left\{-\frac{1}{2v_t}(y_t - \mathbf{F}_t'\boldsymbol{\theta}_t)^2\right\}\right]\left[(2\pi)^{-p/2}|\mathbf{R}_t|^{-1/2}\exp\left\{-\frac{1}{2}(\boldsymbol{\theta}_t - \mathbf{m}_{t-1})'\mathbf{R}_t^{-1}(\boldsymbol{\theta}_t - \mathbf{m}_{t-1})\right\}\right]\\
&\propto \exp \left\{-\frac{1}{2v_t}(y_t^2 - 2y_t\boldsymbol{\theta}_t'\mathbf{F}_t + \boldsymbol{\theta}_t'\mathbf{F}_t\mathbf{F}_t'\boldsymbol{\theta}_t) - \frac{1}{2}(\boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\mathbf{m}_{t-1} + \mathbf{m}_{t-1}'\mathbf{R}_t^{-1}\mathbf{m}_{t-1})\right\}\\
&\propto \exp \left\{-\frac{1}{2}\left[\frac{1}{v_t}\boldsymbol{\theta}_t'\mathbf{F}_t\mathbf{F}_t'\boldsymbol{\theta}_t - \frac{2}{v_t}y_t\boldsymbol{\theta}_t'\mathbf{F}_t + \boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\mathbf{m}_{t-1}\right]\right\}\\
&= \exp \left\{ -\frac{1}{2}\left[\boldsymbol{\theta}_t'\left(\frac{1}{v_t}\mathbf{F}_t\mathbf{F}_t' + \mathbf{R}_t^{-1}\right)\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t'\left(\frac{1}{v_t}y_t\mathbf{F}_t + \mathbf{R}_t^{-1}\mathbf{m}_{t-1}\right)\right]\right\}\\
&\propto \mathcal{N}(\mathbf{m}_t, \mathbf{C}_t)\\
\end{align*}
where
\begin{align*}
\mathbf{a}_t &= \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] = \mathbf{m}_{t-1}\\
\mathbf{m}_t &= \boxed{\mathbf{C}_t\left[\frac{1}{v_t}y_t\mathbf{F}_t + \mathbf{R}_t^{-1}\mathbf{a}_{t}\right]}\\
\mathbf{C}_t &= \boxed{\left[\frac{1}{v_t}\mathbf{F}_t\mathbf{F}_t' + \mathbf{R}_t^{-1}\right]^{-1}}
\end{align*}
Although the expressions for $\mathbf{m}_t$ and $\mathbf{C}_t$ look very different when derived by conditioning on the joint distribution vs. using Bayes' theorem, they are mathematically identical; the equivalence follows from the Sherman-Morrison-Woodbury identity.
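A quick numeric spot-check of that claim, using randomly generated values (all hypothetical), compares the joint-Gaussian conditioning form against the information form obtained from Bayes' theorem:

```python
import numpy as np

rng = np.random.default_rng(2)
p = 3
F = rng.normal(size=p)            # regression vector F_t
a = rng.normal(size=p)            # prior mean a_t
L = rng.normal(size=(p, p))
R = L @ L.T + p * np.eye(p)       # a valid prior covariance R_t
v = 0.7                           # observation variance v_t
y = 1.3                           # observed y_t

# Form 1: conditioning on the joint Gaussian of (theta_t, y_t)
Q = F @ R @ F + v
m1 = a + (R @ F) / Q * (y - F @ a)
C1 = R - np.outer(R @ F, R @ F) / Q

# Form 2: Bayes' theorem (information form)
C2 = np.linalg.inv(np.outer(F, F) / v + np.linalg.inv(R))
m2 = C2 @ (y * F / v + np.linalg.inv(R) @ a)

print(np.allclose(m1, m2), np.allclose(C1, C2))  # True True
```

The agreement is exact up to floating-point error, which is the Sherman-Morrison-Woodbury identity at work.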
Hope this helps!
\begin{align*}
y_t &= \mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, v_t)\\
\boldsymbol{\theta}_t &= \mathbf{G}_t \boldsymb | How to use Kalman filter in regression?
The standard Kalman filter model is given by:
\begin{align*}
y_t &= \mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t, \qquad \nu_t \sim \mathcal{N}(0, v_t)\\
\boldsymbol{\theta}_t &= \mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t, \qquad \boldsymbol{\omega}_t \sim \mathcal{N}(0, \mathbf{W}_t)
\end{align*}
Say you have a pair of random variables $y_t$ and $\mathbf{F}_t$ - for example, the price of a stock and a set of covariates including the time of the year, prices of other stocks, etc. The Kalman filter assumes that the relationship between $y_t$ and $\mathbf{F}_t$ varies as a function of time. So, while today, the two might be highly correlated, tomorrow, they may not be at all (usually the dynamics are much more gradual).
To fit a Kalman filter, you use a forward filtering, backward smoothing approach. Essentially, you are assuming a prior distribution on your parameter, and based on the discrepancy between your prediction and of $y_t$ and the true value, the prior is updated.
$v_t$ controls the scale of the $y_t$, and $\mathbf{W}_t$ controls the scale of $\boldsymbol{\theta}_t$. This means that there is an inherent identifiability problem; if we don't care about the identifiability or don't know the scale of one of the variables, we can leave it. It won't affect the model fit. If we have a ballpark estimate of one or the other, we can place a prior distribution on it. If the scale is known exactly, we can fix one of the two, and leave the other to be inferred. Note that if the scale is too big, our predictions will be more or less flat. If the scale is too small, the predictions will be very jittery.
Since you're interested in doing regression "on the fly," I'll derive the forward filtering steps here. (Backward smoothing is used when you have observed all the data and want to correct parameter estimates given future data.) $v_t$, $\mathbf{W}_t$, and $\mathbf{G}_t$ must be set by the user. (The first two can be inferred but it makes the update equations a bit more complicated - see Prado and West (2010). $\mathbf{G}_t$ is usually set by knowledge of the process.) There are three distributions: prior state, prior observation (i.e. forecast), posterior state. Derivations of these are given as follows, where $\mathcal{D}_t$ is all observed data ($y_t$ and $\mathbf{F}_t)$ up to and including time $t$.
Prior state
\begin{align*}
\boldsymbol{\theta}_t | \mathcal{D}_{t-1} &\sim \mathcal{N}(\mathbf{a}_t, \mathbf{R}_t)\\
\mathbf{a}_t &= \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \mathbf{G}_t \mathbb{E}[\boldsymbol{\theta}_{t-1} | \mathcal{D}_{t-1}]\\
& = \boxed{\mathbf{G}_t \mathbf{m}_{t-1}}\\
\mathbf{R}_t &= \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\\
&= \mbox{Var}[\mathbf{G}_t \boldsymbol{\theta}_{t-1} + \boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \mathbf{G}_t \mbox{Var}[\boldsymbol{\theta}_{t-1} | \mathcal{D}_{t-1}] \mathbf{G}_t' + \mbox{Var}[\boldsymbol{\omega}_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{G}_t \mathbf{C}_{t-1} \mathbf{G}_t' + \mathbf{W}_t}
\end{align*}
Prior observation (i.e. forecast)
\begin{align*}
y_t | \mathcal{D}_{t-1} &\sim \mathcal{N}(f_t, Q_t)\\
f_t &= \mathbb{E}[y_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{F}_t'\mathbf{a}_t}\\
Q_t &= \mbox{Var}[y_t | \mathcal{D}_{t-1}]\\
&= \mbox{Var}[\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \mathbf{F}_t' \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbf{F}_t + \mbox{Var}[\nu_t | \mathcal{D}_{t-1}]\\
&= \boxed{\mathbf{F}_t' \mathbf{R}_t \mathbf{F}_t + v_t}
\end{align*}
Posterior state
There are two ways to get the posterior state. The first is by conditioning on the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$. We need to get the covariance between $\boldsymbol{\theta}_t$ and $y_t$ to establish the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$. By definition, the covariance between two random quantities $\mathbf{X}$ and $\mathbf{Y}$ is $\mbox{Cov}[\mathbf{X}, \mathbf{Y}] = \mathbb{E}[\mathbf{X}\mathbf{Y}'] - \mathbb{E}[\mathbf{X}]\mathbb{E}[\mathbf{Y}]'$. Thus:
\begin{align*}
\mbox{Cov}[\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1}] &= \mbox{Cov}[\boldsymbol{\theta}_t, \mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\boldsymbol{\theta}_t (\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t)' | \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\mathbf{F}_t' \boldsymbol{\theta}_t + \nu_t' | \mathcal{D}_{t-1}]\\
&= \mathbb{E}[\boldsymbol{\theta}_t \nu_t' + \boldsymbol{\theta}_t \boldsymbol{\theta}_t' \mathbf{F}_t| \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] \mathbf{F}_t \\
&= \mathbb{E}[\boldsymbol{\theta}_t \nu_t' | \mathcal{D}_{t-1}] + \mathbb{E}[\boldsymbol{\theta}_t \boldsymbol{\theta}_t' | \mathcal{D}_{t-1}]\mathbf{F}_t - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] \mathbf{F}_t \\
&= \mathbf{0} + (\mathbb{E}[\boldsymbol{\theta}_t \boldsymbol{\theta}_t' | \mathcal{D}_{t-1}] - \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] \mathbb{E}[\boldsymbol{\theta}_t' | \mathcal{D}_{t-1}]) \mathbf{F}_t \\
&= \mbox{Var}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}]\mathbf{F}_t\\
&= \mathbf{R}_t \mathbf{F}_t\\
&= \mathbf{a}_t Q_t
\end{align*}
Thus, the joint distribution of $(\boldsymbol{\theta}_t, y_t | \mathcal{D}_{t-1})$ is given by:
\begin{align*}
\begin{pmatrix}
\boldsymbol{\theta}_t\\
y_t
\end{pmatrix}
\sim \mathcal{N}\left(
\begin{pmatrix}
\mathbf{a}_t\\
f_t
\end{pmatrix}
,
\begin{pmatrix}
\mathbf{R}_t & \mathbf{a}_t Q_t\\
\mathbf{a}_t' Q_t & Q_t
\end{pmatrix}
\right)
\end{align*}
Then, conditioning on $y_t$, we get:
\begin{align*}
\boldsymbol{\theta}_t | y_t, \mathcal{D}_{t-1} &\sim \mathcal{N}(\mathbf{m}_t, \mathbf{C}_t)\\
\mathbf{m}_t &= \mathbf{a}_t + \mathbf{a}_t Q_t Q_t^{-1}(y_t - f_t)\\
&= \boxed{\mathbf{a}_t + \mathbf{a}_t e_t}\\
\mathbf{C}_t &= \mathbf{R}_t - (\mathbf{a}_t Q_t) Q_t^{-1} (Q_t \mathbf{a}_t')\\
&= \boxed{\mathbf{R}_t - \mathbf{a}_t Q_t \mathbf{a}_t'}
\end{align*}
We can also find the posterior state using Bayes' theorem. Take $\mathbf{G}_t = \mathbf{I}$ for simplicity. Then,
\begin{align*}
p(\boldsymbol{\theta}_t | y_t, \mathbf{F}_t) &\propto p(\mathbf{F}_t, y_t | \boldsymbol{\theta}_t) p(\boldsymbol{\theta}_t)\\
&= p(y_t | \mathbf{F}_t, \boldsymbol{\theta}_t) p(\mathbf{F}_t | \boldsymbol{\theta}_t) p(\boldsymbol{\theta}_t)\\
\end{align*}
Note that $\mathbf{F}_t$ is observed and deterministic while $y_t$ is observed and random. Thus, we can get rid of the $p(\mathbf{F}_t | \boldsymbol{\theta}_t)$.
\begin{align*}
&= \mathcal{N}(y_t | \mathbf{F}_t' \boldsymbol{\theta}_t, v_t) \mathcal{N}(\boldsymbol{\theta}_t | \mathbf{m}_{t-1}, \mathbf{R}_t)\\
\end{align*}
We use $\boldsymbol{\theta}_t$ rather than $\mathbf{m}_{t-1}$ in the expression for the mean of $\mathbf{y}_t$ because we are given $\boldsymbol{\theta}_t$ in the distribution of $y_t$. We are not given $\boldsymbol{\theta}_{t-1}$ in the distribution for $\boldsymbol{\theta}_t$, however, (this is a marginal distribution) so the expected value of $\boldsymbol{\theta}_t$ is $\mathbf{m}_{t-1}$, not $\boldsymbol{\theta}_{t-1}$.
\begin{align*}
&= \left[(2\pi v_t)^{-1/2}\exp\left\{-\frac{1}{2v_t}(y_t - \mathbf{F}_t'\boldsymbol{\theta}_t)^2\right\}\right]\left[(2\pi)^{-p/2}|\mathbf{R}_t|^{-1/2}\exp\left\{-\frac{1}{2}(\boldsymbol{\theta}_t - \mathbf{m}_{t-1})'\mathbf{R}_t^{-1}(\boldsymbol{\theta}_t - \mathbf{m}_{t-1})\right\}\right]\\
&\propto \exp \left\{-\frac{1}{2v_t}(y_t^2 - 2y_t\boldsymbol{\theta}'\mathbf{F}_t + \boldsymbol{\theta}_t'\mathbf{F}_t\mathbf{F}_t'\boldsymbol{\theta}_t) - \frac{1}{2}(\boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t\mathbf{R}_t^{-1}\mathbf{m}_{t-1} + \mathbf{m}_{t-1}'\mathbf{R}_t^{-1}\mathbf{m}_{t-1})\right\}\\
&\propto \exp \left\{-\frac{1}{2}\left[\frac{1}{v_t}\boldsymbol{\theta}_t'\mathbf{F}_t\mathbf{F}_t'\boldsymbol{\theta}_t - \frac{2}{v_t}y_t\boldsymbol{\theta}_t'\mathbf{F}_t + \boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t'\mathbf{R}_t^{-1}\mathbf{m}_{t-1}\right]\right\}\\
&= \exp \left\{ -\frac{1}{2}\left[\boldsymbol{\theta}_t'\left(\frac{1}{v_t}\mathbf{F}_t\mathbf{F}_t' + \mathbf{R}_t^{-1}\right)\boldsymbol{\theta}_t - 2\boldsymbol{\theta}_t'\left(\frac{1}{v_t}y_t\mathbf{F}_t + \mathbf{R}_t^{-1}\mathbf{m}_{t-1}\right)\right]\right\}\\
&\propto \mathcal{N}(\mathbf{m}_t, \mathbf{C}_t)\\
\end{align*}
where
\begin{align*}
\mathbf{a}_t &= \mathbb{E}[\boldsymbol{\theta}_t | \mathcal{D}_{t-1}] = \mathbf{m}_{t-1}\\
\mathbf{m}_t &= \boxed{\mathbf{C}_t\left[\frac{1}{v_t}y_t\mathbf{F}_t + \mathbf{R}_t^{-1}\mathbf{a}_{t}\right]}\\
\mathbf{C}_t &= \boxed{\left[\frac{1}{v_t}\mathbf{F}_t\mathbf{F}_t' + \mathbf{R}_t^{-1}\right]^{-1}}
\end{align*}
Although the expressions for $\mathbf{m}_t$ and $\mathbf{C}_t$ look very different when derived by conditioning on the joint distribution vs. using Bayes' theorem, they are mathematically identical.
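Since the algebraic equivalence of the two forms is not obvious, here is a quick numerical check in Python/NumPy (random values and variable names are my own) that the conditioning form and the Bayes form give the same $\mathbf{m}_t$ and $\mathbf{C}_t$:

```python
import numpy as np

rng = np.random.default_rng(1)
p = 3
F = rng.normal(size=(p, 1))          # F_t
a = rng.normal(size=(p, 1))          # prior mean a_t = m_{t-1}
B = rng.normal(size=(p, p))
R = B @ B.T + p * np.eye(p)          # prior covariance R_t (positive definite)
v = 0.7                              # observation variance v_t
y = 1.3                              # observed y_t

# Form 1: condition on the joint Gaussian of (theta_t, y_t)
f = (F.T @ a).item()                 # forecast mean f_t
Q = (F.T @ R @ F).item() + v         # forecast variance Q_t
A = R @ F / Q                        # adaptive vector A_t = R_t F_t / Q_t
m1 = a + A * (y - f)
C1 = R - (A * Q) @ A.T

# Form 2: Bayes' theorem / completing the square
Rinv = np.linalg.inv(R)
C2 = np.linalg.inv(F @ F.T / v + Rinv)
m2 = C2 @ (y * F / v + Rinv @ a)

print(np.allclose(m1, m2), np.allclose(C1, C2))  # True True
```

The agreement is just the Sherman-Morrison-Woodbury identity in disguise.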
Hope this helps!
How to use Kalman filter in regression?
I believe nobody is going to give you a comprehensive lecture on the "Kalman filter" here. So just google it and do some homework before thinking about the "regression problem".
Frankly speaking, the Kalman filter consists of two equations: the system equation (or system model) and the observation equation (or observation model). I assume you already know the difference between these two.
The Kalman filter is, as the name says, just a filter. So before you try to use it you have to formalize your problem into the mold of the Kalman filter. In this case, the problem is regression.
So, regression.
Well, I'm not an expert on signal filtering problems, but as far as I know there are two ways to perform regression with a Kalman filter.
Case one:
You can assume the coefficients of the regression -- the so-called alpha and beta -- are time varying. In this case you have to put the state variables "alpha" and "beta" into your system equation. If you have some idea of how alpha and beta should evolve, you have to describe it as mathematical equations; that becomes the system model of your Kalman filter. Then you can connect them with the regular equation of linear regression in your observation model. I assume you don't have any particular ideas on how your alpha and beta evolve...so each would be a random walk with a certain variance sigma.
System Equation:
beta(t) = beta(t-1) + N(0.0, sigma) --- beta equation
alpha(t) = alpha(t-1) + N(0.0, sigma) -- alpha equation
And the regression part: you have to specify the "observed" external variable x(t) in your observation equation. Usually it should be an element of the observation equation's matrix, in some form like:
y = dot(H,x)...
But I'll write it as a plain (pseudo) formula.
Observation Equation:
y(t) = beta(t) * x(t) + alpha(t) + N(0.0, zeta)
Observable variables: y, x.
(these variables should be fed to the filter as "observed" data)
Hidden variables: alpha, beta.
(You don't have to know the levels of these variables...they will be estimated by the filter.)
And run the filter! It estimates the "hidden" variables alpha and beta for you. You should hope they are not very volatile; if they are, you cannot get a meaningful prediction, I guess (still, you can get a good "estimation" of alpha and beta if your hypothesis was right).
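To make Case one concrete, here is a minimal from-scratch sketch in Python/NumPy (my own illustrative data, noise levels, and variable names): the state is theta = [alpha, beta] with a random-walk system model, and the observation row is H = [1, x(t)].

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data with a fixed alpha and beta; the filter should recover them.
true_alpha, true_beta = 0.5, 2.0
x = rng.normal(size=500)
y = true_alpha + true_beta * x + rng.normal(scale=0.1, size=500)

# State theta = [alpha, beta]; system model: random walk theta_t = theta_{t-1} + w_t
m = np.zeros(2)              # state mean
P = np.eye(2) * 1e3          # state covariance (diffuse prior)
W = np.eye(2) * 1e-5         # system (random-walk) noise covariance
v = 0.1**2                   # observation noise variance

for xt, yt in zip(x, y):
    # predict: the random walk leaves the mean unchanged and inflates the covariance
    P = P + W
    # update with observation row H = [1, x_t]
    H = np.array([1.0, xt])
    e = yt - H @ m                 # innovation
    Q = H @ P @ H + v              # innovation variance (scalar)
    K = P @ H / Q                  # Kalman gain
    m = m + K * e
    P = P - np.outer(K, H @ P)

print(m)  # filtered [alpha, beta], should end up close to (0.5, 2.0)
```

With the tiny random-walk variance this behaves like recursive least squares; increase W to let alpha and beta drift over time.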
Case 2:
In this case, you know your beta and alpha are constant (or almost constant...whatever they are, they shouldn't change much within your estimation/prediction horizon), but you can assume there is no way to observe the variable x(t) directly. All you can observe is y(t). So you need to estimate x(t) "using" regression rather than "doing" regression. Strictly speaking, this is not the solution of a regression problem, but maybe this is what you really want.
In this case, you have to describe how your x(t) evolves in your system equation. Then use the same observation equation as above, but this time x(t) does not have to be observed. Alpha and beta must be "fixed" variables, however. If you don't know their exact values, you can optimize them with your data, I guess.
I guess there are other ways to do this kind of thing with a Kalman filter. But all you have to do is:
Study the basic usage of the Kalman filter. You can learn how to use some packaged software, or write your own Kalman filter in whatever language you like (I recommend Python for this type of problem, by the way).
Formalize your problem. Which "hidden" variables should be estimated by the Kalman filter? Which "observable" variables can you obtain and use for estimating your hidden variable(s)? Write it down as mathematical equations...but be careful: if you want to use a Kalman filter, your equations should be linear (and the noises should be Gaussian...usually).
If you want much more sophisticated methods along these lines, google VECM or VAR models. Probably "regression" is not the thing you really want. In that case, you will have to deal with tons of mathematics...If I were you, I wouldn't stick my head into such complicated models!
GOOD LUCK <3<3<3
Convolutional Neural Network Scale Sensitivity
CNNs are too large a class of models to answer this question in general. LeNet, AlexNet, ZFNet and VGG16 will behave very differently from GoogLeNet, which was built specifically to do most of what R-CNNs do, with a CNN architecture (you may know GoogLeNet under the name Inception, even though strictly speaking Inception is just the basic unit (subnetwork) upon which GoogLeNet is built). Finally, ResNets will behave differently again. And all these architectures were built to classify the 1000 ImageNet classes, which don't contain age classes for humans, not age classes. One could use transfer learning (if you have enough training images) to fine-tune one of the widely available pretrained models above and see how it performs. In general, however, the older architectures especially (say, up to VGG16) have a hard time learning "global features" that require learning about "head" (already a complex feature), "torso" (another complex feature) and their ratio (which also requires that the two features be in a certain spatial relationship). This kind of thing is what Capsule Networks were supposed to be able to do.
Convnets were born to do exactly the opposite: be sensitive to local features, and relatively insensitive to their relative position and scale. A good convnet should recognize "white cat" whether the picture is a close-up or an American shot. Combining convolutional layers (which are sensitive to local features) with pooling layers (which remove part of the sensitivity to variations in scale or translation of the image) gives you an architecture which, in its most basic form, is not great at learning the kind of spatial relationships among objects that you're looking for. There was an example somewhere (which I can't find anymore) where, after splitting a cat image into rectangular non-overlapping tiles and reassembling them in random order, the CNN would keep identifying the image as a cat. This indicates that CNNs are more sensitive to local features (textures or something like that) than to the spatial relationships among high-level features. See also the Capsule Networks paper for some discussion of this; Hinton also showed an example of this in a video about the limits of convnets.
My wild guess is that one of the recent architectures would be perfectly capable (given enough data) of discerning men from children, but not because of a "threshold" on a metric relationship between high-level features such as "head" and "torso". It would learn some statistical regularity, maybe completely unnoticeable to humans, which separates adult images from child images in the training set.
Convolutional Neural Network Scale Sensitivity
Firstly, thanks for posting a very interesting question.
To answer it shortly: a vanilla convnet trained end-to-end to predict age from a photo will generally be prone to misclassifying images such as the one you posted. Secondly, note that accurately estimating the age of a person is a nearly impossible task [1].
The main difference from your proposed approach using some object detectors (be it RCNN, Faster RCNN, YOLO or SSD) is that you are using different information to train the models. The CNN is trained only on images and needs to find out all the necessary features itself. It is most likely going to find various facial features, but it will also rely on clothing and perhaps scene features (kids may be often in the picture with some toys, adults will be more likely in office environments, etc.). These features will not be robust to your counterexample.
On the other hand, if you train the network to explicitly detect objects such as "torso" and "head", you are providing extra information that these objects are important for the task, and thus simplify the problem [2].
While the approach of detecting head and torso and then evaluating the size ratio of the bounding boxes sounds interesting, I can see several obstacles:
Obtaining data: I am not aware of any large dataset where both age and bounding boxes are present.
Imperfect FOV: In most images (e.g. both your examples), the people are not shown whole. You would have to deal with the fact that the torso bounding boxes would not always be perfect simply because part of the person is not in the image, and the net would have to guess how large a part is missing (and the ground-truth bounding boxes would most likely not capture this information). Also, the aforementioned object detectors don't always handle predictions of partial objects properly. This might introduce too much noise into the model.
Various poses: The torso-to-head ratio would be very different for people viewed frontally and from the side.
Adults: It seems the ratio works well for predicting ages between 0 and 21, but I don't see how it would help to predict the ages of adults (I suppose the ratio does not change much at higher ages).
All these problems suggest that the head-to-torso ratio approach is also not going to work perfectly, although it might be more robust to your particular counterexample.
I guess the best way to perform this task would be to 1) detect the face, and 2) predict age only from the facial crop (this removes potentially misleading information). Note that some R-CNN-like architecture using ROI pooling could be trained to do this end-to-end.
[1] Even using very sophisticated medical methods (which are arguably much more informative than a photo of the person) this is not possible to do accurately. See this Quora thread for more information.
[2] Check the article Knowledge Matters: Importance of Prior Information for Optimization for an example of how providing some intermediate knowledge about the task can greatly simplify learning.
Convolutional Neural Network Scale Sensitivity
Well, it all depends on how your dataset is constructed. From my experience, neural networks tend to go for the simplest explanation, and inferring age from the outfit is actually simpler than using the head-to-body ratio. If you can expand your dataset with this in mind, your CNN should work as expected.
Fitting exponential decay with negative y values
Use a self-starting function:
ggplot(dt, aes(x = x, y = y)) +
geom_point() +
stat_smooth(method = "nls", formula = y ~ SSasymp(x, Asym, R0, lrc), se = FALSE)
fit <- nls(y ~ SSasymp(x, Asym, R0, lrc), data = dt)
summary(fit)
#Formula: y ~ SSasymp(x, Asym, R0, lrc)
#
#Parameters:
# Estimate Std. Error t value Pr(>|t|)
#Asym -0.0001302 0.0004693 -0.277 0.782
#R0 77.9103278 2.1432998 36.351 <2e-16 ***
#lrc -4.0862443 0.0051816 -788.604 <2e-16 ***
#---
#Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
#Residual standard error: 0.007307 on 698 degrees of freedom
#
#Number of iterations to convergence: 0
#Achieved convergence tolerance: 9.189e-08
exp(coef(fit)[["lrc"]]) #lambda
#[1] 0.01680222
However, I would seriously consider whether your domain knowledge doesn't justify setting the asymptote to zero. I believe it does, and the above model doesn't disagree (see the standard error / p-value of the Asym coefficient).
ggplot(dt, aes(x = x, y = y)) +
geom_point() +
stat_smooth(method = "nls", formula = y ~ a * exp(-S * x),
method.args = list(start = list(a = 78, S = 0.02)), se = FALSE, #starting values obtained from fit above
color = "dark red")
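For readers outside R, the same zero-asymptote fit can be sketched with SciPy's curve_fit; the data here are a synthetic stand-in for dt (which the question does not show), with parameter values borrowed from the fit above:

```python
import numpy as np
from scipy.optimize import curve_fit

# synthetic stand-in for dt: a decaying exponential with small noise
rng = np.random.default_rng(42)
x = np.linspace(0, 300, 700)
y = 78 * np.exp(-0.0168 * x) + rng.normal(scale=0.01, size=x.size)

def decay(x, a, S):
    # same model as the R formula y ~ a * exp(-S * x)
    return a * np.exp(-S * x)

popt, pcov = curve_fit(decay, x, y, p0=(50, 0.01))
a_hat, S_hat = popt
print(a_hat, S_hat)  # close to 78 and 0.0168
```

As in the R version, reasonable starting values (p0) help the optimizer converge.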
Fitting exponential decay with negative y values
This question is related to several other questions:
How to fit exponential y=A(1-exp(B*X)) function to a given data set? Especially how to determine the initial start parameters?
How to minimize residual sum of squares of an exponential fit?
Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?
I have three additional remarks regarding some points in this question.
1: Why the linearized model does not fit the large values of $y$ well
Much better, but the model does not trace y values perfectly at low x values.
The linearized fit is not minimizing the same residuals. At the logarithmic scale the residuals for smaller values will be larger. The image below shows the comparison by plotting the y-axis on a log scale in the right image:
When necessary you could add weights to the least squares loss function.
2: Using linearized fit as starting values
After you have obtained estimates with your linearized fit, you could have used these as the starting point for the nonlinear fit.*
# vectors x and y from data
x <- dat$x
y <- dat$y
# linearized fit with zero correction
K <- abs(min(y))
dty <- y + K*(1+10^-15)
fit <- lm(log(dty) ~x)
# old fit that had a singular gradient matrix error
# nls(y ~ a * exp(-S * x) + K,
# start = list(a = 0.5,
# S = 0.1,
# K = -0.1))
#
# new fit
fitnls <- nls(y ~ a * exp(-S * x) + K,
start = list(a = exp(fit$coefficients[1]),
S = -fit$coefficients[2],
K = -0.1))
#
3: Using a more general method to obtain the starting point
If you have enough points then you can also obtain the slope without having to worry about asymptotic value and negative values (no computation of a logarithm needed).
You can do this by integrating the data points. Then with $$y = a e^{sx} + k $$ and $$Y = \frac{a}{s} e^{sx} + kx + Const$$ you can use a linear model to obtain the value of $s$ by describing $y$ as a linear combination of the vectors $Y$, $x$ and an intercept:
$$\begin{array}{rccccl}y &=& a e^{sx} + k &=& s(\frac{a}{s} e^{s x} + k x + Const) &- s k x - s Const \\
&&&=& sY &- sk x - s Const \end{array}$$
The advantage of this method (see Tittelbach and Helmrich 1993 "An integration method for the analysis of multiexponential transient signals") is that you can extend it to more than a single exponentially decaying component (adding more integrals).
#
# using Tittelbach Helmrich
#
# integrating with trapezium rule assuming x variable is already ordered
ys <- c(0,cumsum(0.5*diff(x)*(y[-1]+y[-length(y)])))
# getting slope parameter
modth <- lm(y ~ ys + x)
slope <- modth$coefficients[2]
# getting other parameters
modlm <- lm(y ~ 1 + I(exp(slope*x)))
K <- modlm$coefficients[1]
a <- modlm$coefficients[2]
# fitting with TH start
fitnls2 <- nls(y ~ a * exp(-S * x) + K,
start = list(a = a,
S = -slope,
K = K))
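As a cross-check in another language, the same integration trick can be sketched in Python/NumPy on synthetic data (all values here are illustrative): regressing $y$ on an intercept, the cumulative integral $Y$, and $x$ recovers $s$ as the coefficient on $Y$.

```python
import numpy as np

# synthetic y = a*exp(s*x) + k with a little noise
rng = np.random.default_rng(0)
x = np.linspace(0, 300, 700)
a_true, s_true, k_true = 78.0, -0.0168, -0.1
y = a_true * np.exp(s_true * x) + k_true + rng.normal(scale=0.01, size=x.size)

# cumulative integral Y via the trapezium rule (x assumed sorted)
Y = np.concatenate(([0.0], np.cumsum(0.5 * np.diff(x) * (y[1:] + y[:-1]))))

# y = s*Y - s*k*x - s*Const, so fit y on [1, Y, x]; the coefficient on Y is s
X = np.column_stack([np.ones_like(x), Y, x])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
s_hat = coef[1]
print(s_hat)  # close to s_true
```

The recovered slope can then seed a nonlinear fit, exactly as the R code above seeds nls.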
Footnote:
*This use of the slope in the linearized problem is exactly what the SSasymp self-starting function does. It first estimates the asymptote
> stats:::NLSstRtAsymptote.sortedXyData
function (xy)
{
in.range <- range(xy$y)
last.dif <- abs(in.range - xy$y[nrow(xy)])
if (match(min(last.dif), last.dif) == 2L)
in.range[2L] + diff(in.range)/8
else in.range[1L] - diff(in.range)/8
}
and then the slope by (subtracting the asymptote value and taking the log values)
> stats:::NLSstAsymptotic.sortedXyData
function (xy)
{
xy$rt <- NLSstRtAsymptote(xy)
setNames(coef(nls(y ~ cbind(1, 1 - exp(-exp(lrc) * x)), data = xy,
start = list(lrc = log(-coef(lm(log(abs(y - rt)) ~ x,
data = xy))[[2L]])), algorithm = "plinear"))[c(2,
3, 1)], c("b0", "b1", "lrc"))
}
Note the line start = list(lrc = log(-coef(lm(log(abs(y - rt)) ~ x, data = xy))[[2L]]))
Sidenote: In the special case that $K=0$ you can use
plot(x,y)
mod <- glm(y~x, family = gaussian(link = log), start = c(2,-0.01))
lines(x,exp(predict(mod)),col=2)
which models the observed parameter $y$ as
$$y = exp(X\beta) + \epsilon = exp(\beta_0) \cdot exp(\beta_1 \cdot x) + \epsilon$$
How to fit exponential y=A(1-exp(B*X)) function to a given data set? Especially how to determine the initial start parameters?
How to mini | Fitting exponential decay with negative y values
This question has relationships with several other questions
How to fit exponential y=A(1-exp(B*X)) function to a given data set? Especially how to determine the initial start parameters?
How to minimize residual sum of squares of an exponential fit?
Why is nls() giving me "singular gradient matrix at initial parameter estimates" errors?
I have three additional remarks regarding some points in this question.
1: Why linearized model does not fit well the large values of $y$
Much better, but the model does not trace y values perfectly at low x values.
The linearized fit is not minimizing the same residuals. At the logarithmic scale the residuals for smaller values will be larger. The image below shows the comparison by plotting the y-axis on a log scale in the right image:
When necessary you could add weights to the least squares loss function.
2: Using linearized fit as starting values
After you have obtained estimates with your linearized fit you could have used these as starting point for the non linear fitting.*
# vectors x and y from data
x <- dat$x
y <- dat$y
# linearized fit with zero correction
K <- abs(min(y))
dty <- y + K*(1+10^-15)
fit <- lm(log(dty) ~x)
# old fit that had a singular gradient matrix error
# nls(y ~ a * exp(-S * x) + K,
# start = list(a = 0.5,
# S = 0.1,
# K = -0.1))
#
# new fit
fitnls <- nls(y ~ a * exp(-S * x) + K,
start = list(a = exp(fit$coefficients[1]),
S = -fit$coefficients[2],
K = -0.1))
#
3: Using a more general method to obtain the starting point
If you have enough points then you can also obtain the slope without having to worry about asymptotic value and negative values (no computation of a logarithm needed).
You can do this by integrating the data points. Then with $$y = a e^{sx} + k $$ and $$Y = \frac{a}{s} e^{sx} + kx + Const$$ you can use a linear model to obtain the value of $s$ by describing $y$ as a linear combination of the vectors $Y$, $x$ and an intercept:
$$\begin{array}{rccccl}y &=& a e^{sx} + k &=& s(\tfrac{a}{s} e^{s x} + k x + Const) &- s k x - s\, Const + k \\
&&&=& sY &- sk\, x + (k - s\, Const) \end{array}$$
The advantage of this method (see Tittelbach and Helmrich 1993 "An integration method for the analysis of multiexponential transient signals") is that you can extend it to more than a single exponentially decaying component (adding more integrals).
#
# using Tittelbach Helmrich
#
# integrating with trapezium rule assuming x variable is already ordered
ys <- c(0,cumsum(0.5*diff(x)*(y[-1]+y[-length(y)])))
# getting slope parameter
modth <- lm(y ~ ys + x)
slope <- modth$coefficients[2]
# getting other parameters
modlm <- lm(y ~ 1 + I(exp(slope*x)))
K <- modlm$coefficients[1]
a <- modlm$coefficients[2]
# fitting with TH start
fitnls2 <- nls(y ~ a * exp(-S * x) + K,
start = list(a = a,
S = -slope,
K = K))
Footnote:
*This use of the slope in the linearized problem is exactly what the SSasymp self-starting function does. It first estimates the asymptote
> stats:::NLSstRtAsymptote.sortedXyData
function (xy)
{
in.range <- range(xy$y)
last.dif <- abs(in.range - xy$y[nrow(xy)])
if (match(min(last.dif), last.dif) == 2L)
in.range[2L] + diff(in.range)/8
else in.range[1L] - diff(in.range)/8
}
and then the slope by (subtracting the asymptote value and taking the log values)
> stats:::NLSstAsymptotic.sortedXyData
function (xy)
{
xy$rt <- NLSstRtAsymptote(xy)
setNames(coef(nls(y ~ cbind(1, 1 - exp(-exp(lrc) * x)), data = xy,
start = list(lrc = log(-coef(lm(log(abs(y - rt)) ~ x,
data = xy))[[2L]])), algorithm = "plinear"))[c(2,
3, 1)], c("b0", "b1", "lrc"))
}
Note the line start = list(lrc = log(-coef(lm(log(abs(y - rt)) ~ x, data = xy))[[2L]]))
Sidenote: In the special case that $K=0$ you can use
plot(x,y)
mod <- glm(y~x, family = gaussian(link = log), start = c(2,-0.01))
lines(x,exp(predict(mod)),col=2)
which models the observed parameter $y$ as
$$y = \exp(X\beta) + \epsilon = \exp(\beta_0) \cdot \exp(\beta_1 x) + \epsilon$$
Log-likelihood function in Poisson Regression
In Poisson regression, there are two Deviances.
The Null Deviance shows how well the response variable is predicted by a model that includes only the intercept (grand mean).
And the Residual Deviance is −2 times the difference between the log-likelihood evaluated at the maximum likelihood estimate (MLE) and the log-likelihood for a "saturated model" (a theoretical model with a separate parameter for each observation and thus a perfect fit).
Now let us write down those likelihood functions.
Suppose $Y$ has a Poisson distribution whose mean depends on vector $\bf{x}$, for simplicity, we will suppose $\bf{x}$ only has one predictor variable. We write
$$
E(Y|x)=\lambda(x)
$$
For Poisson regression we can choose a log or an identity link function, we choose a log link here.
$$\textrm{Log}(\lambda (x))=\beta_0+\beta_1x$$
$\beta_0$ is the intercept.
The Likelihood function with the parameter $\beta_0$ and $\beta_1$ is
$$
L(\beta_0,\beta_1;y_i)=\prod_{i=1}^{n}\frac{e^{-\lambda{(x_i)}}[\lambda(x_i)]^{y_i}}{y_i!}=\prod_{i=1}^{n}\frac{e^{-e^{(\beta_0+\beta_1x_i)}}\left [e^{(\beta_0+\beta_1x_i)}\right ]^{y_i}}{y_i!}
$$
The log-likelihood function is:
$$
l(\beta_0,\beta_1;y_i)=-\sum_{i=1}^n e^{(\beta_0+\beta_1x_i)}+\sum_{i=1}^ny_i (\beta_0+\beta_1x_i)-\sum_{i=1}^n\log(y_i!) \tag{1}
$$
When we calculate the null deviance, we will plug $\beta_0$ into $(1)$. $\beta_0$ will be calculated by an intercept-only regression, and $\beta_1$ will be set to zero. We write
$$
l(\beta_0;y_i)=-\sum_{i=1}^ne^{\beta_0}+\sum_{i=1}^ny_i\beta_0-\sum_{i=1}^n \log(y_i!) \tag{2}
$$
Next, we need to calculate the log-likelihood for the "saturated model" (a theoretical model with a separate parameter for each observation), therefore, we have $\mu_1,\mu_2,...,\mu_n$ parameters here.
(Note, in $(1)$, we only have two parameters; i.e. as long as subjects have the same value for the predictor variables, we treat them as the same.)
The log-likelihood function for the "saturated model" is
$$
l(\mu)=\sum_{i=1}^n y_i \log\mu_i-\sum_{i=1}^n\mu_i-\sum_{i=1}^n \log(y_i!)
$$
Then it can be written as:
$$
l(\mu)=\sum y_i I_{(y_i>0)} \log\mu_i-\sum\mu_iI_{(y_i>0)}-\sum \log(y_i!)I_{(y_i>0)}-\sum\mu_iI_{(y_i=0)} \tag{3}
$$
(Note, $y_i\ge 0$, when $y_i=0,y_i\log\mu_i=0$ and $\log(y_i!)=0$, this will be useful later, not now)
$$
\frac{\partial}{\partial \mu_i}l(\mu)=\frac{y_i}{\mu_i}-1
$$
set to zero, we get
$$
\hat{\mu_i}=y_i
$$
Now put $\hat{\mu_i}$ into $(3)$; when $y_i=0$ we can see that $\hat{\mu_i}$ will be zero.
Now for the likelihood function $(3)$ of the "saturated model" we only need to care about the terms with $y_i>0$, so we write
$$
l(\hat{\mu})=\sum y_i \log{y_i}-\sum y_i-\sum \log(y_i!) \tag{4}
$$
From $(4)$ you can see why we need $(3)$, since $\log y_i$ is undefined when $y_i=0$.
Now let us calculate the deviances.
The Residual Deviance=$$-2[(1)-(4)]=-2\times[l(\beta_0,\beta_1;y_i)-l(\hat{\mu})]\tag{5}$$
The Null Deviance=
$$-2[(2)-(4)]=-2\times[l(\beta_0;y_i)-l(\hat{\mu})]\tag{6} $$
Ok, next let us calculate the two Deviances by R, then by "hand" (or Excel).
x<- c(2,15,19,14,16,15,9,17,10,23,14,14,9,5,17,16,13,6,16,19,24,9,12,7,9,7,15,21,20,20)
y<-c(0,6,4,1,5,2,2,10,3,10,2,6,5,2,2,7,6,2,5,5,6,2,5,1,3,3,3,4,6,9)
p_data<-data.frame(y,x)
p_glm<-glm(y~x, family=poisson, data=p_data)
summary(p_glm)
You can see $\beta_0=0.30787,\beta_1=0.07636$, Null Deviance=48.31, Residual Deviance=27.84.
Here is the intercept only model
p_glm2<-glm(y~1,family=poisson, data=p_data)
summary(p_glm2)
You can see $\beta_0=1.44299$
Now let us calculate these two Deviances by hand (or by excel)
l_saturated<-c()
l_regression<-c()
l_intercept<-c()
for(i in 1:30){
l_regression[i]<--exp( 0.30787 +0.07636 *x[i])+y[i]*(0.30787+0.07636 *x[i])-
log(factorial(y[i]))}
l_reg<-sum(l_regression)
l_reg
# -60.25116 ###log likelihood for regression model
for(i in 1:30){
l_saturated[i]<-y[i]*try(log(y[i]),T)-y[i]-log(factorial(y[i]))
} #there is one y_i=0 need to take care
l_sat<-sum(l_saturated,na.rm=T)
l_sat
#-46.33012 ###log likelihood for saturated model
for(i in 1:30){
l_intercept[i]<--exp(1.44299)+y[i]*(1.44299)-log(factorial(y[i]))
}
l_inter<-sum(l_intercept)
l_inter
#-70.48501 ##log likelihood for intercept model only
-2*(l_reg-l_sat)
#27.84209 ##Residual Deviance
-2*(l_inter-l_sat)
##48.30979 ##Null Deviance
You can see that if you use these formulas and calculate by hand, you get exactly the same numbers as those calculated by R's glm function.
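As a cross-check (assuming the p_glm fit from above is still in the workspace), the same numbers come straight out of R's built-ins; dpois handles the $y_i=0$ case without any special care, since dpois(0, 0) = 1:

```r
sum(dpois(y, lambda = fitted(p_glm), log = TRUE))  # log-likelihood of the regression model
logLik(p_glm)                                      # the same number from the fit object
sum(dpois(y, lambda = y, log = TRUE))              # log-likelihood of the saturated model
p_glm$deviance                                     # Residual Deviance
p_glm$null.deviance                                # Null Deviance
```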
Why do we take the average for regression Random Forest predictions?
I've always thought about the averaging in terms of the bias-variance tradeoff. If I remember correctly Leo Breiman hinted at this in the randomForest paper with his statement "... are more robust with respect to noise."
The explanation goes like this: basically you are taking a bunch of trees that are grown to full length (no pruning), so you know they will each be biased by themselves. However, the random sampling that induces each tree in the forest should induce under-bias as often as over-bias. So by taking an average you then eliminate the bias of each tree, the over- and under-biases canceling. Hopefully in the process you also reduce the variance in each tree, and so the overall variance should be reduced as well.
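A toy illustration of the variance part (pure simulation, not a real forest; the B predictions are treated as independent, which actual trees are not): averaging B noisy estimators of the same target shrinks the spread roughly by $\sqrt{B}$.

```r
set.seed(1)
truth <- 10
B <- 100                                              # number of "trees"
one_tree <- replicate(2000, truth + rnorm(1, sd = 2))          # single noisy estimate
forest   <- replicate(2000, mean(truth + rnorm(B, sd = 2)))    # average of B estimates
c(sd_one_tree = sd(one_tree), sd_forest = sd(forest))
```

Both are unbiased here; the averaged version just has roughly a tenth of the standard deviation.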
As indicated by the other answers to the post, this might not be the only reason for averaging.
Why do we take the average for regression Random Forest predictions?
When using the average, you are saying two things:
Outliers are not a huge problem (otherwise you would use the median or at least filter out some outliers before taking the average)
Every prediction has the same weight (otherwise you would factor in weights)
You shouldn't expect there to be huge outliers since you can make the sample size big enough for them to matter less in the average and since you would expect a minimum of stability from the predictions of the individual trees.
There is no reason to think some trees should have more predictive weight than others, nor a way to determine such weights.
You cannot really use the mode since the predictions are on a continuous scale. For example, if you had the predictions 80 80 100 101 99 98 97 102 103 104 96, the mode would predict 80. That cannot be what you want. If all values have distinct decimals, the mode wouldn't know how to decide.
Other averages than the arithmetic mean exist, like the geometric mean and the harmonic mean. They are designed to pull the average down if there are some low values in the series of data. That's not what you want here either.
Why do we take the average for regression Random Forest predictions?
Of course you could use any aggregation function that is useful in your particular situation. The median is a good way of making a small sample robust against outliers. In regression forests you can usually influence the sample size to avoid having the problem of small sample sizes. Thus the mean seems sensible in a very large fraction of use cases.
Why do we take the average for regression Random Forest predictions?
Wouldn't it be possible as well to take the median, mode, or some other aggregate function?
Random Forest classification (i.e. not probability estimation) is based on the mode of the predictions (majority voting), so yeah, you can aggregate the results as you like.
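A minimal sketch of custom aggregation (simulated data; assumes the randomForest package is installed): predict(..., predict.all = TRUE) exposes the individual tree predictions, which you can then combine with the mean, the median, or anything else.

```r
library(randomForest)
set.seed(42)
d  <- data.frame(x = runif(200))
d$y <- sin(2 * pi * d$x) + rnorm(200, sd = 0.2)
rf <- randomForest(y ~ x, data = d, ntree = 500)
# pr$individual is an n x ntree matrix of per-tree predictions
pr <- predict(rf, newdata = data.frame(x = c(0.25, 0.75)), predict.all = TRUE)
mean_pred   <- rowMeans(pr$individual)        # default aggregation (= pr$aggregate)
median_pred <- apply(pr$individual, 1, median)
cbind(mean_pred, median_pred)
```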
Why do we take the average for regression Random Forest predictions?
First things first. As many other people said, you can use other metrics, but the average is the "default" option.
As a default option, one wants an aggregation function that behaves well under some mild conditions.
Now, if you think about it, a random forest is a collection of trees, and each of these trees has the objective of estimating your numeric response variable.
Additionally, as @David Ernst correctly mentions:
There is no reason to think some trees should have more predictive weight than others, nor a way to determine such weights.
Furthermore, there is no reason to think that these trees will have different standard deviations. Again, under mild conditions!
That being said, the average should work because of the Weak law of large numbers.
Why do we take the average for regression Random Forest predictions?
In an ensemble, averaging prioritizes confidence over a simple majority.
Example: you have 3 trees;
2 of them vote A with 22% confidence and 1 votes B with 90% confidence.
If we use the majority, we get vote A (two votes, each at only 22%).
If we use the average confidence, we get vote B (one vote at 90%).
It would make sense to go with the 90% confidence vote, since it is more sure than the majority of others with only 22% confidence.
Example you have 3 trees,
2 of them vote A with 22% confidence and 1 voted B with 90% confidence.
If we use majority we | Why do we take the average for regression Random Forest predictions?
In ensemble. Averaging is prioritizing more on confidence rather than majority.
Example you have 3 trees,
2 of them vote A with 22% confidence and 1 voted B with 90% confidence.
If we use majority we get vote A. Average of 22, N, N
If we use confidence we get vote B. Average of 90, N, N
It would make sense to go with the 90% confidence since its more sure than the majority of others with only 22% confidence. | Why do we take the average for regression Random Forest predictions?
In ensemble. Averaging is prioritizing more on confidence rather than majority.
Example you have 3 trees,
2 of them vote A with 22% confidence and 1 voted B with 90% confidence.
If we use majority we |
What are the differences between ANOVAs and GLMs?
This is a common point of confusion, as the word ANOVA is used with different meanings in different textbooks / software packages. I'll try to sort this out a bit:
Historically, ANOVA is a method to partition out the contribution of different factors to the variation in a continuous variable. The classical ANOVA measured this contribution via the sum of squares, which corresponds to the assumptions of an lm (iid normal and so on), thus you will often read that ANOVA = normal distribution. This also implies that it doesn't matter if you calculate an ANOVA directly, or first an lm and then perform the ANOVA on the fitted lm; it's basically the same model.
This obviously doesn't generalise to GLM(Ms), but people would still like to test for the significance (and contribution) of factors or factor groups in those models. Thus, the ANOVA concept has been extended, and a more modern way to look at ANOVA is that an ANOVA is just a series of tests that add / remove predictors or predictor groups in a regression to test for their overall significance, as well as changes in some metric of fit (pseudo R2, there are multiple definitions, most based on deviance).
So, regardless of lm, glm, glmer, if you have a model with a predictor color = red, green, blue, in R
m1 <- lm/glm/glmer(res ~ color)
summary(m1)
will give you p-values for the contrasts between red, green, blue, while
am1 <- anova(m1)
summary(am1)
will essentially do a (likelihood ratio) test to decide if the predictor color improves the fit significantly (note that adding color, you are adding 2 df / parameters at once), and it will also provide a feedback about the improvement of fit.
In R there are different ANOVA functions (aov, anova, car::Anova) that slightly differ in their use and appropriateness for particular regressions and questions. car::Anova is very versatile and allows changing between type II and type III ANOVA (explanation here).
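A minimal runnable illustration of the difference (using R's built-in warpbreaks data; the car package is assumed to be installed):

```r
m <- lm(breaks ~ wool * tension, data = warpbreaks)
anova(m)                      # type I: sequential sums of squares
car::Anova(m)                 # type II (the default)
car::Anova(m, type = "III")   # type III
```

In unbalanced designs the three types generally give different main-effect tests; in a balanced design like warpbreaks they largely coincide.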
Note that you can also use the anova command in R to do a LRT between two models, as in anova(m1, m2), so in a way you can see anova(m1) simply as a shorthand for comparing m1 with all of its smaller submodels via LRTs.
Historically, ANOVA is a method to p | What are the differences between ANOVAs and GLMs?
This is a common point of confusion, as the word ANOVA is used with different meanings in different textbooks / software packages. I'll try to sort this out a bit:
Historically, ANOVA is a method to partition out the contribution of different factors to the variation in a continuous variable. The classical ANOVA measured this contribution via the sum of squares, which corresponds to the assumptions of an lm (iid normal and so on), thus you will often read that AVOVA = normal distribution. This also implies that it doesn't matter if you calculate an ANOVA directly, or first an lm and then perform the ANOVA on the fitted lm, it's basically the same model.
This obviously doesn't generalise to GLM(Ms), but people would still like to test for the significance (and contribution) of factors or factor groups in those models. Thus, the ANOVA concept has been extended, and a more modern way to look at ANOVA is that an ANOVA is just a series of tests that add / remove predictors or predictor groups in a regression to test for their overall significance, as well as changes in some metric of fit (pseudo R2, there are multiple definitions, most based on deviance).
So, regardless of lm, glm, glmer, if you have a model with a predictor color = red, green, blue, in R
m1 <- lm/glm/glmer(res ~ color)
summary(m1)
will give you p-values for the contrasts between red, green, blue, while
am1 <- anova(m1)
summary(am1)
will essentially do a (likelihood ratio) test to decide if the predictor color improves the fit significantly (note that adding color, you are adding 2 df / parameters at once), and it will also provide a feedback about the improvement of fit.
In R there are different ANOVA functions (aov, anova, car::ANOVA) that slightly differ in their use and appropriateness for particular regressions and questions. car::ANOVA is very versatile and allows changing between type II, II ANOVA (explanation here)
Note that you can also use the anova command in R to do a LRT between two models, as in anova(m1, m2), so in a way you can see anova(m1) simply as a shorthand to compare m1 all of its smaller submodels via a LRTs. | What are the differences between ANOVAs and GLMs?
This is a common point of confusion, as the word ANOVA is used with different meanings in different textbooks / software packages. I'll try to sort this out a bit:
Historically, ANOVA is a method to p |
What are the differences between ANOVAs and GLMs?
Regression and ANOVA are general linear models, i.e. for normally distributed data. An example of a generalized linear model is logistic regression, which uses the logistic function (an S-shaped curve) to convert log-odds into the probability of belonging to a certain class (in binary classification); other non-linear link functions can be assigned in the arguments of a GLMM implementation's API.
Also, I know that it's possible to add mixed effects in a GLMM. Is it
the case for ANOVAs?
Yes, an ANOVA can be a GLMM for analysing within-group and between-group variability, e.g. having several repeated experiments (as a random effect) and comparing them to the group after any treatment (as a fixed effect).
1/2 on lagrangian equation from lasso [duplicate]
There is nothing "necessary" about the factor of $\frac{1}{2}$. It is often used, as a matter of convenience, for quadratic objectives of the form $\frac{1}{2}x^TQx + g^Tx$ so that the matrix $Q$ winds up being the Hessian of the objective function.
In this case, the authors were not consistent between these two problems. The factor of $\frac{1}{2}$ can be absorbed in (adjustment made to) $\lambda$ and result in an equivalent problem, i.e., having the same argmin (although not the same optimal objective value).
1/2 on lagrangian equation from lasso [duplicate]
The factor $\frac{1}{2}$ is quite obviously of no practical importance and is just a rescaling.
To see this, just multiply the objective function by $2$; then the lasso obviously also solves the equivalent problem $$\beta_{lasso} \in \arg\min\{\sum_{i=1}^n (y_i-\beta_0 - \sum_{j=1}^p x_{ij}\beta_j)^2 + \lambda^* \sum_{j=1}^p |\beta_j|\}$$ where $\lambda^* = 2 \lambda \geq 0$. Since the lasso is a convex optimization problem, the solutions to the two problems will be identical; moreover, there is a one-to-one relationship between $\lambda^*$ and $\lambda$. Finally, both equivalent minimization problems translate to the same constrained minimization problem (just with different $\lambda$'s): $$\min_{\beta}\sum_{i=1}^n (y_i-\beta_0 - \sum_{j=1}^p x_{ij}\beta_j)^2 \qquad s.t. \qquad \sum_{j=1}^p |\beta_j| \leq t.$$
The factor $\frac{1}{2}$ is just introduced for convenience, i.e. to simplify the writing within the theoretical analysis of the lasso.
For example, the KKT conditions are then nicely "scaled"; otherwise you would carry the factor $2$ from the derivative of the quadratic sum with you throughout your whole analysis.
What are some of the most important "early papers" on Regularization methods?
Since you're simply looking for references, here is the list:
Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198.
Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации". Doklady Akademii Nauk SSSR. 151: 501–504.. Translated in "Solution of incorrectly formulated problems and the regularization method". Soviet Mathematics. 4: 1035–1038.
Hoerl, A. E. (1962). "Application of ridge analysis to regression problems". Chemical Engineering Progress. 58: 54–59.
Arthur E. Hoerl; Robert W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal problems". Technometrics. 12 (1): 55–67. doi:10.2307/1267351. https://pdfs.semanticscholar.org/910e/d31ef5532dcbcf0bd01a980b1f79b9086fca.pdf
Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso" (PostScript). Journal of the Royal Statistical Society, Series B. 58 (1): 267–288. MR 1379242 https://statweb.stanford.edu/~tibs/lasso/lasso.pdf
Zou, H. and Hastie, T. (2005). "Regularization and variable selection via the elastic net". Journal of the Royal Statistical Society, Series B. 67: 301–320. https://web.stanford.edu/~hastie/Papers/B67.2%20%282005%29%20301-320%20Zou%20&%20Hastie.pdf
Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR | What are some of the most important "early papers" on Regularization methods?
Since you're simply looking for references, here is the list:
Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198.
Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе регуляризации". Doklady Akademii Nauk SSSR. 151: 501–504.. Translated in "Solution of incorrectly formulated problems and the regularization method". Soviet Mathematics. 4: 1035–1038.
Hoerl AE, 1962, Application of ridge analysis to regression problems, Chemical Engineering Progress, 1958, 54–59.
Arthur E. Hoerl; Robert W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal problems". Technometrics. 12 (1): 55–67. doi:10.2307/1267351. https://pdfs.semanticscholar.org/910e/d31ef5532dcbcf0bd01a980b1f79b9086fca.pdf
Tibshirani, Robert (1996). "Regression Shrinkage and Selection via the Lasso" (PostScript). Journal of the Royal Statistical Society, Series B. 58 (1): 267–288. MR 1379242 https://statweb.stanford.edu/~tibs/lasso/lasso.pdf
Zou, H. and Hastie, T. (2005). Regularization and variable
selection via the elastic net. Journal of the Royal Statistical Society, Series B. 67: pp. 301–320. https://web.stanford.edu/~hastie/Papers/B67.2%20%282005%29%20301-320%20Zou%20&%20Hastie.pdf | What are some of the most important "early papers" on Regularization methods?
Since you're simply looking for references, here is the list:
Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" [On the stability of inverse problems]. Doklady Akademii Nauk SSSR |
What are some of the most important "early papers" on Regularization methods?
A historically important paper which I believe first demonstrated that biasing estimators can result in improved estimates for ordinary linear models:
Stein, C., 1956, January. Inadmissibility of the usual estimator for
the mean of a multivariate normal distribution. In Proceedings of the
Third Berkeley symposium on mathematical statistics and probability
(Vol. 1, No. 399, pp. 197-206).
A few more modern and important penalties include SCAD and MCP:
Fan, J. and Li, R., 2001. Variable selection via nonconcave penalized
likelihood and its oracle properties. Journal of the American
statistical Association, 96(456), pp.1348-1360.
Zhang, C.H., 2010. Nearly unbiased variable selection under minimax
concave penalty. The Annals of statistics, 38(2), pp.894-942.
And some more on very good algorithms for obtaining estimates using these methods:
Breheny, P. and Huang, J., 2011. Coordinate descent algorithms for nonconvex penalized regression, with applications to biological feature selection. The annals of applied statistics, 5(1), p.232.
Mazumder, R., Friedman, J.H. and Hastie, T., 2011. Sparsenet:
Coordinate descent with nonconvex penalties. Journal of the American
Statistical Association, 106(495), pp.1125-1138.
Also worth looking at is this paper on the Dantzig selector, which is very closely related to the LASSO, but (I believe) it introduces the idea of oracle inequalities for statistical estimators, which is a pretty powerful idea:
Candes, E. and Tao, T., 2007. The Dantzig selector: Statistical estimation when p is much larger than n. The Annals of Statistics, pp.2313-2351.
How do you interpret the cross-entropy value?
Andrew Ng explains the intuition behind using cross-entropy as a cost function in his ML Coursera course under the logistic regression module, specifically at this point in time with the mathematical expression:
$$\text{Cost}\left(h_\theta(x),y\right)=\left\{
\begin{array}{l}
-\log\left(h_\theta(x)\right) \quad \quad\quad \text{if $y =1$}\\
-\log\left(1 -h_\theta(x)\right) \quad \;\text{if $y =0$}
\end{array}
\right.
$$
The idea is that with an activation function taking values between zero and one (in this case a logistic sigmoid, but clearly applicable to, for instance, a softmax function in a CNN, where the final output is a multinomial logistic), the cost in the case of a true value of $1$ ($y=1$) will decrease from infinity to zero as $h_\theta(x)\to1$: ideally we would like $h_\theta(x)$ to be $1$, predicting exactly the true value, so an activation output that gets close to it is rewarded; reciprocally, the cost will tend to infinity as the activation function tends to $0$. The opposite is true for $y=0$, with the trick of taking the logarithm of $1-h_\theta(x)$, as opposed to $h_\theta(x)$.
Here is my attempt at showing this graphically, as we limit these two functions between the vertical lines at $0$ and $1$, consistent with the output of a sigmoid function:
This can be summarized in one more succinct expression as:
$$\text{Cost}\left(h_\theta(x),y\right)=-y\log\left(h_\theta(x)\right)-(1-y) \log\left(1 - h_\theta(x)\right).$$
In the case of softmax in CNN, the cross-entropy would similarly be formulated as
$$\text{Cost}=-\sum_j \,t_j\,\log(y_j)$$
where $t_j$ stands for the target value of each class, and $y_j$ the probability assigned to it by the output.
Beyond the intuition, the introduction of cross entropy is meant to make the cost function convex.
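A small numeric sketch (my own, with made-up labels and predictions) of the summarized cost expression, written in R:

```r
# Binary cross-entropy cost, averaged over observations (hypothetical values).
cross_entropy <- function(y, h) -mean(y * log(h) + (1 - y) * log(1 - h))

y <- c(1, 0, 1, 1)          # true labels
h <- c(0.9, 0.2, 0.8, 0.7)  # predicted probabilities h_theta(x)
cross_entropy(y, h)         # low cost: predictions close to the labels
cross_entropy(y, 1 - h)     # high cost: confidently wrong predictions
```

As in the piecewise form above, a perfect prediction drives the per-observation cost to zero, while a confident but wrong one blows it up.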
Simulate linear regression with heteroscedasticity
To simulate data with a varying error variance, you need to specify the data generating process for the error variance. As has been pointed out in the comments, you did that when you generated your original data. If you have real data and want to try this, you just need to identify the function that specifies how the residual variance depends on your covariates. The standard way to do that is to fit your model, check that it is reasonable (other than the heteroscedasticity), and save the residuals. Those residuals become the Y variable of a new model. Below I have done that for your data generating process. (I don't see where you set the random seed, so these won't literally be the same data, but should be similar, and you can reproduce mine exactly by using my seed.)
set.seed(568) # this makes the example exactly reproducible
n = rep(1:100,2)
a = 0
b = 1
sigma2 = n^1.3
eps = rnorm(n,mean=0,sd=sqrt(sigma2))
y = a+b*n + eps
mod = lm(y ~ n)
res = residuals(mod)
windows()
layout(matrix(1:2, nrow=2))
plot(n,y)
abline(coef(mod), col="red")
plot(mod, which=3)
Note that R's ?plot.lm will give you a plot (cf., here) of the square root of the absolute values of the residuals, helpfully overlaid with a lowess fit, which is just what you need. (If you have multiple covariates, you might want to assess this against each covariate separately.) There is the slightest hint of a curve, but this looks like a straight line does a good job of fitting the data. So let's explicitly fit that model:
res.mod = lm(sqrt(abs(res))~fitted(mod))
summary(res.mod)
# Call:
# lm(formula = sqrt(abs(res)) ~ fitted(mod))
#
# Residuals:
# Min 1Q Median 3Q Max
# -3.3912 -0.7640 0.0794 0.8764 3.2726
#
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 1.669571 0.181361 9.206 < 2e-16 ***
# fitted(mod) 0.023558 0.003157 7.461 2.64e-12 ***
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# Residual standard error: 1.285 on 198 degrees of freedom
# Multiple R-squared: 0.2195, Adjusted R-squared: 0.2155
# F-statistic: 55.67 on 1 and 198 DF, p-value: 2.641e-12
windows()
layout(matrix(1:4, nrow=2, ncol=2, byrow=TRUE))
plot(res.mod, which=1)
plot(res.mod, which=2)
plot(res.mod, which=3)
plot(res.mod, which=5)
We needn't be concerned that the residual variance seems to be increasing in the scale-location plot for this model as well—that essentially has to happen. There is again the slightest hint of a curve, so we can try to fit a squared term and see if that helps (but it doesn't):
res.mod2 = lm(sqrt(abs(res))~poly(fitted(mod), 2))
summary(res.mod2)
# output omitted
anova(res.mod, res.mod2)
# Analysis of Variance Table
#
# Model 1: sqrt(abs(res)) ~ fitted(mod)
# Model 2: sqrt(abs(res)) ~ poly(fitted(mod), 2)
# Res.Df RSS Df Sum of Sq F Pr(>F)
# 1 198 326.87
# 2 197 326.85 1 0.011564 0.007 0.9336
If we're satisfied with this, we can now use this process as an add-on to simulate data.
set.seed(4396) # this makes the example exactly reproducible
x = n
expected.y = coef(mod)[1] + coef(mod)[2]*x
sim.errors = rnorm(length(x), mean=0,
sd=(coef(res.mod)[1] + coef(res.mod)[2]*expected.y)^2)
observed.y = expected.y + sim.errors
Note that this process is no more guaranteed to find the true data generating process than any other statistical method. You used a non-linear function to generate the error SDs, and we approximated it with a linear function. If you actually know the true data generating process a-priori (as in this case, because you simulated the original data), you might as well use it. You can decide if the approximation here is good enough for your purposes. We typically don't know the true data generating process, however, and based on Occam's razor, go with the simplest function that adequately fits the data we have given the amount of information available. You can also try splines or fancier approaches if you prefer. The bivariate distributions look reasonably similar to me, but we can see that while the estimated function largely parallels the true function, they do not overlap:
Simulate linear regression with heteroscedasticity
You need to model the heteroskedasticity. One approach is via the R package (CRAN) dglm, for dispersion generalized linear models. This is an extension of GLMs which, in addition to the usual glm, fits a second glm for the dispersion, based on the residuals from the first glm. I have no experience with such models, but they seem promising ... Here is some code:
n <- rep(1:100, 2)
a <- 0
b <- 1
sigma2 <- n^1.3
eps <- rnorm(n, mean=0, sd=sqrt(sigma2))
y <- a+b*n + eps
mod <- lm(y ~ n)
library(dglm) ### double glm's
mod2 <- dglm(y ~ n, ~ n, gaussian, ykeep=TRUE, xkeep=TRUE,
zkeep=TRUE)
### This uses log link for the dispersion part, should also try identity link ...
y2 <- simulate(mod2)
plot(n, y2$sim_1)
mod3 <- dglm(y ~ n, ~ n, gaussian, dlink="identity",
ykeep=TRUE, xkeep=TRUE, zkeep=TRUE)
### This does not work because it leads to negative weights!
The simulated plot is shown below:
The plot does look like the simulation used the estimated variance, but I'm unsure, as the simulate() function does not have methods for dglm's ...
(Another possibility to look into is the R package gamlss, which uses another approach to modelling the variance as a function of covariates.)
Distributions over sorted lists
Let's assume $r_i$, the rank of list element $i$, has a value in $\{0, 1, \ldots, n-1\}$ for a list with $n$ elements (ties can be broken randomly). Then we could define the probability of selecting $i$ to be:
$$p_i = \frac{\alpha^{r_i}}{\sum_{k=1}^n \alpha^{r_k}}$$
This is basically just an appropriately normalized truncated geometric distribution, and it is also related to the Softmax function. In the special case of $\alpha=0$, use the convention $0^0 = 1$. Note that the denominator can always be written in a simple closed-form expression. For $\alpha < 1$ it takes value $\frac{1-\alpha^n}{1-\alpha}$, and for $\alpha=1$ it takes value $n$.
With $\alpha=1$, it is clear that this just assigns equal probability to each element. As $\alpha\rightarrow 0$, this approaches giving all the probability mass to the first element.
In a list with 10 elements, the roughly exponential decrease you requested is clear with $\alpha=0.5$:
$$
p_0 \approx 0.5005 \\
p_1 \approx 0.2502 \\
p_2 \approx 0.1251 \\
p_3 \approx 0.0626 \\
p_4 \approx 0.0313 \\
p_5 \approx 0.0156 \\
p_6 \approx 0.0078 \\
p_7 \approx 0.0039 \\
p_8 \approx 0.0020 \\
p_9 \approx 0.0010
$$
The following plot shows how the probability of the first element being selected changes based on $\alpha$, using a list of length 10.
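The distribution is easy to compute directly; here is a short R sketch (mine, not from the answer) reproducing the $\alpha = 0.5$, $n = 10$ values above:

```r
# Probability of selecting the element with rank r_i = 0, ..., n-1.
rank_probs <- function(alpha, n) {
  w <- alpha^(0:(n - 1))  # unnormalized truncated-geometric weights
  w / sum(w)              # denominator = (1 - alpha^n) / (1 - alpha)
}
round(rank_probs(0.5, 10), 4)  # matches the p_0, ..., p_9 listed above
rank_probs(1, 10)              # alpha = 1 gives the uniform distribution
```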
Distributions over sorted lists
I'll try to build an example from first principles.
Let's take three distributions as our building blocks:
P is the distribution assigning probability one to the first element of the list, zero to all others.
E is the distribution assigning probability $\frac{1}{2}$ to the first element of the list, $\frac{1}{4}$ to the next, and so on. Since the list is finite, these will not sum to $1$, but we can normalize to get a probability distribution.
U is the uniform distribution over the list.
Now we want to take a one-parameter family of positive convex combinations of these distributions
$$ \alpha(t) P + \beta(t) E + \gamma(t) U $$
where $\alpha(t) + \beta(t) + \gamma(t) = 1$ for all $t \in [0, 1]$, with the additional property that $\alpha(0) = 1$ and $\gamma(1) = 1$.
Geometrically, we want $(\alpha(t), \beta(t), \gamma(t))$ to trace out a curve in the equilateral triangle spanned between the points $(1, 0, 0), (0, 1, 0), (0, 0, 1)$ which starts at the first corner and ends at the last. Additionally, since we want the distribution to look "exponential" in the middle times, we would like the curve to occupy the interior of the triangle at times $t \in (0, 1)$.
Here's an option for the curve:
$$ (1 - t(1-t)) \left(1 - t, 0, t \right) + t(1 - t) \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right) $$
I constructed this working backwards from the properties we would like. The curve $(1-t, 0, t)$ runs along the edge of the triangle between the starting and ending vertices. The rest of the formula is just a convex sum of this edge curve and the single point $\left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right)$, which pushes the curve along the edge into the interior at times $t \in (0, 1)$.
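A quick sketch (mine, under the construction above) evaluating the curve's weights in R confirms the endpoint and normalization properties:

```r
# Weights (alpha(t), beta(t), gamma(t)) on the distributions P, E, U.
mix_weights <- function(t) {
  (1 - t * (1 - t)) * c(1 - t, 0, t) + t * (1 - t) * rep(1/3, 3)
}
mix_weights(0)         # c(1, 0, 0): all mass on P at t = 0
mix_weights(1)         # c(0, 0, 1): all mass on U at t = 1
sum(mix_weights(0.4))  # weights sum to 1 for every t
```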
Let's take three distributions as our building blocks:
P is the distribution assigning probability one to the first element of the list, zero to al | Distributions over sorted lists
I'll try to build an example from first principles.
Let's take three distributions as our building blocks:
P is the distribution assigning probability one to the first element of the list, zero to all others.
E is the distribution assigning probability $\frac{1}{2}$ to the first element of the list, $\frac{1}{4}$ to the next, and so on. Since the list is finite, these will not sum to $1$, but we can normalize to get a probability distribution.
U is the uniform distribution over the list.
Now we want to take a one-parameter family of positive convex combinations of these distributions
$$ \alpha(t) P + \beta(t) E + \gamma(t) U $$
where $\alpha(t) + \beta(t) + \gamma(t) = 1$ for all $t \in [0, 1]$, with the additional property that $\alpha(0) = 1$ and $\gamma(1) = 1$.
Geometrically, we want $(\alpha(t), \beta(t), \gamma(t))$ to trace out a curve in the equilateral triangle spanned between the points $(1, 0, 0), (0, 1, 0), (0, 0, 1)$ which starts at the first corner, and ends and the last. Additionally, since we want the distribution to look "exponential" in the middle times, we would like the curve to occupy the interior of the triangle at times $t \in (0, 1)$.
Here's an option for the curve:
$$ (1 - t(1-t)) \left(1 - t, 0, t \right) + t(1 - t) \left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right) $$
I constructed this working backwards from the properties we would like. The curve $(1-t, 0, t)$ runs along the edge of the triangle between the starting and ending verticies. The rest of the formula is just a convex sum of this edge curve and the single point $\left(\frac{1}{3}, \frac{1}{3}, \frac{1}{3} \right)$, which pushes the curve along the edge into the interior at times $t \in (0, 1)$. | Distributions over sorted lists
I'll try to build an example from first principles.
Let's take three distributions as our building blocks:
P is the distribution assigning probability one to the first element of the list, zero to al |
28,060 | Linear model where the data has uncertainty, using R
This type of model is actually much more common in certain branches of science (e.g. physics) and engineering than "normal" linear regression. So, in physics tools like ROOT, doing this type of fit is trivial, while linear regression is not natively implemented! Physicists tend to call this just a "fit" or a chi-square minimizing fit.
The normal linear regression model assumes that there is an overall variance $\sigma^2$ attached to every measurement. It then maximizes the likelihood
$$
L \propto \prod_i e^{-\frac{1}{2} \left( \frac{y_i-(ax_i+b)}{\sigma} \right)^2}
$$
or equivalently its logarithm
$$
\log(L) = \mathrm{constant} - \frac{1}{2\sigma^2} \sum_i (y_i-(ax_i+b))^2
$$
Hence the name least-squares -- maximizing the likelihood is the same as minimizing the sum of squares, and $\sigma$ is an unimportant constant, as long as it is constant. With measurements that have different known uncertainties, you'll want to maximize
$$
L \propto \prod_i e^{-\frac{1}{2} \left( \frac{y_i-(ax_i+b)}{\sigma_i} \right)^2}
$$
or equivalently its logarithm
$$
\log(L) = \mathrm{constant} - \frac{1}{2} \sum \left( \frac{y_i-(ax_i+b)}{\sigma_i} \right)^2
$$
So, you actually want to weight the measurements by the inverse variance $1/\sigma_i^2$, not the variance.
This makes sense -- a more accurate measurement has smaller uncertainty and should be given more weight. Note that if this weight is constant, it still factors out of the sum. So, it doesn't affect the estimated values, but it should affect the standard errors, taken from the second derivative of $\log(L)$.
However, here we come to another difference between physics/science and statistics at large. Typically in statistics, you expect that a correlation might exist between two variables, but rarely will it be exact. In physics and other sciences, on the other hand, you often expect a correlation or relationship to be exact, if only it weren't for pesky measurement errors (e.g. $F=ma$, not $F=ma+\epsilon$). Your problem seems to fall more into the physics/engineering case. Consequently, lm's interpretation of the uncertainties attached to your measurements and of the weights isn't quite the same as what you want. It'll take the weights, but it still thinks there is an overall $\sigma^2$ to account for regression error, which is not what you want -- you want your measurement errors to be the only kind of error there is. (The end result of lm's interpretation is that only the relative values of the weights matter, which is why the constant weights you added as a test had no effect). The question and answer here have more details:
lm weights and the standard error
There are a couple of possible solutions given in the answers there. In particular, an anonymous answer there suggests using
vcov(mod)/summary(mod)$sigma^2
Basically, lm scales the covariance matrix based on its estimated $\sigma$, and you want to undo this. You can then get the information you want from the corrected covariance matrix. Try this, but try to double-check it if you can with manual linear algebra. And remember that the weights should be the inverse variances.
EDIT
If you're doing this sort of thing a lot you might consider using ROOT (which seems to do this natively while lm and glm do not). Here's a brief example of how to do this in ROOT. First off, ROOT can be used via C++ or Python, and it's a huge download and installation. You can try it in the browser using a Jupyter notebook, following the link here, choosing "Binder" on the right, and "Python" on the left.
import ROOT
from array import array
import math
x = range(1,11)
xerrs = [0]*10
y = [131.4,227.1,245,331.2,386.9,464.9,476.3,512.2,510.8,532.9]
yerrs = [math.sqrt(i) for i in y]
graph = ROOT.TGraphErrors(len(x),array('d',x),array('d',y),array('d',xerrs),array('d',yerrs))
graph.Fit("pol2","S")
c = ROOT.TCanvas("test","test",800,600)
graph.Draw("AP")
c.Draw()
I've put in square roots as the uncertainties on the $y$ values. The output of the fit is
Welcome to JupyROOT 6.07/03
****************************************
Minimizer is Linear
Chi2 = 8.2817
NDf = 7
p0 = 46.6629 +/- 16.0838
p1 = 88.194 +/- 8.09565
p2 = -3.91398 +/- 0.78028
and a nice plot is produced:
The ROOT fitter can also handle uncertainties in the $x$ values, which would probably require even more hacking of lm. If anyone knows a native way to do this in R, I'd be interested to learn it.
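As a cross-check on the ROOT numbers above (my addition, not part of the original answer): the same inverse-variance weighted fit can be computed from the weighted normal equations with plain numpy. Because the measurement variances are treated as exactly known, there is no overall scale to estimate and the parameter covariance is simply $(X^T W X)^{-1}$.

```python
import numpy as np

# Same data as the ROOT example, with the variance on each y taken to be y itself
x = np.arange(1, 11, dtype=float)
y = np.array([131.4, 227.1, 245.0, 331.2, 386.9,
              464.9, 476.3, 512.2, 510.8, 532.9])
w = 1.0 / y                                  # inverse-variance weights

# Design matrix for the quadratic model p0 + p1*x + p2*x^2
X = np.column_stack([np.ones_like(x), x, x**2])

# Weighted normal equations: beta = (X' W X)^{-1} X' W y
XtWX = X.T @ (w[:, None] * X)
beta = np.linalg.solve(XtWX, X.T @ (w * y))

# With known measurement variances, the covariance of beta is (X' W X)^{-1}
se = np.sqrt(np.diag(np.linalg.inv(XtWX)))

chi2 = np.sum(w * (y - X @ beta) ** 2)       # the fit's chi-square

print(beta)   # approx [46.66, 88.19, -3.91], matching ROOT's p0, p1, p2
print(se)     # approx [16.08, 8.10, 0.78]
print(chi2)   # approx 8.28
```

This reproduces the chi-square and the parameter estimates with uncertainties without any covariance-matrix rescaling tricks.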
SECOND EDIT
The other answer from the same previous question by @Wolfgang gives an even better solution: the rma tool from the metafor package (I originally interpreted text in that answer to mean that it did not calculate the intercept, but that's not the case). Taking the variances in the measurements y to be simply y:
> rma(y~x+I(x^2),y,method="FE")
Fixed-Effects with Moderators Model (k = 10)
Test for Residual Heterogeneity:
QE(df = 7) = 8.2817, p-val = 0.3084
Test of Moderators (coefficient(s) 2,3):
QM(df = 2) = 659.4641, p-val < .0001
Model Results:
estimate se zval pval ci.lb ci.ub
intrcpt 46.6629 16.0838 2.9012 0.0037 15.1393 78.1866 **
x 88.1940 8.0956 10.8940 <.0001 72.3268 104.0612 ***
I(x^2) -3.9140 0.7803 -5.0161 <.0001 -5.4433 -2.3847 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
This is definitely the best pure R tool for this type of regression that I've found.
28,061 | Best way to evaluate PDF estimation methods
A2: You could test your methods in 1D on the following set of benchmarks.
28,062 | Best way to evaluate PDF estimation methods
A1. This sounds like a sensible plan to me. Just to mention a couple of points. You'll want to test with different error metrics ($L^p$, K-L divergence, etc.) since methods will perform differently depending on the loss function. Also, you'll want to test for different numbers of samples. Finally, many density estimation methods perform notoriously badly near discontinuities/boundaries, so be sure to include truncated pdfs in your set.
A2. Are you interested only in 1-D pdfs or is your plan to test the multivariate case? As for a benchmark suite of pdfs, I asked a somewhat related question in the past with the goal of testing MCMC algorithms, but I did not find anything like a well-established set of pdfs.
If you have plenty of time and computational resources, you might consider performing some sort of adversarial testing of your idea:
Define a very flexible parametric family of pdfs (e.g., a large mixture of a number of known pdfs), and move around the parameter space of the mixture via some nonconvex global optimization method (*) so as to minimize performance of your method and maximize performance of some other state-of-the-art density estimation method (and possibly vice versa). This will be a strong test of the strength/weakness of your method.
Finally, the requirement of being better than all other methods is an excessively high bar; there must be some no free lunch principle at work (any algorithm has some underlying prior assumption, such as smoothness, length scale, etc.). In order for your method to be a valuable contribution, you only need to show that there are regimes/domains of some general interest in which your algorithm works better (the adversarial test above can help you find/define such a domain).
(*) Since your performance metric is stochastic (you will be evaluating it via Monte Carlo sampling), you may also want to check this answer about optimization of noisy, costly objective functions.
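To make the evaluation loop concrete, here is a minimal Python sketch (my illustration, not from the answer) of the Monte-Carlo error estimate described in A1, using scipy's Gaussian KDE as a stand-in density estimator and a truncated (exponential) pdf, whose boundary at zero is exactly where a vanilla KDE struggles:

```python
import numpy as np
from scipy.stats import gaussian_kde, expon

rng = np.random.default_rng(0)
grid = np.linspace(1e-3, 8.0, 2000)          # evaluation grid for the L^2 error
dx = grid[1] - grid[0]

def ise(est_pdf, true_pdf):
    """Integrated squared error, approximated by a Riemann sum on the grid."""
    return np.sum((est_pdf(grid) - true_pdf(grid)) ** 2) * dx

results = {}
for n in (50, 500, 5000):                    # vary the sample size, per A1
    errs = [ise(gaussian_kde(expon.rvs(size=n, random_state=rng)), expon.pdf)
            for _ in range(20)]              # Monte Carlo replications
    results[n] = np.mean(errs)
    print(n, results[n])
```

Averaging the error over replications (and over a set of such benchmark pdfs, loss functions, and sample sizes) gives exactly the kind of comparison table the question proposes; the error here decreases with $n$ more slowly than it would for a smooth unbounded target because of the boundary bias at zero.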
28,063 | Best way to evaluate PDF estimation methods
Q1: Are there any improvements over my plan above?
That depends. Mixture distribution residuals often result from doing silly things like specifying an unnecessary mixture distribution as a data model to begin with. So, my own experience suggests at least specifying as many mixture distribution terms in the output as there are in the model. Moreover, the mixture PDFs in the output are unlike the PDFs in the model. The Mathematica default search includes mixture distributions with two terms; a larger number can be specified.
Q2: Is there already a comprehensive list of many analytically defined true PDFs with varying difficulties (including very difficult ones) that I can re-use here?
This is a list from Mathematica's FindDistribution routine:
Possible continuous distributions for TargetFunctions are: BetaDistribution, CauchyDistribution, ChiDistribution, ChiSquareDistribution, ExponentialDistribution, ExtremeValueDistribution, FrechetDistribution, GammaDistribution, GumbelDistribution, HalfNormalDistribution, InverseGaussianDistribution, LaplaceDistribution, LevyDistribution, LogisticDistribution, LogNormalDistribution, MaxwellDistribution, NormalDistribution, ParetoDistribution, RayleighDistribution, StudentTDistribution, UniformDistribution, WeibullDistribution, HistogramDistribution.
Possible discrete distributions for TargetFunctions are: BenfordDistribution, BinomialDistribution, BorelTannerDistribution, DiscreteUniformDistribution, GeometricDistribution, LogSeriesDistribution, NegativeBinomialDistribution, PascalDistribution, PoissonDistribution, WaringYuleDistribution, ZipfDistribution, HistogramDistribution, EmpiricalDistribution.
The internal information criterion uses a Bayesian information criterion together with priors over TargetFunctions.
28,064 | Why do p-values change in significance when changing the order of covariates in the aov model?
The problem comes from the way that aov() does its default significance testing. It uses what is called "Type I" ANOVA analysis, in which testing is done in the order that the variables are specified in your model. So in the first example, it determines how much variance is explained by sex and tests its significance, then what portion of the remaining variance is explained by DMRT3 and tests its significance in terms of that remaining variance, and so forth. In the second example, DMRT3 is only evaluated after Voltsec, Autosec, and sex, in that order, so there is less variance remaining for DMRT3 to explain.
If two predictor variables are correlated then the first one entered into the model will get full "credit," leaving less variance to be "explained by" the second one, which thus may appear less "statistically significant" than the first even if it is not, functionally. This question and its answer explain the different Types of ANOVA analyses.
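To see this order dependence numerically, here is a small simulation (in Python/numpy rather than R, with made-up data rather than the questioner's) of sequential Type I sums of squares for two correlated predictors:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
x1 = rng.normal(size=n)
x2 = x1 + 0.5 * rng.normal(size=n)           # x2 is correlated with x1
y = x1 + x2 + rng.normal(size=n)             # both predictors matter equally

def sequential_ss(y, predictors):
    """Type I sums of squares: each predictor is credited with the drop in
    residual SS when it is added after the ones entered before it."""
    X = np.ones((len(y), 1))
    rss_prev = np.sum((y - y.mean()) ** 2)
    credits = []
    for p in predictors:
        X = np.column_stack([X, p])
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        rss = np.sum((y - X @ beta) ** 2)
        credits.append(rss_prev - rss)
        rss_prev = rss
    return credits

ss_12 = sequential_ss(y, [x1, x2])           # x1 entered first
ss_21 = sequential_ss(y, [x2, x1])           # x2 entered first
print(ss_12, ss_21)
```

The total explained sum of squares is identical either way, but each predictor's individual credit (and hence its Type I F statistic and p-value) depends on the order of entry.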
One way to get around this is to extract yourself from the strictures of classical ANOVA and use a simple linear model, with lm() in R, rather than aov(). This effectively analyzes all predictors in parallel, "correcting for" all predictors at once. In that case, two correlated predictors might end up having large standard errors of their estimated regression coefficients, and their coefficients might differ among different samples from the population, but at least the order you enter the variables into the model specification won't matter.
If your response variable is some type of count variable, as its name Starts suggests, then you probably shouldn't be using ANOVA anyway as residuals are unlikely to be normally distributed, as the p-value interpretation requires. Count variables are better handled with generalized linear models (e.g., glm() in R), which can be thought of as a generalization of lm() for other types of residual error structures.
28,065 | Robustness of correlation test to non-normality
The Edgell and Noon paper got it wrong.
Background
The paper describes result from simulated datasets $(x_i,y_i)$ with independent coordinates drawn from Normal, Exponential, Uniform, and Cauchy distributions. (Although it reports two "forms" of the Cauchy, they differed only in how the values were generated, which is an irrelevant distraction.) The dataset sizes $n$ ("sample size") ranged from $5$ to $100$. For each dataset the Pearson sample correlation coefficient $r$ was computed, converted into a $t$ statistic via
$$t = r \sqrt{\frac{n-2}{1-r^2}},$$
(see their Equation (1)), and referred it to a Student $t$ distribution with $n-2$ degrees of freedom using a two-tailed calculation. The authors conducted $10,000$ independent simulations for each of the $10$ pairs of these distributions and each sample size, producing $10,000$ $t$ statistics in each. Finally, they tabulated the proportion of $t$ statistics that appeared to be significant at the $\alpha=0.05$ level: that is, the $t$ statistics in the outer $\alpha/2 = 0.025$ tails of the Student $t$ distribution.
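The $r \to t \to p$ conversion used throughout the study takes only a few lines; this Python sketch (mine, using scipy) is mathematically equivalent to what scipy.stats.pearsonr reports:

```python
import numpy as np
from scipy import stats

def corr_pvalue(x, y):
    """Two-sided p-value for zero correlation via the t transform above."""
    n = len(x)
    r = np.corrcoef(x, y)[0, 1]
    t = r * np.sqrt((n - 2) / (1 - r ** 2))
    return 2 * stats.t.sf(abs(t), df=n - 2)

rng = np.random.default_rng(0)
x, y = rng.normal(size=(2, 30))
print(corr_pvalue(x, y))          # agrees with scipy.stats.pearsonr(x, y)[1]
```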
Discussion
Before we proceed, notice that this study looks only at how robust a test of zero correlation might be to non-normality. That's not an error, but it's an important limitation to keep in mind.
There is an important strategic error in this study and a glaring technical error.
The strategic error is that these distributions aren't that non-normal. Neither the Normal nor the Uniform distributions are going to cause any trouble for correlation coefficients: the former by design and the latter because it cannot produce outliers (which is what causes the Pearson correlation not to be robust). (The Normal had to be included as a reference, though, to make sure everything was working properly.) None of these four distributions are good models for common situations where the data might be "contaminated" by values from a distribution with a different location altogether (such as when the subjects really come from distinct populations, unknown to the experimenter). The most severe test comes from the Cauchy but, because it is symmetric, does not probe the most likely sensitivity of the correlation coefficient to one-sided outliers.
The technical error is that the study did not examine the actual distributions of the p-values: it looked solely at the two-sided rates for $\alpha=0.05$.
(Although we can excuse much that happened 32 years ago due to limitations in computing technology, people were routinely examining contaminated distributions, slash distributions, Lognormal distributions, and other more serious forms of non-normality; and it has been routine for even longer to explore a wider range of test sizes rather than limiting studies to just one size.)
Correcting the Errors
Below, I provide R code that will completely reproduce this study (in less than a minute of computation). But it does something more: it displays the sample distributions of the p-values. This is quite revealing, so let's just jump in and look at those histograms.
First, here are histograms of large samples from the three distributions I looked at, so you can get a sense of how they are non-Normal.
The Exponential is skewed (but not terribly so); the Cauchy has long tails (in fact, some values out into the thousands were excluded from this plot so you can see its center); the Contaminated is a standard Normal with a 5% mixture of a standard Normal shifted out to $10$. They represent forms of non-Normality frequently encountered in data.
Because Edgell and Noon tabulated their results in rows corresponding to pairs of distributions and columns for sample sizes, I did the same. We don't need to look at the full range of sample sizes they used: the smallest ($5$), largest ($100$), and one intermediate value ($20$) will do fine. But instead of tabulating tail frequencies, I have plotted the distributions of the p-values.
Ideally, the p-values will have uniform distributions: the bars should all be close to a constant height of $1$, shown with a dashed gray line in each plot. In these plots there are 40 bars, at a constant spacing of $0.025$. A study of $\alpha=0.05$ will focus on the average height of the leftmost and rightmost bars (the "extreme bars"). Edgell and Noon compared these averages to the ideal frequency of $0.05$.
Because the departures from uniformity are prominent, not much commentary is needed, but before I provide some, look for yourself at the rest of the results. You can identify the sample sizes in the titles--they all run $5-20-100$ across each row--and you can read the pairs of distributions in the subtitles beneath each graphic.
What should strike you most is how different the extreme bars are from the rest of the distribution. A study of $\alpha=0.05$ is extraordinarily special! It doesn't really tell us how well the test will perform at other sizes; in fact, the results for $0.05$ are so special that they will deceive us concerning the characteristics of this test.
Second, notice that when the Contaminated distribution is involved--with its tendency to produce only high outliers--the distribution of p-values becomes asymmetric. One bar (which would be used for testing for positive correlation) is extremely high while its counterpart at the other end (which would be used for testing for negative correlation) is extremely low. On average, though, they nearly balance out: two huge errors cancel!
It is particularly alarming that the problems tend to get worse with larger sample sizes.
I also have some concerns about the accuracy of the results. Here are the summaries from $100,000$ iterations, ten times more than Edgell and Noon did:
5 20 100
Exponential-Exponential 0.05398 0.05048 0.04742
Exponential-Cauchy 0.05864 0.05780 0.05331
Exponential-Contaminated 0.05462 0.05213 0.04758
Cauchy-Cauchy 0.07256 0.06876 0.04515
Cauchy-Contaminated 0.06207 0.06366 0.06045
Contaminated-Contaminated 0.05637 0.06010 0.05460
Three of these--the ones not involving the Contaminated distribution--reproduce parts of the paper's table. Although they lead qualitatively to the same (bad) conclusions (namely, that these frequencies look pretty close to the target of $0.05$) they differ enough to call into question either my code or the paper's results. (The precision in the paper will be approximately $\sqrt{\alpha(1-\alpha)/n} \approx 0.0022$, but some of these results differ from the paper's by many times that.)
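The quoted precision is just the binomial standard error of a simulated rejection rate; for instance:

```python
import math

# Monte Carlo (binomial) standard error of an estimated rejection rate
# based on n simulated tests at level alpha.
alpha, n = 0.05, 10_000
mc_se = math.sqrt(alpha * (1 - alpha) / n)
print(round(mc_se, 4))  # 0.0022
```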
Conclusions
By failing to include non-Normal distributions that are likely to cause problems for correlation coefficients, and by not examining the simulations in detail, Edgell and Noon failed to identify a clear lack of robustness and missed an opportunity to characterize its nature. That they found robustness for two-sided tests at the $\alpha=0.05$ level appears to be almost purely an accident, an anomaly that is not shared by tests at other levels.
R Code
#
# Create one row (or cell) of the paper's table.
#
simulate <- function(F1, F2, sample.size, n.iter=1e4, alpha=0.05, ...) {
p <- rep(NA, length(sample.size))
i <- 0
for (n in sample.size) {
#
# Create the data.
#
x <- array(cbind(matrix(F1(n*n.iter), nrow=n),
matrix(F2(n*n.iter), nrow=n)), dim=c(n, n.iter, 2))
#
# Compute the p-values.
#
r.hat <- apply(x, 2, cor)[2, ]
t.stat <- r.hat * sqrt((n-2) / (1 - r.hat^2))
p.values <- pt(t.stat, n-2)
#
# Plot the p-values.
#
hist(p.values, breaks=seq(0, 1, 1/40), freq=FALSE,
xlab="p-values",
main=paste("Sample size", n), ...)
abline(h=1, lty=3, col="#a0a0a0")
#
# Store the frequency of p-values less than `alpha` (two-sided).
#
i <- i+1
p[i] <- mean(1 - abs(1 - 2*p.values) <= alpha)
}
return(p)
}
#
# The paper's distributions.
#
distributions <- list(N=rnorm,
U=runif,
E=rexp,
C=function(n) rt(n, 1)
)
#
# A slightly better set of distributions.
#
# distributions <- list(Exponential=rexp,
# Cauchy=function(n) rt(n, 1),
# Contaminated=function(n) rnorm(n, rbinom(n, 1, 0.05)*10))
#
# Depict the distributions.
#
par(mfrow=c(1, length(distributions)))
for (s in names(distributions)) {
x <- distributions[[s]](1e5)
x <- x[abs(x) < 20]
hist(x, breaks=seq(min(x), max(x), length.out=60),main=s, xlab="Value")
}
#
# Conduct the study.
#
set.seed(17)
sample.sizes <- c(5, 10, 15, 20, 30, 50, 100)
#sample.sizes <- c(5, 20, 100)
results <- matrix(numeric(0), nrow=0, ncol=length(sample.sizes))
colnames(results) <- sample.sizes
par(mfrow=c(2, length(sample.sizes)))
s <- names(distributions)
for (i1 in 1:length(distributions)) {
s1 <- s[i1]
F1 <- distributions[[s1]]
for (i2 in i1:length(distributions)) {
s2 <- s[i2]
F2 <- distributions[[s2]]
title <- paste(s1, s2, sep="-")
p <- simulate(F1, F2, sample.sizes, sub=title)
p <- matrix(p, nrow=1)
rownames(p) <- title
results <- rbind(results, p)
}
}
#
# Display the table.
#
print(results)
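One detail of `simulate()` worth checking separately is the conversion of the stored lower-tail probabilities into two-sided p-values via `1 - abs(1 - 2*p)`. A Python sketch confirming the identity (my addition, not part of the original R code):

```python
# The R code keeps p = pt(t, df), a lower-tail probability, and converts it
# with 1 - abs(1 - 2*p).  That expression equals 2 * min(p, 1 - p), the
# usual two-sided p-value for a statistic with a symmetric null distribution.
def two_sided(p):
    return 1 - abs(1 - 2 * p)

for p in [0.001, 0.2, 0.5, 0.8, 0.999]:
    assert abs(two_sided(p) - 2 * min(p, 1 - p)) < 1e-12

print(round(two_sided(0.975), 3))  # 0.05: exactly on the two-sided 5% boundary
```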
Reference
Stephen E. Edgell and Sheila M. Noon, Effect of Violation of Normality on the $t$ Test of the Correlation Coefficient. Psychological Bulletin, 1984, Vol. 95, No. 3, 576-583.
Background
The paper describes result from simulated datasets $(x_i,y_i)$ with independent coordinates drawn from Normal, Exponential, Uniform, and Cauchy distr | Robustness of correlation test to non-normality
The Edgell and Noon paper got it wrong.
Background
The paper describes result from simulated datasets $(x_i,y_i)$ with independent coordinates drawn from Normal, Exponential, Uniform, and Cauchy distributions. (Although it reports two "forms" of the Cauchy, they differed only in how the values were generated, which is an irrelevant distraction.) The dataset sizes $n$ ("sample size") ranged from $5$ to $100$. For each dataset the Pearson sample correlation coefficient $r$ was computed, converted into a $t$ statistic via
$$t = r \sqrt{\frac{n-2}{1-r^2}},$$
(see Equation (1)), and referred that to a Student $t$ distribution with $n-2$ degrees of freedom using a two-tailed calculation. The authors conducted $10,000$ independent simulations for each of the $10$ pairs of these distribution and each sample size, producing $10,000$ $t$ statistics in each. Finally, they tabulated the proportion of $t$ statistics that appeared to be significant at the $\alpha=0.05$ level: that is, the $t$ statistics in the outer $\alpha/2 = 0.025$ tails of the Student $t$ distribution.
Discussion
Before we proceed, notice that this study looks only at how robust a test of zero correlation might be to non-normality. That's not an error, but it's an important limitation to keep in mind.
There is an important strategic error in this study and a glaring technical error.
The strategic error is that these distributions aren't that non-normal. Neither the Normal nor the Uniform distributions are going to cause any trouble for correlation coefficients: the former by design and the latter because it cannot produce outliers (which is what causes the Pearson correlation not to be robust). (The Normal had to be included as a reference, though, to make sure everything was working properly.) None of these four distributions are good models for common situations where the data might be "contaminated" by values from a distribution with a different location altogether (such as when the subjects really come from distinct populations, unknown to the experimenter). The most severe test comes from the Cauchy but, because it is symmetric, does not probe the most likely sensitivity of the correlation coefficient to one-sided outliers.
The technical error is that the study did not examine the actual distributions of the p-values: it looked solely at the two-sided rates for $\alpha=0.05$.
(Although we can excuse much that happened 32 years ago due to limitations in computing technology, people were routinely examining contaminated distributions, slash distributions, Lognormal distributions, and other more serious forms of non-normality; and it has been routine for even longer to explore a wider range of test sizes rather than limiting studies to just one size.)
Correcting the Errors
Below, I provide R code that will completely reproduce this study (in less than a minute of computation). But it does something more: it displays the sample distributions of the p-values. This is quite revealing, so let's just jump in and look at those histograms.
First, here are histograms of large samples from the three distributions I looked at, so you can get a sense of how they are non-Normal.
The Exponential is skewed (but not terribly so); the Cauchy has long tails (in fact, some values out into the thousands were excluded from this plot so you can see its center); the Contaminated is a standard Normal with a 5% mixture of a standard Normal shifted out to $10$. They represent forms of non-Normality frequently encountered in data.
Because Edgell and Noon tabulated their results in rows corresponding to pairs of distributions and columns for sample sizes, I did the same. We don't need to look at the full range of sample sizes they used: the smallest ($5$), largest ($100$), and one intermediate value ($20$) will do fine. But instead of tabulating tail frequencies, I have plotted the distributions of the p-values.
Ideally, the p-values will have uniform distributions: the bars should all be close to a constant height of $1$, shown with a dashed gray line in each plot. In these plots there are 40 bars, at a constant spacing of $0.025$ A study of $\alpha=0.05$ will focus on the average height of the leftmost and rightmost bar (the "extreme bars"). Edgell and Noon compared these averages to the ideal frequency of $0.05$.
Because the departures from uniformity are prominent, not much commentary is needed, but before I provide some, look for yourself at the rest of the results. You can identify the sample sizes in the titles--they all run $5-20-100$ across each row--and you can read the pairs of distributions in the subtitles beneath each graphic.
What should strike you most is how different the extreme bars are from the rest of the distribution. A study of $\alpha=0.05$ is extraordinarily special! It doesn't really tell us how well the test will perform a other sizes; in fact, the results for $0.05$ are so special that they will deceive us concerning the characteristics of this test.
Second, notice that when the Contaminated distribution is involved--with its tendency to produce only high outliers--the distribution of p-values becomes asymmetric. One bar (which would be used for testing for positive correlation) is extremely high while its counterpart at the other end (which would be used for testing for negative correlation) is extremely low. On average, though, they nearly balance out: two huge errors cancel!
It is particularly alarming that the problems tend to get worse with larger sample sizes.
I also have some concerns about the accuracy of the results. Here are the summaries from $100,000$ iterations, ten times more than Edgell and Noon did:
5 20 100
Exponential-Exponential 0.05398 0.05048 0.04742
Exponential-Cauchy 0.05864 0.05780 0.05331
Exponential-Contaminated 0.05462 0.05213 0.04758
Cauchy-Cauchy 0.07256 0.06876 0.04515
Cauchy-Contaminated 0.06207 0.06366 0.06045
Contaminated-Contaminated 0.05637 0.06010 0.05460
Three of these--the ones not involving the Contaminated distribution--reproduce parts of the paper's table. Although they lead qualitatively to the same (bad) conclusions (namely, that these frequencies look pretty close to the target of $0.05$) they differ enough to call into question either my code or the paper's results. (The precision in the paper will be approximately $\sqrt{\alpha(1-\alpha)/n} \approx 0.0022$, but some of these results differ from the paper's by many times that.)
Conclusions
By failing to include non-Normal distributions that are likely to cause problems for correlation coefficients, and by not examining the simulations in detail, Edgell and Noon failed to identify a clear lack of robustness and missed an opportunity to characterize its nature. That they found robustness for two-sided tests at the $\alpha=0.05$ level appears to be almost purely an accident, an anomaly that is not shared by tests at other levels.
R Code
#
# Create one row (or cell) of the paper's table.
#
simulate <- function(F1, F2, sample.size, n.iter=1e4, alpha=0.05, ...) {
p <- rep(NA, length(sample.size))
i <- 0
for (n in sample.size) {
#
# Create the data.
#
x <- array(cbind(matrix(F1(n*n.iter), nrow=n),
matrix(F2(n*n.iter), nrow=n)), dim=c(n, n.iter, 2))
#
# Compute the p-values.
#
r.hat <- apply(x, 2, cor)[2, ]
t.stat <- r.hat * sqrt((n-2) / (1 - r.hat^2))
p.values <- pt(t.stat, n-2)
#
# Plot the p-values.
#
hist(p.values, breaks=seq(0, 1, 1/40), freq=FALSE,
xlab="p-values",
main=paste("Sample size", n), ...)
abline(h=1, lty=3, col="#a0a0a0")
#
# Store the frequency of p-values less than `alpha` (two-sided).
#
i <- i+1
p[i] <- mean(1 - abs(1 - 2*p.values) <= alpha)
}
return(p)
}
#
# The paper's distributions.
#
distributions <- list(N=rnorm,
U=runif,
E=rexp,
C=function(n) rt(n, 1)
)
#
# A slightly better set of distributions.
#
# distributions <- list(Exponential=rexp,
# Cauchy=function(n) rt(n, 1),
# Contaminated=function(n) rnorm(n, rbinom(n, 1, 0.05)*10))
#
# Depict the distributions.
#
par(mfrow=c(1, length(distributions)))
for (s in names(distributions)) {
x <- distributions[[s]](1e5)
x <- x[abs(x) < 20]
hist(x, breaks=seq(min(x), max(x), length.out=60),main=s, xlab="Value")
}
#
# Conduct the study.
#
set.seed(17)
sample.sizes <- c(5, 10, 15, 20, 30, 50, 100)
#sample.sizes <- c(5, 20, 100)
results <- matrix(numeric(0), nrow=0, ncol=length(sample.sizes))
colnames(results) <- sample.sizes
par(mfrow=c(2, length(sample.sizes)))
s <- names(distributions)
for (i1 in 1:length(distributions)) {
s1 <- s[i1]
F1 <- distributions[[s1]]
for (i2 in i1:length(distributions)) {
s2 <- s[i2]
F2 <- distributions[[s2]]
title <- paste(s1, s2, sep="-")
p <- simulate(F1, F2, sample.sizes, sub=title)
p <- matrix(p, nrow=1)
rownames(p) <- title
results <- rbind(results, p)
}
}
#
# Display the table.
#
print(results)
Reference
Stephen E. Edgell and Sheila M. Noon, Effect of Violation of Normality on the $t$ Test of the Correlation Coefficient. Psychological Bulletin 1984, Vol., 95, No. 3, 576-583. | Robustness of correlation test to non-normality
28,066 | Robustness of correlation test to non-normality | Since whuber has given a comprehensive analysis of the behavior of the distributions of p-values under a null of zero-correlation, I'll focus my comments elsewhere.
Robustness in relation to hypothesis tests doesn't only mean level-robustness (getting close to the desired significance level). Besides only looking at one level and only at two-sided tests, the study appears to have ignored impact on power. There's not much point saying that you're keeping close to a 5% rejection rate under the null if you also end up with a 5% rejection rate* for large deviations from the null.
* (or maybe worse, if the test ends up biased under the non-normal distributions for some alternatives)
Investigating power is considerably more involved. For a start, with these distributions you'd have to be looking at specifying some copula or copulas, presumably with close to a linear relationship in the untransformed variables, and certainly with close to some specified value for the population correlation coefficient. You'll have to look at several effect sizes (at least), and possibly both negative and positive dependence.
Nevertheless, if one is to understand the properties of inference with the test in these situations, one cannot ignore the potential impact on power.
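As a minimal illustration of what a power study involves (using a plain bivariate Normal rather than the copula constructions sketched above; the effect size, sample size, and iteration count are arbitrary choices), one can estimate the rejection rate at a nonzero correlation:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_iter, rho = 100, 4000, 0.3
t_crit = 1.9845  # two-sided 5% point of Student t with 98 df

# Bivariate Normal pairs with correlation rho.
z1 = rng.standard_normal((n_iter, n))
z2 = rng.standard_normal((n_iter, n))
x, y = z1, rho * z1 + np.sqrt(1 - rho**2) * z2

xc = x - x.mean(axis=1, keepdims=True)
yc = y - y.mean(axis=1, keepdims=True)
r = (xc * yc).sum(axis=1) / np.sqrt((xc**2).sum(axis=1) * (yc**2).sum(axis=1))
t = r * np.sqrt((n - 2) / (1 - r**2))

power = np.mean(np.abs(t) > t_crit)  # rejection rate at this effect size
print(round(power, 2))  # well above 0.05, as it should be under the alternative
```

A full power study would repeat this over a grid of effect sizes and marginal distributions, which is what makes it considerably more involved.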
It would seem odd to discuss that particular test of the Pearson correlation without examining alternative tests - for example, permutation tests of the Pearson correlation, rank tests like Kendall's tau and Spearman's rho (which not only have good performance when the normal assumptions hold, but which also have direct relevance to the issue with copulas needed for a power study that I mentioned before), perhaps robustified versions of the correlation coefficient, possibly also bootstrap tests.
Robustness in relation to hypothes | Robustness of correlation test to non-normality
Since whuber has given a comprehensive analysis of the behavior of the distributions of p-values under a null of zero-correlation, I'll focus my comments elsewhere.
Robustness in relation to hypothesis tests doesn't only mean level-robustness (getting close to the desired significance level). Besides only looking at one level and only at two-sided tests, the study appears to have ignored impact on power. There's no much point saying that you're keeping close to a 5% rejection rate under the null if you also end up with a 5% rejection rate* for large deviations from the null.
* (or maybe worse, if the test ends up biased under the non-normal distributions for some alternatives)
Investigating power is considerably more involved. For a start, with these distributions you'd have to be looking at specifying some copula or copulas, presumably with close to a linear relationship in the untransformed variables, and certainly with close to some specified value for the population correlation coefficient. You'll have to look at several effect sizes (at least), and possibly both negative and positive dependence.
Nevertheless, if one is to understand the properties of inference with the test in these situations, one cannot ignore the potential impact on power.
It would seem odd to discuss that particular test of the Pearson correlation without examining alternative tests - for example, permutation tests of the Pearson correlation, rank tests like Kendall's tau and Spearman's rho (which not only have good performance when the normal assumptions hold, but which also have direct relevance to the issue with copulas needed for a power study that I mentioned before), perhaps robustified versions of the correlation coefficient, possibly also bootstrap tests. | Robustness of correlation test to non-normality
Since whuber has given a comprehensive analysis of the behavior of the distributions of p-values under a null of zero-correlation, I'll focus my comments elsewhere.
Robustness in relation to hypothes |
28,067 | Proposal distribution - Metropolis Hastings MCMC | A1: Indeed, the Gaussian distribution is probably the most used proposal distribution, primarily due to ease of use. However, one might want to use other proposal distributions for the following reasons:
Heavy Tails: The Gaussian distribution has light tails. This means that $N(x_{t-1}, \sigma^2)$ will essentially only suggest values within $(x_{t-1} - 3\sigma, x_{t-1} + 3\sigma)$. But a $t$ distribution has heavier tails, and thus can propose values which are farther away. This lets the resulting Markov chain explore the state space more freely, and possibly reduces autocorrelation. The plot below shows the $N(0,1)$ compared to the $t_1$. You see how the $t$ will likely propose more values farther from 0.
Restricted Space: The Gaussian distribution is defined on all reals. If the distribution you are sampling from is, let's say, only defined on the positives or on $(0,1)$, then the Gaussian will likely propose values for which the target density is 0. Such values are then immediately rejected, and the Markov chain does not move from its current spot. This essentially wastes a draw from the Markov chain. Instead, if you are on the positives you could use a Gamma distribution, and on $(0,1)$ you could use a Beta.
Multiple Modes: When the target distribution is multi-modal, a Gaussian proposal will likely lead to the Markov chain getting stuck near one mode. This is in part due to the light tails of the Gaussian. Thus, instead, people use gradient based proposals, or a mixture of Gaussians as a proposal.
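The heavy-tails point can be quantified: the standard Cauchy (a $t_1$, as generated by `rt(n, 1)` below) puts vastly more mass beyond $\pm 3$ than the standard Normal. A quick check in Python rather than R:

```python
import math

# Probability of landing beyond +/-3 for the standard Normal versus the
# standard Cauchy (the t distribution with 1 df).
p_normal = math.erfc(3 / math.sqrt(2))       # ~ 0.0027
p_cauchy = 1 - (2 / math.pi) * math.atan(3)  # ~ 0.205

print(round(p_normal, 4), round(p_cauchy, 3))  # 0.0027 0.205
```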
You can find more discussion here and here.
A2: Yes, you can use a Uniform distribution as long as its support is bounded (if the support is unbounded, the Uniform distribution is improper because it integrates to $\infty$). For example, a Uniform on $(x_{t-1} - c, x_{t-1} + c)$.
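Putting A1 and A2 together, here is a minimal random-walk Metropolis sketch in Python with a symmetric Uniform proposal; because the proposal is symmetric, the Hastings ratio reduces to the ratio of target densities. The target and the tuning constant `c` are arbitrary choices for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_target(x):
    return -0.5 * x * x  # standard Normal target, up to a constant

c, n_iter = 2.0, 20000   # half-width of the Uniform proposal (a tuning choice)
x = 0.0
chain = np.empty(n_iter)
for i in range(n_iter):
    prop = x + rng.uniform(-c, c)
    # Symmetric proposal, so accept with probability pi(prop) / pi(x).
    if np.log(rng.uniform()) < log_target(prop) - log_target(x):
        x = prop
    chain[i] = x

print(round(chain.mean(), 2), round(chain.std(), 2))  # near 0 and 1
```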
| Proposal distribution - Metropolis Hastings MCMC
A1: Indeed the Gaussian distribution is probably the most used proposal distribution primarily due to ease of use. However, one might want to use other proposal distributions for the following reason
Heavy Tails: The Gaussian distribution has light tails. This means that $N(x_{t-1}, \sigma^2)$ will possibly only suggest values between $(x_{t-1} - 3\sigma, x_{t-1} + 3\sigma)$. But a $t$ distribution has heavier tails, and thus can propose values which are farther away. This ensures that the resulting Markov chain explores the state space more freely, and possibly reduces autocorrelation. The plot below shows the $N(0,1)$ compared to the $t_1$. You see how the $t$ will likely propose more values farther from 0.
Restricted Space: The Gaussian distribution is defined on all reals. If the distribution you are sampling from is lets say only defined on the positives or on $(0,1)$, then the Gaussian will likely propose values for which the the target density is 0. Such values are then immediately rejected, and the Markov chain does not move from its current spot. This is essentially wasting a draw from the Markov chain. Instead, if you are on the positives, you could use a Gamma distribution and on $(0,1)$ you could use a Beta.
Multiple Modes: When the target distribution is multi-modal, a Gaussian proposal will likely lead to the Markov chain getting stuck near one mode. This is in part due to the light tails of the Gaussian. Thus, instead, people use gradient based proposals, or a mixture of Gaussians as a proposal.
You can find more discussion here and here.
A2: Yes you can use a Uniform distribution as long as the support for the uniform distribution is bounded (since if the support is unbounded the Uniform distribution is improper as it integrates to $\infty$). So a Uniform on $(x_{t-1} - c, x_{t-1} + c)$. | Proposal distribution - Metropolis Hastings MCMC
A1: Indeed the Gaussian distribution is probably the most used proposal distribution primarily due to ease of use. However, one might want to use other proposal distributions for the following reason
|
28,068 | Predicting mean smooth in GAM with smooth-by-random-factor interaction | The solution suggested by Simon Wood to the simpler problem of predicting the population level effect from a model with random intercepts represented as a smooth is to use a by variable in the random effect smooth. See this Answer for some detail.
You can't do this dummy trick directly with your model as you have the smooth and random effects all bound up in the 2d spline term. As I understand it, you should be able to decompose your tensor product spline into "main effects" and the "spline interaction". I quote these as the decomposition will be to split out the fixed effects and random effects parts of the model.
Nb: I think I have this right but it would be helpful to have people knowledgeable with mgcv give this a once over.
## load packages
library("mgcv")
library("ggplot2")
set.seed(0)
means <- rnorm(5, mean=0, sd=2)
group <- as.factor(rep(1:5, each=100))
## generate data
x <- rep(seq(-3, 3, length.out = 100), 5) # define x first so it can be used for y
df <- data.frame(group = group,
                 x = x,
                 y = as.numeric(dnorm(x, mean = means[group]) >
                                0.4*runif(10)),
                 dummy = 1) # dummy variable trick
This is what I came up with:
gam_model3 <- gam(y ~ s(x, bs = "ts") + s(group, bs = "re", by = dummy) +
ti(x, group, bs = c("ts","re"), by = dummy),
data = df, family = binomial, method = "REML")
Here I've broken out the fixed effects smooth of x, the random intercepts and the random - smooth interaction. Each of the random effect terms includes by = dummy. This allows us to zero out these terms by switching dummy to be a vector of 0s. This works because by terms here multiply the smooth by a numeric value; where dummy == 1 we get the effect of the random effect smooth but when dummy == 0 we are multiplying the effect of each random effect smoother by 0.
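The mechanics of the `by = dummy` trick can be illustrated outside mgcv with a toy linear predictor (a Python sketch; the smooth, the group effects, and the groups are all made up purely for illustration):

```python
import numpy as np

# Toy linear predictor: a fixed smooth f(x) plus a group effect b[g] that is
# multiplied by a dummy indicator, mimicking the by-variable mechanism.
def eta(x, g, dummy, b):
    return np.sin(x) + dummy * b[g]  # f(x) is just sin(x) here

b = np.array([0.5, -0.3, 0.8])       # hypothetical random-effect values
x = np.linspace(-3, 3, 5)

group_pred = eta(x, g=1, dummy=1, b=b)  # dummy = 1: group-specific curve
popln_pred = eta(x, g=1, dummy=0, b=b)  # dummy = 0: random part zeroed out

print(np.allclose(popln_pred, np.sin(x)))  # True: only the fixed smooth remains
```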
To get the population level we need just the effect of s(x, bs = "ts") and zero out the other terms.
newdf <- data.frame(group = as.factor(rep(1, 100)),
x = seq(-3, 3, length = 100),
dummy = rep(0, 100)) # zero out ranef terms
ilink <- family(gam_model3)$linkinv # inverse link function
df2 <- predict(gam_model3, newdf, se.fit = TRUE)
df2 <- with(df2, data.frame(newdf,
response = ilink(fit),
lwr = ilink(fit - 2*se.fit),
upr = ilink(fit + 2*se.fit)))
(Note that all this was done on the scale of the linear predictor and only backtransformed at the end using ilink())
Here's what the population-level effect looks like
theme_set(theme_bw())
p <- ggplot(df2, aes(x = x, y = response)) +
geom_point(data = df, aes(x = x, y = y, colour = group)) +
geom_ribbon(aes(ymin = lwr, ymax = upr), alpha = 0.1) +
geom_line()
p
And here are the group level smooths with the population level one superimposed
df3 <- predict(gam_model3, se.fit = TRUE)
df3 <- with(df3, data.frame(df,
response = ilink(fit),
lwr = ilink(fit - 2*se.fit),
upr = ilink(fit + 2*se.fit)))
and a plot
p2 <- ggplot(df3, aes(x = x, y = response)) +
geom_point(data = df, aes(x = x, y = y, colour = group)) +
geom_ribbon(aes(ymin = lwr, ymax = upr, fill = group), alpha = 0.1) +
geom_line(aes(colour = group)) +
geom_ribbon(data = df2, aes(ymin = lwr, ymax = upr), alpha = 0.1) +
geom_line(data = df2, aes(y = response))
p2
From a cursory inspection this looks qualitatively similar to the result from Ben's answer but it is smoother; you don't get the blips where the next group's data is not all zero.
28,069 | Predicting mean smooth in GAM with smooth-by-random-factor interaction | It depends. There are a bunch of ways to define the "average" response. The answer here is based on the unweighted average across groups; with this simple, artificial example it doesn't make any difference, but in other cases you might want to take a population-weighted average.
n.b. there are several reasons the following is not quite right, although it's not an unreasonable start
for consistency, we ought to be taking the mean / combining the standard errors on the linear predictor (link) scale, not the response scale
the answer below essentially treats the groups as fixed effects. We know more about the distribution (at least the assumed distribution) of the conditional modes ... but it means there are a lot of possible definitions
I will update when I get a chance, but it's still a slightly useful answer
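The first caveat (averaging on the response scale rather than the link scale) matters because the inverse link is nonlinear, so the two orders of operations disagree by Jensen's inequality. A Python sketch with made-up linear predictors:

```python
import numpy as np

def expit(eta):
    return 1 / (1 + np.exp(-eta))  # inverse-logit link

# Made-up group-level linear predictors on the link scale.
eta = np.array([-1.0, 0.0, 2.0])

mean_of_responses = expit(eta).mean()  # average on the response scale
response_of_mean = expit(eta.mean())   # average on the link scale, then transform

print(round(mean_of_responses, 3), round(response_of_mean, 3))  # 0.55 0.583
```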
Repeating data generation for my convenience ...
library("dplyr") ## for data_frame
set.seed(0)
means = rnorm(5, mean=0, sd=2)
df = data_frame(group = as.factor(rep(1:5, each=100)),
x = rep(seq(-3,3, length.out =100), 5),
y=as.numeric(dnorm(x, mean=means[group]) > 0.4*runif(10)))
#Fit model
library(mgcv)
gam_model = gam(y ~ te(x, group, bs=c("ts", "re")),
data=df, family = binomial)
gam_avg = gam(y ~ s(x), data=df, family = binomial)
Tweak prediction step a tiny bit to retain se in the results (it would be nice to write a broom::augment() method for this case ...)
#Predict
## retain the standard errors alongside the fitted values
df2 <- predict(gam_model, type = "response", se.fit = TRUE)
df2 <- with(df2, data.frame(df,
                            response = fit,
                            se = se.fit,
                            lwr = fit - 2*se.fit,
                            upr = fit + 2*se.fit))
Generate mean predictions by averaging at each value of x; construct confidence intervals by adding "in quadrature" (i.e. sqrt(sum(x^2))) (I don't know why c() is necessary, but it seems to be).
sumquad <- function(x) { sqrt(sum(c(x)^2)) }
dfsum <- df2 %>% group_by(x) %>%
summarise(response=mean(c(response)),
se=sumquad(se)) %>%
mutate(lwr=response-2*se,upr=response+2*se)
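The quadrature rule used in `sumquad()` is just the root-sum-of-squares combination of independent standard errors; a Python equivalent:

```python
import math

# Root-sum-of-squares ("in quadrature") combination of standard errors,
# mirroring the sumquad() helper above.
def sumquad(ses):
    return math.sqrt(sum(s * s for s in ses))

print(sumquad([3.0, 4.0]))  # 5.0
```

For the standard error of a mean of $k$ independent quantities, this root-sum-of-squares would be divided by $k$; the helper above returns the plain combination.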
Now visualize:
library("ggplot2"); theme_set(theme_bw())
gg1 <- ggplot(mapping=aes(x)) +
geom_ribbon(data = df2,
mapping=aes(ymin=lwr, ymax=upr, fill=group),
alpha=0.25) +
geom_line(data = df2, mapping=aes(y=response, col=group)) +
geom_point(data = df, mapping=aes(y=y, col=group))
## add mean response + ribbon
gg1 + geom_line(data=dfsum,aes(y=response))+
geom_ribbon(data=dfsum,aes(ymin=lwr,ymax=upr),alpha=0.2)
It depends. There are a bunch of ways to define the "average" response. The answer here is based on the unweighted average across groups; with this simple, artificial example it doesn't make any difference, but in other cases you might want to take a population-weighted average.
n.b. there are several reasons the following is not quite right, although it's not an unreasonable start
for consistency, we ought to be taking the mean / combining the standard errors on the linear predictor (link) scale, not the response scale
the answer below essentially treats the groups as fixed effects. We know more about the distribution (at least the assumed distribution) of the conditional modes ... but it means there are a lot of possible definitions
28,070 | Proof that the linear kernel is a kernel, understanding the math | First, your definition should be corrected as
$$k(x, x') = \langle x, x\color{red}{'}\rangle = \sum_{a = 1}^N x_a x_a'. $$
The problem of your derivation is that you didn't distinguish $x_i = (x_{i,1}, \ldots, x_{i,N})^T$ and $x_j = (x_{j, 1}, \ldots, x_{j, N})^T$ very clearly.
Let's say you have $p$ vectors $\{x_1, \ldots, x_p\}$ under consideration. It follows that (what you provided was actually incorrect):
\begin{align}
& \sum_{i, j} c_i c_j k(x_i, x_j) \\
= & \sum_{i = 1}^p \sum_{j = 1}^p c_i c_j \sum_{a = 1}^N x_{i,a}x_{j, a} \\
= & \sum_{i = 1}^p \sum_{j = 1}^p \sum_{a = 1}^N c_i x_{i,a} c_j x_{j, a} \\
= & \sum_{a = 1}^N \left(\sum_{i = 1}^p c_i x_{i, a}\right) \left(\sum_{j = 1}^p c_j x_{j, a}\right) \qquad \text{ change the order of summation}\\
= & \sum_{a = 1}^N \left(\sum_{i = 1}^p c_i x_{i, a}\right)^2 \geq 0. \qquad i, j \text{ are just dummy indices}
\end{align}
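The change-of-summation step can be checked numerically. The following Python sketch (random data, purely for illustration) verifies that the double sum $\sum_{i,j} c_i c_j k(x_i, x_j)$ equals $\sum_a \left(\sum_i c_i x_{i,a}\right)^2$ and is therefore non-negative:

```python
import random

random.seed(1)

# p vectors of dimension N, with arbitrary real coefficients c_i
p, N = 4, 3
xs = [[random.uniform(-2, 2) for _ in range(N)] for _ in range(p)]
c = [random.uniform(-2, 2) for _ in range(p)]

def k(u, v):
    # linear kernel <u, v>
    return sum(a * b for a, b in zip(u, v))

# Double sum  sum_{i,j} c_i c_j k(x_i, x_j)
double_sum = sum(c[i] * c[j] * k(xs[i], xs[j])
                 for i in range(p) for j in range(p))

# Same quantity after swapping the order of summation:
# sum_a ( sum_i c_i x_{i,a} )^2, which is visibly >= 0
sum_of_squares = sum(sum(c[i] * xs[i][a] for i in range(p))**2
                     for a in range(N))

print(abs(double_sum - sum_of_squares) < 1e-9, double_sum >= 0)  # True True
```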
28,071 | Proof that the linear kernel is a kernel, understanding the math | If you don't mind matrix notation, let $X$ be the $n \times p$ matrix of observations. Each of the $p$ vectors is a column of $X$. Then the kernel condition is:
$$ c'(X'X)c $$ for an arbitrary vector $c$ of length $p$.
$$ c'(X'X)c = (Xc)'Xc$$
Recall that $Xc$ is an $n \times 1$ vector, say $(u_1, u_2, \ldots, u_n)$, and the RHS of the previous equation is:
$$ \sum_{i=1}^n u_i^2 \geq 0 $$
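A tiny pure-Python check of the matrix identity $c'(X'X)c = (Xc)'Xc \ge 0$, with arbitrary illustrative numbers:

```python
# X holds the p vectors as its columns (an n x p matrix)
X = [[1.0, -2.0],
     [0.5,  3.0],
     [2.0,  1.0]]          # n = 3, p = 2
c = [0.7, -1.3]

n, p = len(X), len(c)

# Xc (an n x 1 vector)
Xc = [sum(X[i][j] * c[j] for j in range(p)) for i in range(n)]

# c'(X'X)c computed via the p x p Gram matrix X'X
G = [[sum(X[i][r] * X[i][s] for i in range(n)) for s in range(p)]
     for r in range(p)]
quad_form = sum(c[r] * G[r][s] * c[s] for r in range(p) for s in range(p))

print(abs(quad_form - sum(u * u for u in Xc)) < 1e-9, quad_form >= 0)  # True True
```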
28,072 | Proof that the linear kernel is a kernel, understanding the math | Your goal was to show that it is positive semidefinite, and the square of a real number is non-negative.
The first one uses an abbreviated notation for the double sum over $i$ and $j$, but they are dummy variables over the same interval, so we can remove one of these and square what we have left (noting that the $c_i$ are scalars and can be rearranged).
28,073 | What is this bias-variance tradeoff for regression coefficients and how to derive it? | The last term in the equation can be written as
$$
(X\beta - X\hat{\beta})'H^{-1}(X\beta - X\hat{\beta}).
$$
In this form the equation is saying something interesting. Assuming $H$ is positive definite and symmetric, so is its inverse. Therefore, we can define an inner product $\langle x, y\rangle_{H^{-1}} = x'H^{-1}y$, which gives us a geometry. Then the above equality is essentially saying that,
$$
(X\beta - X\hat{\beta}) \perp (y - X\hat{\beta}).
$$
I wanted to give you this bit of intuition since a commenter has already left a link to the derivation.
Edit: For Posterity
LHS:
\begin{align}
(y-X \beta)'H^{-1}(y-X \beta) &= y'H^{-1}y - 2y'H^{-1}X \beta + \beta'X'H^{-1}X\beta \\
&= (A) - (B) + (C)
\end{align}
RHS:
$$
(y-X\hat\beta)'H^{-1}(y-X\hat\beta)+(\beta-\hat\beta)'(X'H^{-1}X)(\beta-\hat\beta)
$$
\begin{align}
&= y'H^{-1}y - 2y'H^{-1}X\hat{\beta} + \hat{\beta}'X'H^{-1}X\hat{\beta} + \beta'X'H^{-1}X\beta - 2\hat{\beta}'X'H^{-1}X\beta + \hat{\beta}'X'H^{-1}X\hat{\beta} \\
&= (A) - (D) + (E) + (C) - (F) + (E)
\end{align}
Relation:
$$
\hat{\beta} = (X'H^{-1}X)^{-1}X'H^{-1}y
$$
By plugging in the relation you can show that (B) = (F), and that 2(E) = (D). All done.
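The identity can also be confirmed numerically. The sketch below uses pure Python with a diagonal $H$ (so $H^{-1}$ is trivial) and arbitrary illustrative numbers; it computes $\hat\beta = (X'H^{-1}X)^{-1}X'H^{-1}y$ and checks that both sides of the decomposition agree:

```python
X = [[1.0, 0.5], [1.0, 1.5], [1.0, 3.0]]   # 3 observations, 2 coefficients
y = [1.0, 2.0, 4.0]
h = [1.0, 2.0, 0.5]                         # diagonal of H (positive)
w = [1.0 / v for v in h]                    # diagonal of H^{-1}
beta = [0.3, 0.9]                           # an arbitrary coefficient vector

def quad(r):
    # r' H^{-1} r for a residual vector r (H diagonal)
    return sum(wi * ri * ri for wi, ri in zip(w, r))

def resid(b):
    return [yi - (xi[0]*b[0] + xi[1]*b[1]) for xi, yi in zip(X, y)]

# X' H^{-1} X (2x2) and X' H^{-1} y (2-vector)
A = [[sum(w[i]*X[i][r]*X[i][c] for i in range(3)) for c in range(2)]
     for r in range(2)]
b_vec = [sum(w[i]*X[i][r]*y[i] for i in range(3)) for r in range(2)]

# 2x2 inverse applied to b_vec gives the GLS estimate beta_hat
det = A[0][0]*A[1][1] - A[0][1]*A[1][0]
beta_hat = [( A[1][1]*b_vec[0] - A[0][1]*b_vec[1]) / det,
            (-A[1][0]*b_vec[0] + A[0][0]*b_vec[1]) / det]

d = [beta[0] - beta_hat[0], beta[1] - beta_hat[1]]
lhs = quad(resid(beta))
rhs = quad(resid(beta_hat)) + sum(d[r]*A[r][c]*d[c]
                                  for r in range(2) for c in range(2))
print(abs(lhs - rhs) < 1e-10)  # True
```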
28,074 | What is this bias-variance tradeoff for regression coefficients and how to derive it? | They arrive at this identity by a technique called completing the square. The left hand side is in a quadratic form, so start by multiplying it out
$$ (y-X\beta)'H^{-1}(y-X\beta)= y'H^{-1}y - 2y'H^{-1}X\beta + \beta'X'H^{-1} X\beta $$
continue on and then rewrite in terms of $\hat{\beta} = (X'H^{-1}X)^{-1}X'H^{-1}y$. The algebra is kind of long, but if you google "completing the square in Bayesian regression" you can find plenty of hints. For example, see the Wikipedia article on Bayesian linear regression, and other Cross Validated answers regarding completing the square, like here.
28,075 | What is this bias-variance tradeoff for regression coefficients and how to derive it? | If you know your matrix algebra, then this should be doable by multiplying everything out and verifying that you indeed have the same on both sides. This is what jlimahaverford has demonstrated.
To be able to do this you need the formula for the estimate of $\hat{\beta}$. We can derive the formula in a similar manner as for linear regression when we have uncorrelated error terms. The trick is to standardize.
Here is some information on how to standardize a RV that comes from a multivariate normal distribution. Let's assume that you have
$$
\mathbf{X}\sim \mathcal{N}(\mu,\Sigma).
$$
$\Sigma$ is positive definite, so you can factorize it as $\Sigma = PP^T$. Now the random variable
$$
\mathbf{Y}=P^{-1}(\mathbf{X}-\mu)
$$
comes from the distribution $\mathcal{N}(0,I)$. Now we can use this trick for our problem to find $\hat{\beta}$. Let's factorize $H=PP^T$. We have
$$
\begin{align}
y&=X\beta+\epsilon\\
P^{-1}y &= P^{-1}X\beta + P^{-1}\epsilon
\end{align}
$$
Now the error term has been standardized, such that $\text{cov}(P^{-1}\epsilon)=I$, so we can treat this as a simple multiple linear regression model where:
$$
\tilde{X}=P^{-1}X,\qquad \tilde{y}=P^{-1}y\quad\text{and}\quad \tilde{\epsilon}=P^{-1}\epsilon.
$$
So we have the regression problem:
$$
\tilde{y}=\tilde{X}\beta+\tilde{\epsilon}
$$
The formula for $\hat{\beta}$ is
$$
\begin{align}
\hat{\beta} &= (\tilde{X}^T\tilde{X})^{-1}\tilde{X}^T\tilde{y}\\
&=((P^{-1}X)^TP^{-1}X)^{-1}(P^{-1}X)^TP^{-1}y\\
&=(X^T(PP^T)^{-1}X)^{-1}X^T(PP^T)^{-1}y\\
&=(X^TH^{-1}X)^{-1}X^TH^{-1}y
\end{align}
$$
This is the key step; the rest is the algebraic manipulation demonstrated in the solution by jlimahaverford.
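The standardization trick is easy to confirm numerically: whiten the data with the Cholesky factor $P$ of $H$ and check that ordinary least squares on the whitened data reproduces the GLS formula. A pure-Python sketch with illustrative numbers (two observations, one coefficient):

```python
import math

# 2 observations, 1 coefficient, non-diagonal SPD covariance H
H = [[2.0, 0.6], [0.6, 1.0]]
x = [1.0, 2.0]
y = [1.5, 3.5]

# Cholesky factor P (lower triangular) with H = P P^T
p11 = math.sqrt(H[0][0])
p21 = H[1][0] / p11
p22 = math.sqrt(H[1][1] - p21**2)

def forward_solve(v):
    # Solve P z = v for z, i.e. z = P^{-1} v
    z0 = v[0] / p11
    z1 = (v[1] - p21 * z0) / p22
    return [z0, z1]

# Whitened data: x~ = P^{-1} x, y~ = P^{-1} y
xt, yt = forward_solve(x), forward_solve(y)

# OLS on the whitened data
beta_ols = (xt[0]*yt[0] + xt[1]*yt[1]) / (xt[0]**2 + xt[1]**2)

# Direct GLS: beta = (x' H^{-1} x)^{-1} x' H^{-1} y, using the 2x2 inverse
det = H[0][0]*H[1][1] - H[0][1]*H[1][0]
Hinv = [[ H[1][1]/det, -H[0][1]/det],
        [-H[1][0]/det,  H[0][0]/det]]
xHx = sum(x[i]*Hinv[i][j]*x[j] for i in range(2) for j in range(2))
xHy = sum(x[i]*Hinv[i][j]*y[j] for i in range(2) for j in range(2))
beta_gls = xHy / xHx

print(abs(beta_ols - beta_gls) < 1e-9)  # True
```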
28,076 | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$ | This is Samuelson's inequality and it needs the $\leq$ sign. If you take the Wikipedia version and rework it for the $n-1$ definition of $S,$ you will find that it becomes $${{ \left| X_i-\bar X \right| } \over S} \leq {{n-1} \over \sqrt{n}}$$ | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$ | This is Samuelson's inequality and it needs the $\leq$ sign. If you take the Wikipedia version and rework it for the $n-1$ definition of $S,$ you will find that it becomes $${{ \left| X_i-\bar X \righ | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$
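Samuelson's inequality (with the $n-1$ definition of $S$) is easy to check by simulation, and the equality case is attained when all observations but one coincide. A Python sketch:

```python
import math
import random

random.seed(0)

def max_ratio(xs):
    # max_i |x_i - xbar| / S, with the n-1 definition of S
    n = len(xs)
    m = sum(xs) / n
    s = math.sqrt(sum((v - m)**2 for v in xs) / (n - 1))
    return max(abs(v - m) for v in xs) / s

# The bound (n-1)/sqrt(n) holds for every sample ...
for _ in range(1000):
    n = random.randint(3, 10)
    xs = [random.gauss(0, 1) for _ in range(n)]
    assert max_ratio(xs) <= (n - 1) / math.sqrt(n) + 1e-9

# ... and is attained when all but one value coincide:
xs = [0.0, 0.0, 0.0, 4.0]   # n = 4
print(round(max_ratio(xs), 6), round(3 / math.sqrt(4), 6))  # 1.5 1.5
```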
28,077 | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$ | After simplifying the problem by means of routine procedures, it can be solved by converting it into a dual minimization program which has a well-known answer with an elementary proof. Perhaps this dualization is the "subtle step" referred to in the question. The inequality can also be established in a purely mechanical manner by maximizing $|T_i|$ via Lagrange multipliers.
First though, I offer a more elegant solution based on the geometry of least squares. It requires no preliminary simplification and is almost immediate, providing direct intuition into the result. As suggested in the question, the problem reduces to the Cauchy-Schwarz inequality.
Geometric solution
Consider $\mathbf{x} = (X_1, X_2, \ldots, X_n)$ as an $n$-dimensional vector in Euclidean space with the usual dot product. Let $\mathbf{y} = (0,0,\ldots,0,1,0,\ldots,0)$ be the $i^\text{th}$ basis vector and $\mathbf{1} = (1,1,\ldots, 1)$. Write $\mathbf{\hat x}$ and $\mathbf{\hat y}$ for the orthogonal projections of $\mathbf{x}$ and $\mathbf{y}$ into the orthogonal complement of $\mathbf{1}$. (In statistical terminology, they are the residuals with respect to the means.) Then, since $X_i-\bar X = \mathbf{\hat x}\cdot \mathbf{y}$ and $S = ||\mathbf{\hat x}||/\sqrt{n-1}$,
$$|T_i| = \sqrt{n-1}\frac{|\mathbf{\hat x} \cdot \mathbf{y}|}{||\mathbf{\hat x}||} = \sqrt{n-1}\frac{|\mathbf{\hat x} \cdot \mathbf{\hat y}|}{||\mathbf{\hat x}||}$$
is the component of $\mathbf{\hat y}$ in the $\mathbf{\hat x}$ direction. By Cauchy-Schwarz, it is maximized exactly when $\mathbf{\hat x}$ is parallel to $\mathbf{\hat y} = (-1,-1,\ldots,-1,n-1,-1,-1,\ldots,-1)/n$, for which $$T_i = \pm \sqrt{n-1} \frac{\mathbf{\hat y}\cdot \mathbf{\hat y} }{ ||\mathbf{\hat y}||} = \pm\sqrt{n-1}||\mathbf{\hat y}|| = \pm\frac{n-1}{\sqrt{n}},$$ QED.
Incidentally, this solution provides an exhaustive characterization of all the cases where $|T_i|$ is maximized: they are all of the form
$$\mathbf{x} = \sigma\mathbf{\hat y} + \mu\mathbf{1} = \sigma(-1,-1,\ldots,-1,n-1,-1,-1,\ldots,-1) + \mu(1,1,\ldots, 1)$$
for all real $\mu, \sigma$.
This analysis generalizes easily to the case where $\{\mathbf{1}\}$ is replaced by any set of regressors. Evidently the maximum of $T_i$ is proportional to the length of the residual of $\mathbf{y}$, $||\mathbf{\hat y}||$.
Simplification
Because $T_i$ is invariant under changes of location and scale, we may assume with no loss of generality that the $X_i$ sum to zero and their squares sum to $n-1$. This identifies $|T_i|$ with $|X_i|$, since $S$ (the root mean square) is $1$. Maximizing it is tantamount to maximizing $|T_i|^2 = T_i^2 = X_i^2$. No generality is lost by taking $i=1$, either, since the $X_i$ are exchangeable.
Solution via a dual formulation
A dual problem is to fix the value of $X_1^2$ and ask what values of the remaining $X_j, j\ne 1$ are needed to minimize the sum of squares $\sum_{j=1}^n X_j^2$ given that $\sum_{j=1}^n X_j = 0$. Because $X_1$ is given, this is the problem of minimizing $\sum_{j=2}^n X_j^2$ given that $\sum_{j=2}^n X_j = -X_1$.
The solution is easily found in many ways. One of the most elementary is to write
$$X_j = -\frac{X_1}{n-1} + \varepsilon_j,\ j=2, 3, \ldots, n$$
for which $\sum_{j=2}^n \varepsilon_j = 0$. Expanding the objective function and using this sum-to-zero identity to simplify it produces
$$\sum_{j=2}^n X_j^2 = \sum_{j=2}^n \left(-\frac{X_1}{n-1} + \varepsilon_j\right)^2 = \\\sum \left(-\frac{X_1}{n-1}\right)^2 - 2\frac{X_1}{n-1}\sum \varepsilon_j + \sum \varepsilon_j^2 \\= \text{Constant} + \sum \varepsilon_j^2,$$
immediately showing the unique solution is $\varepsilon_j=0$ for all $j$. For this solution,
$$(n-1)S^2 = X_1^2 + (n-1)\left(-\frac{X_1}{n-1}\right)^2 = \left(1 + \frac{1}{n-1}\right)X_1^2 = \frac{n}{n-1}X_1^2$$
and
$$|T_i| = \frac{|X_1|}{S} = \frac{|X_1|}{\sqrt{\frac{n}{(n-1)^2}X_1^2}} = \frac{n-1}{\sqrt{n}},$$
QED.
Solution via machinery
Return to the simplified program we began with:
$$\text{Maximize } X_1^2$$
subject to
$$\sum_{i=1}^n X_i = 0\text{ and }\sum_{i=1}^n X_i^2 -(n-1)=0.$$
The method of Lagrange multipliers (which is almost purely mechanical and straightforward) equates a nontrivial linear combination of the gradients of these three functions to zero:
$$(0,0,\ldots, 0) = \lambda_1 D(X_1^2) + \lambda_2 D\left(\sum_{i=1}^n X_i\right ) + \lambda_3 D\left(\sum_{i=1}^n X_i^2 -(n-1)\right).$$
Component by component, these $n$ equations are
$$\eqalign{
0 &= 2\lambda_1 X_1 +& \lambda_2 &+ 2\lambda_3 X_1 \\
0 &= & \lambda_2 &+ 2\lambda_3 X_2 \\
0 &= \cdots \\
0 &= & \lambda _2 &+ 2\lambda_3 X_n.
}$$
The last $n-1$ of them imply either $X_2 = X_3 = \cdots = X_n = -\lambda_2/(2\lambda_3)$ or $\lambda_2=\lambda_3=0$. (We may rule out the latter case because then the first equation implies $\lambda_1=0$, trivializing the linear combination.) The sum-to-zero constraint produces $X_1 = -(n-1)X_2$. The sum-of-squares constraint provides the two solutions
$$X_1 = \pm\frac{n-1}{\sqrt{n}};\ X_2 = X_3 = \cdots = X_n = \mp\frac{1}{\sqrt{n}}.$$
They both yield
$$|T_i| = |X_1| \le |\pm\frac{n-1}{\sqrt{n}}| = \frac{n-1}{\sqrt{n}}.$$
28,078 | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$ | The inequality as stated is true. It is quite clear intuitively that we get the most difficult case for the inequality (that is, maximizing the left hand side for given $S^2$) by choosing one value, say $x_1$ as large as possible while having all the others equal. Let us look at an example with such a configuration:
$$
n=4, \quad x_1=x_2=x_3=0, x_4=4, \bar{x}=1, S^2=4,
$$
now $\frac{|x_i-\bar{x}|}{S}=\frac12$ or $\frac32$ depending on $i$, while the given upper limit is equal to $\frac{4-1}{2}=1.5$, which is just enough. That idea can be completed to a proof.
EDIT
We will now prove the claim, as hinted above. First, for any given vector $ x=(x_1, x_2, \dots, x_n)$ in this problem, we can replace it with $x-\bar{x}$ without changing either side of the inequality above. So, in the following let us assume that $\bar{x}=0$. We can also by relabelling assume that $x_1$ is largest. Then, by choosing first $x_1>0$ and then $x_2=x_3=\dots=x_n=-\frac{x_1}{n-1}$ we can check by simple algebra that we have equality in the claimed inequality. So, it is sharp.
Then, define the (convex) region $R$ by
$$
R = \{ x\in\mathbb{R}^n \colon \bar{x}=0, \sum(x_i-\bar{x})^2/(n-1) \le S^2\}
$$
for a given positive constant $S^2$. Note that $R$ is the intersection of a hyperplane with a ball centered at the origin, so is a ball in $(n-1)$-space. Our problem can now be formulated as
$$
\max_{x\in R} \max_i |x_i|
$$
since an $x$ maximizing that will be the most difficult case for the inequality.
This is a problem of finding the maximum of a convex function over a convex set, which in general are difficult problems (minimums are easy!). But, in this case the convex region is a sphere centered on the origin, and the function we want to maximize is the absolute value of the coordinates. It is obvious that that maximum is found at the boundary sphere of $R$, and by taking $|x_1|$ maximal, our first test case is forced. | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$ | The inequality as stated is true. It is quite clear intuitively that we get the most difficult case for the inequality (that is, maximizing the left hannd side for given $S^2$) by choosing one value, | How to prove that $\frac{\left|X_i -\bar{X} \right|}{S} \leq\frac{n-1}{\sqrt{n}}$
The inequality as stated is true. It is quite clear intuitively that we get the most difficult case for the inequality (that is, maximizing the left hannd side for given $S^2$) by choosing one value, say $x_1$ as large as possible while having all the others equal. Let us look at an example with such configuration:
$$
n=4, \quad x_1=x_2=x_3=0, x_4=4, \bar{x}=1, S^2=4,
$$
now $\frac{|x_i-\bar{x}|}{S}$ equals $\frac12$ or $\frac32$ depending on $i$, while the claimed upper limit is $\frac{n-1}{\sqrt{n}}=\frac{4-1}{\sqrt{4}}=1.5$, which is just enough. That idea can be completed to a proof.
EDIT
We will now prove the claim, as hinted above. First, for any given vector $ x=(x_1, x_2, \dots, x_n)$ in this problem, we can replace it with $x-\bar{x}$ without changing either side of the inequality above. So, in the following let us assume that $\bar{x}=0$. We can also by relabelling assume that $x_1$ is largest. Then, by choosing first $x_1>0$ and then $x_2=x_3=\dots=x_n=-\frac{x_1}{n-1}$ we can check by simple algebra that we have equality in the claimed inequality. So, it is sharp.
Then, define the (convex) region $R$ by
$$
R = \{ x\in\mathbb{R}^n \colon \bar{x}=0, \sum(x_i-\bar{x})^2/(n-1) \le S^2\}
$$
for a given positive constant $S^2$. Note that $R$ is the intersection of a hyperplane with a ball centered at the origin, so it is a ball in $(n-1)$-space. Our problem can now be formulated as
$$
\max_{x\in R} \max_i |x_i|
$$
since an $x$ maximizing that will be the most difficult case for the inequality.
This is a problem of finding the maximum of a convex function over a convex set, which in general is a difficult problem (minima are easy!). But, in this case the convex region is a ball centered on the origin, and the function we want to maximize is the largest absolute value of the coordinates. It is obvious that the maximum is found at the boundary sphere of $R$, and by taking $|x_1|$ maximal, our first test case is forced.
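The proof can also be checked numerically; here is a quick Python sketch (my own, not part of the original answer) that tests random vectors against the bound and verifies that the extremal configuration attains it:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
bound = (n - 1) / np.sqrt(n)

# Random vectors never exceed the bound.
for _ in range(1000):
    z = rng.normal(size=n)
    S = z.std(ddof=1)  # the usual sample standard deviation
    assert np.abs(z - z.mean()).max() / S <= bound + 1e-12

# The extremal configuration x_1 > 0, x_2 = ... = x_n = -x_1/(n-1) attains it.
x = np.array([5.0] + [-5.0 / (n - 1)] * (n - 1))
S = x.std(ddof=1)
print(np.abs(x - x.mean()).max() / S, bound)  # the two numbers agree
```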
Where can I read about gamma coefficient in SVM in scikit-learn?
The RBF kernel function is as follows, for two vectors $\mathbf{u}$ and $\mathbf{v}$:
$$
\kappa(\mathbf{u},\mathbf{v}) = \exp(-\gamma \|\mathbf{u}-\mathbf{v}\|^2).
$$
The hyperparameter $\gamma$ is used to configure the sensitivity to differences in feature vectors, which in turn depends on various things such as input space dimensionality and feature normalization.
If you set $\gamma$ too large, you will end up overfitting. In the limit case $\gamma\rightarrow\infty$, the kernel matrix becomes the identity matrix, which leads to a perfect fit of the training data but an entirely useless model.
The optimal value of $\gamma$ depends entirely on your data; any rules of thumb should be taken with a pound of salt. That said, you can use specialized libraries to optimize hyperparameters for you (e.g. Optunity (*)); in the case of an SVM with RBF kernel, that means $\gamma$ and $C$. You can find an example of optimizing these parameters automatically with Optunity and scikit-learn here.
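For illustration (without Optunity), scikit-learn's own GridSearchCV can also tune $\gamma$ and $C$ jointly; a minimal sketch on a toy dataset, with an illustrative parameter grid rather than a recommendation:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Joint cross-validated search over C and gamma for an RBF-kernel SVM.
grid = GridSearchCV(SVC(kernel="rbf"),
                    {"C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
                    cv=3)
grid.fit(X, y)
print(grid.best_params_)
```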
(*) disclaimer: I'm the lead developer of Optunity.
X, Y are iid from N(0,1). What's the probability that X>2Y
With a bivariate standard normal (i.e. iid standard normal), the probability of lying on one side of a line through the origin is $\frac{1}{2}$ no matter what the slope of the line is.
This follows, for example, from the rotational symmetry of the bivariate distribution about $O$, since we could rotate the problem to one of considering $P(X'\gt0)$ in rotated coordinates.
More directly, $X-2Y\sim N(0,5)$, which is symmetric about $0$, so $P(X-2Y>0)=\frac{1}{2}$. Indeed, an argument via affine transformations shows it must be $\frac{1}{2}$ much more generally -- it will apply to any centered bivariate normal where both variances are greater than 0.
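A quick Monte Carlo sanity check of the $\frac{1}{2}$ answer (a sketch, not part of the original argument):

```python
import numpy as np

rng = np.random.default_rng(42)
x = rng.standard_normal(1_000_000)
y = rng.standard_normal(1_000_000)

# Fraction of draws with X > 2Y; should be very close to 0.5.
p = np.mean(x > 2 * y)
print(p)
```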
How does Scikit Learn resolve ties in the KNN classification?
From the documentation for KNeighborsClassifier:
Warning: Regarding the Nearest Neighbors algorithms, if it is found that two neighbors, neighbor k+1 and k, have identical distances but different labels, the results will depend on the ordering of the training data.
To see exactly what happens, we'll have to look at the source. You can see that, in the unweighted case, KNeighborsClassifier.predict ends up calling scipy.stats.mode, whose documentation says
Returns an array of the modal (most common) value in the passed array.
If there is more than one such value, only the first is returned.
So, in the case of ties, the answer will be the class that happens to appear first in the set of neighbors.
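For scipy.stats.mode itself, "first" in practice means first in sorted order of the values, so the smallest label wins a tie (behavior observed in current SciPy; np.ravel below copes with versions that return either a scalar or an array):

```python
import numpy as np
from scipy import stats

# Classes 0 and 1 tie with two votes each; the smaller label is returned.
res = stats.mode(np.array([1, 1, 0, 0]))
print(np.ravel(res.mode)[0])
```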
Digging a little deeper, the neigh_ind array that gets used is the result of calling the kneighbors method, which (though the documentation doesn't say so) appears to return its results in sorted order.
So ties should be broken by choosing the class with the point closest to the query point, but this behavior isn't documented and I'm not 100% sure it always happens.
How does Scikit Learn resolve ties in the KNN classification?
This answer is just to show with a brief example how sklearn resolves the ties in kNN choosing the class with the lowest value:
from sklearn.neighbors import KNeighborsClassifier
import numpy as np
# We start defining 4 points in a 1D space: x1=10, x2=11, x3=12, x4=13
x = np.array([10,11,12,13]).reshape(-1,1) # reshape is needed as long as is 1D
# We assign different classes to the points
y = np.array([0,1,1,2])
# we fit a 2-NN classifier
knn = KNeighborsClassifier(n_neighbors=2 , weights='uniform')
knn.fit(x, y)
# We try to predict samples with 5 and 15 values (it will be a tie in both cases)
x_test=np.array([5,15]).reshape(-1,1)
pred = knn.predict(x_test)
print(pred)
#[0 1]
We see how the tie is resolved assigning not the closest neighbor's value but the lowest class value.
Zero-centering the testing set after PCA on the training set
Do I just subtract the means of the training data as I did to zero-center the training data?
Yes.
You are supposed to do to the test data exactly the same transformation that you did to the training data; this includes centering -- it should be done using the mean values obtained on the training set. If you standardized the training set, then you would also divide your test set by the standard deviations obtained on the training set. After that, you can project your test set onto the PCs of the training set.
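In scikit-learn this is handled for you if you fit the PCA on the training set only and then transform the test set; a sketch with synthetic arrays standing in for your data:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 3))
X_test = rng.normal(size=(20, 3))

pca = PCA(n_components=2).fit(X_train)   # learns the training means internally
Z_test = pca.transform(X_test)           # centers X_test with the *training* means

# Equivalent manual version: subtract training means, project on training PCs.
Z_manual = (X_test - X_train.mean(axis=0)) @ pca.components_.T
print(np.allclose(Z_test, Z_manual))     # True
```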
Zero-centering the testing set after PCA on the training set
You need to subtract the mean of the training set from the test set and then take the projection on the eigenvectors. You should not take the mean of the (training + test) set.
Also refer to Andrej Karpathy's notes here: http://cs231n.github.io/neural-networks-2/
Zero-centering the testing set after PCA on the training set
PCA computes the eigenvectors of the covariance matrix. The covariance matrix uses an implicit centering of the data, so it does not really matter whether you center your training data or not: the resulting eigenvectors and eigenvalues will be the same. This means you don't really have to center your test data. The projections that you obtain would only be in a different (translated by the mean) coordinate system if you center the test data.
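The implicit centering is easy to verify with NumPy: shifting the data leaves the covariance matrix, and hence its eigenvectors, unchanged (a sketch of mine, not from the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 4)) + 10.0          # deliberately far from zero mean

C_raw = np.cov(X, rowvar=False)              # np.cov centers internally
C_centered = np.cov(X - X.mean(axis=0), rowvar=False)
print(np.allclose(C_raw, C_centered))        # True
```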
Implementation of nested cross-validation
Oops, the code is wrong, but in a very subtle way!
a) the splitting of the train set into a inner training set and test set is OK.
b) the problem is the last two lines, which reflect a subtle misunderstanding about the purpose of a nested cross-validation. The purpose of a nested CV is not to select the parameters, but to have an unbiased evaluation of the expected accuracy of your algorithm, in this case ensemble.ExtraTreesRegressor on this data, with the best hyperparameters, whatever they might be.
And this is what your code correctly computes up to the line:
print 'Unbiased prediction error: %.4f' % (np.mean(outer_scores))
It used the nested CV to compute an unbiased estimate of the classifier's prediction error. But notice that each pass of the outer loop may generate a different best hyperparameter, as you knew when you wrote the line:
print 'Best parameter of %i fold: %i' % (fold + 1, tuned_parameter[index])
So now you need a standard CV loop to select the final best hyperparameter, using folds:
tuned_parameter = [1000, 1100, 1200]
mean_scores = []  # note: this list was never initialized in the original snippet
for param in tuned_parameter:
    scores = []
    # normal cross-validation
    kfolds = cross_validation.KFold(len(y), n_folds=3, shuffle=True, random_state=state)
    for train_index, test_index in kfolds:
        # split the training data
        X_train, X_test = X[train_index], X[test_index]
        y_train, y_test = y[train_index], y[test_index]
        # fit extremely randomized trees regressor to training data
        clf2_5 = ensemble.ExtraTreesRegressor(param, n_jobs=-1, random_state=1)
        clf2_5.fit(X_train, y_train)
        scores.append(clf2_5.score(X_test, y_test))
    # calculate mean score over the folds for this parameter value
    mean_scores.append(np.mean(scores))

# get maximum score index
index, value = max(enumerate(mean_scores), key=operator.itemgetter(1))
print('Best parameter : %i' % tuned_parameter[index])
which is your code but with references to inner removed.
Now the best parameter is tuned_parameter[index], and now you can learn the final classifier clf3 as in your code.
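For reference, on current scikit-learn versions the whole nested-CV evaluation can be sketched compactly by placing GridSearchCV inside cross_val_score (toy data and an illustrative parameter grid, not the asker's):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import GridSearchCV, KFold, cross_val_score

X, y = make_regression(n_samples=120, n_features=5, noise=1.0, random_state=0)

# Inner CV selects n_estimators; outer CV gives the unbiased performance estimate.
inner = GridSearchCV(ExtraTreesRegressor(random_state=1),
                     {"n_estimators": [10, 20, 30]},
                     cv=KFold(n_splits=3, shuffle=True, random_state=0))
outer_scores = cross_val_score(inner, X, y,
                               cv=KFold(n_splits=3, shuffle=True, random_state=1))
print("Unbiased R^2 estimate: %.3f" % outer_scores.mean())
```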
a) the splitting of the train set into a inner training set and test set is OK.
b) the problem is the last two lines, which reflect the subtle misund | Implementation of nested cross-validation
UPS, the code is wrong, but in a very subtle way!
a) the splitting of the train set into a inner training set and test set is OK.
b) the problem is the last two lines, which reflect the subtle misunderstanding about the purpose of a nested cross-validation. The purpose of a nested CV is not to select the parameters, but to have an unbiased evaluation of what is the expected accuracy of your algorithm, in this case ensemble.ExtraTreesRegressor in this data with the best hyperparameter whatever they might be.
And this is what your code correctly computes up to the line:
print 'Unbiased prediction error: %.4f' % (np.mean(outer_scores))
It used the nested-CV to compute an unbiased prediction of the classifier. But notice that each pass of the outer loop may generate a different best hyperparameter, as you knew when you wrote the line:
print 'Best parameter of %i fold: %i' % (fold + 1, tuned_parameter[index])
So now you need a standard CV loop to select the final best hyperparameter, using folds:
tuned_parameter = [1000, 1100, 1200]
for param in tuned_parameter:
scores = []
# normal cross-validation
kfolds = cross_validation.KFold(len(y), n_folds=3, shuffle=True, random_state=state)
for train_index, test_index in kfolds:
# split the training data
X_train, X_test = X[train_index], X[test_index]
y_train, y_test = y[train_index], y[test_index]
# fit extremely randomized trees regressor to training data
clf2_5 = ensemble.ExtraTreesRegressor(param, n_jobs=-1, random_state=1)
clf2_5.fit(X_train, y_train)
scores.append(clf2_5.score(X_test, y_test))
# calculate mean score for folds
mean_scores.append(np.mean(scores))
# get maximum score index
index, value = max(enumerate(mean_scores), key=operator.itemgetter(1))
print 'Best parameter : %i' % (tuned_parameter[index])
which is your code but with references to inner removed.
Now the best parameter is tuned_parameter[index], and now you can learn the final classifier clf3 as in your code. | Implementation of nested cross-validation
UPS, the code is wrong, but in a very subtle way!
a) the splitting of the train set into a inner training set and test set is OK.
b) the problem is the last two lines, which reflect the subtle misund |
Implementation of nested cross-validation
I have released a package that can help implement nested cross-validation in Python (for the moment, it only works for binary classifiers). If you want to check it out, it's here:
https://github.com/JaimeArboleda/nestedcvtraining
It's my first Python package, so any comments, suggestions or criticism will be more than welcome!
I post it as an answer because nested cross-validation is performed inside the main function and you don't have to take care of how to implement it. It comes with many options that may be enough for a lot of common settings, I think.
Implementation of nested cross-validation
To summarize Jacques' answer,
Nested CV is required for a model's unbiased error estimation. We can compare the score of different models in this manner. Using this information, we can then perform a separate K-fold CV loop for parameter tuning of the selected models.
How to visualize percentages compared along with number of entries.
You wish to compare "effectiveness" and evaluate the numbers of patients reporting each treatment. Effectiveness is recorded in five discrete, ordered categories, but (somehow) is also summarized into an "Avg." (average) value, suggesting it is thought of as a quantitative variable.
Accordingly, we should choose a graphic whose elements are well adapted to convey this kind of information. Among the many excellent solutions that suggest themselves, one uses this schema:
Represent total or average effectiveness as a position along a linear scale. Such positions are most readily grasped visually and accurately read quantitatively. Make the scale common to all 34 treatments.
Represent numbers of patients by some graphical symbol that is easily seen to be directly proportional to those numbers. Rectangles are well suited: they can be positioned to satisfy the preceding requirement and sized in the orthogonal direction so that both their heights and their areas convey the patient-number information.
Distinguish the five effectiveness categories by a color and/or shading value. Maintain the ordering of these categories.
One enormous error made by the graphic in the question is that the most prominent visual values--the lengths of the bars--depict the patient-number information rather than the total effectiveness information. We can fix that easily by recentering each bar about a natural middle value.
Without making any other changes (such as improving the color scheme, which is exceptionally poor for any color-blind person), here is the redesign.
I added horizontal dotted lines to help the eye connect labels with plots, and erased a thin vertical line to show the common central location.
The patterns and numbers of responses are much more evident. In particular, we essentially get two graphics for the price of one: on the left hand side we can read off a measure of adverse effects while on the right hand side we can see how strong the positive effects are. Being able to balance the risk, on the one hand, against the benefit, on the other, is important in this application.
One serendipitous effect of this redesign is that the names of treatments with many responses are vertically separated from the others, making it easy to scan down and see which treatments are the most popular.
Another interesting aspect is that this graphic calls into question the algorithm used to order the treatments by "Avg. effectiveness": why, for instance, is "Headache tracking" placed so low when, among all the most popular treatments, it was the only to have no adverse effects?
The quick-and-dirty R code that produced this plot is appended.
x <- c(0,0,3,5,5,
0,0,0,0,2,
0,0,3,2,4,
0,1,7,9,7,
0,0,3,2,3,
0,0,0,0,1,
0,1,1,1,2,
0,0,2,2,1,
0,0,1,0,1,
0,0,3,2,1,
0,0,2,0,1,
1,0,5,5,2,
1,3,15,15,4,
1,2,5,7,3,
0,0,4,4,0,
0,0,2,2,0,
0,0,3,0,1,
0,0,2,2,0,
0,4,18,19,2,
0,0,2,1,0,
3,1,27,25,3,
1,0,2,2,1,
0,0,4,2,0,
0,1,6,5,0,
0,0,3,1,0,
3,0,3,7,2,
0,1,0,1,0,
0,0,21,4,2,
0,0,6,1,0,
1,0,2,0,1,
2,4,15,8,1,
1,1,3,1,0,
0,0,1,0,0,
0,0,1,0,0)
levels <- c("Made it much worse", "Made it slightly worse", "No effect or uncertain",
"Moderate improvement", "Major improvement")
treatments <- c("Oxygen", "Gluten-free diet", "Zomig", "Sumatriptan", "Rizatriptan (Maxalt)",
"Dilaudid suppository", "Dilaudid-Morphine", "Verapamil",
"Magic mushrooms", "Magnesium", "Psilocybine", "Excedrin Migraine",
"Ice packs on neck and head", "Passage of time", "Red Bull", "Lidocaine",
"Vitamin B-2 (Roboflavin)", "Caffergot", "Caffeine", "Tobasco in nose / on tongue")
treatments <- c(treatments,
"Ibuprofen", "Topamax", "Excedrin Tension Headache", "Acetaminophen (Tylenol)",
"Extra Strength Excedrin", "Hot water bottle", "Eletriptan",
"Headache tracking", "Women to Women vitamins", "Effexor", "Aspirin",
"Propanolol", "L-Arginine", "Fioricet")
x <- t(matrix(x, 5, dimnames=list(levels, treatments)))
#
# Precomputation for plotting.
#
n <- dim(x)[1]
m <- dim(x)[2]
d <- as.data.frame(x)
d$Total <- rowSums(d)
d$Effectiveness <- (x %*% c(-2,-1,0,1,2)) / d$Total
d$Root <- (d$Total)
#
# Set up the plot area.
#
colors <- c("#704030", "#d07030", "#d0d0d0", "#60c060", "#387038")
x.left <- 0; x.right <- 6; dx <- x.right - x.left; x.0 <- x.left-4
y.bottom <- 0; y.top <- 10; dy <- y.top - y.bottom
gap <- 0.4
par(mfrow=c(1,1))
plot(c(x.left-1, x.right), c(y.bottom, y.top), type="n",
bty="n", xaxt="n", yaxt="n", xlab="", ylab="", asp=(y.top-y.bottom)/(dx+1))
#
# Make the plots.
#
u <- t(apply(x, 1, function(z) c(0, cumsum(z)) / sum(z)))
y <- y.top - dy * c(0, cumsum(d$Root/sum(d$Root) + gap/n)) / (1+gap)
invisible(sapply(1:n, function(i) {
lines(x=c(x.0+1/4, x.right), y=rep(dy*gap/(2*n)+(y[i]+y[i+1])/2, 2),
lty=3, col="#e0e0e0")
sapply(1:m, function(j) {
mid <- (x.left - (u[i,3] + u[i,4])/2)*dx
rect(mid + u[i,j]*dx, y[i+1] + (gap/n)*(y.top-y.bottom),
mid + u[i,j+1]*dx, y[i],
col=colors[j], border=NA)
})}))
abline(v = x.left, col="White")
labels <- mapply(function(s,n) paste0(s, " (", n, ")"), rownames(x), d$Total)
text(x.0, (y[-(n+1)]+y[-1])/2, labels=labels, adj=c(1, 0), cex=0.8,
     col="#505050")
You wish to compare "effectiveness" and evaluate the numbers of patients reporting each treatment. Effectiveness is recorded in five discrete, ordered categories, but (somehow) is also summarized into an "Avg." (average) value, suggesting it is thought of as a quantitative variable.
Accordingly, we should choose a graphic whose elements are well adapted to convey this kind of information. Among the many excellent solutions suggest themselves, one uses this schema:
Represent total or average effectiveness as a position along a linear scale. Such positions are most readily grasped visually and accurately read quantitatively. Make the scale common to all 34 treatments.
Represent numbers of patients by some graphical symbol that is easily seen to be directly proportional to those numbers. Rectangles are well suited: they can be positioned to satisfy the preceding requirement and sized in the orthogonal direction so that both their heights and their areas convey the patient-number information.
Distinguish the five effectiveness categories by a color and/or shading value. Maintain the ordering of these categories.
One enormous error made by the graphic in the question is that the most prominent visual values--the lengths of the bars--depict the patient-number information rather than the total effectiveness information. We can fix that easily by recentering each bar about a natural middle value.
Without making any other changes (such as improving the color scheme, which is exceptionally poor for any color-blind person), here is the redesign.
I added horizontal dotted lines to help the eye connect labels with plots, and erased a thin vertical line to show the common central location.
The patterns and numbers of responses are much more evident. In particular, we essentially get two graphics for the price of one: on the left hand side we can read off a measure of adverse effects while on the right hand side we can see how strong the positive effects are. Being able to balance the risk, on the one hand, against the benefit, on the other, is important in this application.
One serendipitous effect of this redesign is that the names of treatments with many responses are vertically separated from the others, making it easy to scan down and see which treatments are the most popular.
Another interesting aspect is that this graphic calls into question the algorithm used to order the treatments by "Avg. effectiveness": why, for instance, is "Headache tracking" placed so low when, among all the most popular treatments, it was the only to have no adverse effects?
The quick-and-dirty R code that produced this plot is appended.
x <- c(0,0,3,5,5,
0,0,0,0,2,
0,0,3,2,4,
0,1,7,9,7,
0,0,3,2,3,
0,0,0,0,1,
0,1,1,1,2,
0,0,2,2,1,
0,0,1,0,1,
0,0,3,2,1,
0,0,2,0,1,
1,0,5,5,2,
1,3,15,15,4,
1,2,5,7,3,
0,0,4,4,0,
0,0,2,2,0,
0,0,3,0,1,
0,0,2,2,0,
0,4,18,19,2,
0,0,2,1,0,
3,1,27,25,3,
1,0,2,2,1,
0,0,4,2,0,
0,1,6,5,0,
0,0,3,1,0,
3,0,3,7,2,
0,1,0,1,0,
0,0,21,4,2,
0,0,6,1,0,
1,0,2,0,1,
2,4,15,8,1,
1,1,3,1,0,
0,0,1,0,0,
0,0,1,0,0)
levels <- c("Made it much worse", "Made it slightly worse", "No effect or uncertain",
"Moderate improvement", "Major improvement")
treatments <- c("Oxygen", "Gluten-free diet", "Zomig", "Sumatriptan", "Rizatriptan (Maxalt)",
"Dilaudid suppository", "Dilaudid-Morphine", "Verapamil",
"Magic mushrooms", "Magnesium", "Psilocybine", "Excedrin Migraine",
"Ice packs on neck and head", "Passage of time", "Red Bull", "Lidocaine",
"Vitamin B-2 (Roboflavin)", "Caffergot", "Caffeine", "Tobasco in nose / on tongue")
treatments <- c(treatments,
"Ibuprofen", "Topamax", "Excedrin Tension Headache", "Acetaminophen (Tylenol)",
"Extra Strength Excedrin", "Hot water bottle", "Eletriptan",
"Headache tracking", "Women to Women vitamins", "Effexor", "Aspirin",
"Propanolol", "L-Arginine", "Fioricet")
x <- t(matrix(x, 5, dimnames=list(levels, treatments)))
#
# Precomputation for plotting.
#
n <- dim(x)[1]
m <- dim(x)[2]
d <- as.data.frame(x)
d$Total <- rowSums(d)
d$Effectiveness <- (x %*% c(-2,-1,0,1,2)) / d$Total
d$Root <- (d$Total)
#
# Set up the plot area.
#
colors <- c("#704030", "#d07030", "#d0d0d0", "#60c060", "#387038")
x.left <- 0; x.right <- 6; dx <- x.right - x.left; x.0 <- x.left-4
y.bottom <- 0; y.top <- 10; dy <- y.top - y.bottom
gap <- 0.4
par(mfrow=c(1,1))
plot(c(x.left-1, x.right), c(y.bottom, y.top), type="n",
bty="n", xaxt="n", yaxt="n", xlab="", ylab="", asp=(y.top-y.bottom)/(dx+1))
#
# Make the plots.
#
u <- t(apply(x, 1, function(z) c(0, cumsum(z)) / sum(z)))
y <- y.top - dy * c(0, cumsum(d$Root/sum(d$Root) + gap/n)) / (1+gap)
invisible(sapply(1:n, function(i) {
lines(x=c(x.0+1/4, x.right), y=rep(dy*gap/(2*n)+(y[i]+y[i+1])/2, 2),
lty=3, col="#e0e0e0")
sapply(1:m, function(j) {
mid <- (x.left - (u[i,3] + u[i,4])/2)*dx
rect(mid + u[i,j]*dx, y[i+1] + (gap/n)*(y.top-y.bottom),
mid + u[i,j+1]*dx, y[i],
col=colors[j], border=NA)
})}))
abline(v = x.left, col="White")
labels <- mapply(function(s,n) paste0(s, " (", n, ")"), rownames(x), d$Total)
text(x.0, (y[-(n+1)]+y[-1])/2, labels=labels, adj=c(1, 0), cex=0.8,
col="#505050")
28,090 | How to visualize percentages compared along with number of entries. | You could certainly turn each row into percentages and plot all the bars as the same length, with the fraction of the bar that is green then giving a good visual indicator of effectiveness. You could retain the number in brackets by the side to indicate what sample size the results are based on.
If you want to retain a visual indicator of number of samples as well as effectiveness, you could consider the chart as is, but centre the bars based on the centre of the grey section. Then, the overall size of the bar will visually indicate the sample size and the proportion of the bar that is to the right (or left) of the centre line will give an indication of effectiveness (or otherwise). In combination, you get a visual indication of popular and rated effective treatment from those bars that reach furthest to the right. You could sort in any of the three ways that are available on the page you linked.
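As a rough R sketch of the centred-bar idea (the counts and treatment names here are hypothetical, just to show the geometry):

```r
# Hypothetical counts per treatment: worse / no effect / better
counts <- matrix(c(2, 5, 10,
                   1, 3,  2),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("Treatment A", "Treatment B"),
                                 c("Worse", "No effect", "Better")))

# Centre each bar on the midpoint of its grey ("No effect") segment,
# so the total bar length still encodes the sample size
mid   <- counts[, "Worse"] + counts[, "No effect"] / 2
left  <- -mid
right <- rowSums(counts) - mid

plot(range(c(left, right)), c(0.5, nrow(counts) + 0.5), type = "n",
     xlab = "Responses (centred on neutral)", ylab = "", yaxt = "n")
cols <- c("firebrick", "grey80", "forestgreen")
for (i in 1:nrow(counts)) {
  edges <- left[i] + c(0, cumsum(counts[i, ]))
  rect(edges[-length(edges)], i - 0.3, edges[-1], i + 0.3,
       col = cols, border = NA)
}
abline(v = 0, lty = 3)  # the common centre line
axis(2, at = 1:nrow(counts), labels = rownames(counts), las = 1)
```

Bars reaching furthest to the right are both popular and rated effective, which is the combined reading described above.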
28,091 | Can we model non random factors as random in a multilevel/hierarchical design? | I'm puzzled by your question. I know you say you understand fixed vs. random effects, but perhaps you don't understand them in the same way I do. I've posted a rather extended excerpt from an in-press book chapter here which explains my view (rather pragmatic, fairly closely aligned with Andrew Gelman's).
More directly answering the question:
it doesn't (IMO) make any sense to include the main effects of socioeconomic variables such as income as random. If you had more than one measurement of income per individual, you could include individual as a grouping variable and allow the effects of income on the response (whatever it is) to vary across individuals.
Race seems to make most sense as a fixed effect, and it's unlikely that you're going to be able to measure an individual under the effects of more than one race, but you might (e.g.) be able to characterize random variation in the effects of race across different countries. You could treat it as a random effect (i.e. model differences among races as being drawn from a Normal distribution), but it's likely to be impractical because you probably won't have enough different races in your data set, and it would be hard for me to come up with a good conceptual argument for this either ...
"area of living" makes sense as a grouping variable, which could certainly be a reasonable random effect (i.e. the intercept would vary across living areas). Individual would probably be nested within area, unless individuals move between areas over the time scale of your study.
your situation seems to be a case where you have some random variation across individuals, but you also have individual-level covariates. Adding these individual-level covariates (race, income, etc.) to the model will account for some of the among-individual variability (and is probably a good idea).
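In lme4 syntax, the structure sketched above might look like the following (this assumes the lme4 package; the data and variable names are invented stand-ins, not the OP's data):

```r
library(lme4)

set.seed(1)
# Simulated stand-in data: 10 living areas, 20 individuals per area
d <- data.frame(area   = factor(rep(1:10, each = 20)),
                income = rnorm(200, 50, 10),
                race   = factor(sample(c("A", "B", "C"), 200, replace = TRUE)))
d$outcome <- 2 + 0.1 * d$income + rnorm(10, sd = 2)[d$area] + rnorm(200)

# Individual-level covariates enter as fixed effects; the intercept varies
# across areas (a random effect for the grouping variable)
m <- lmer(outcome ~ income + race + (1 | area), data = d)
summary(m)

# With repeated measures per individual, individuals nested within areas
# would be written (1 | area/id), which expands to (1 | area) + (1 | area:id)
```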
It may add clarity to distinguish among grouping variables (which must be categorical), which represent the groups across which things vary, and effects, which are the differences in some parameter/effect (usually the intercept, but could be the effects of income/education/whatever) across the levels of some grouping variable.
update: I will take the liberty of giving some counterpoint to your
My understanding of random effects: factors that are randomly selected from a population;
Maybe, it depends on your philosophical outlook. This is required in the classical frequentist paradigm, but I would relax it somewhat by asking whether it's reasonable to treat the effects as being random draws from some hypothetical population. (The classic examples here are (1) exhaustive sampling (what if you have measurements for every neighborhood in the city, or every region/province/state in a country? Can you still treat them as random draws from some superpopulation?) and (2) time periods measured sequentially (e.g. years 2002-2012). In both of these cases I would say it makes pragmatic sense to model them using random effects.)
levels of the factor is of little interest;
not necessarily. I don't think the idea that random effects must be nuisance variables holds up in practice. For example, in animal-breeding analyses one may be very interested in knowing the breeding value (BLUP) of a particular animal. (The so-called level of focus does have some implications for how one compares models.)
variables are unobserved factors.
I'm not sure what this one means. You know what neighborhood each observation comes from, right? How is that "unobserved"? (If you suspected clustering in your data based on unobserved factors you would need to fit a discrete mixture model.) If you mean that you don't know why neighborhoods are different, I don't think that matters here.
So take neighborhood as an example. It is my variable of main interest, the levels are important. I use mixed models and verify that a great deal of variance lies within it.
The only reason I can think of not to use neighborhood as a random effect would be if you had only measured a small number (say <6) of neighborhoods.
28,092 | What is the relationship between $p$ values and Type I errors [duplicate] | [Assume, for the moment that we're not talking about composite null hypotheses, since it will simplify the discussion to stick to the simpler case. Similar points could be made in the composite case but the resulting additional discussion would be likely to prove less illuminating]
The probability of a type I error, which (if the assumptions hold) is given by $\alpha$, is a probability under the notion of repeated sampling. If you collect data many times when the null is true, in the long run a proportion $\alpha$ of those times you would reject. In effect it tells you the probability of a Type I error before you sample.
The p-value is instance-specific and conditional. It does not tell you the probability of a type I error, either before you sample (it can't tell you that, since it depends on the sample), or after:
If $p\geq\alpha$ then the chance you made a Type I error is zero.
If the null is true and $p<\alpha$ then the chance you made a Type I error is 1.
Take another look at the two things under discussion:
P(Type I error) = P(reject H$_0$|H$_0$ true)
p-value = P(sample result at least as extreme as the observed sample value|H$_0$ true, sample)
They're distinct things.
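A small simulation makes the distinction concrete (plain R; the quantities below are long-run approximations, not exact values):

```r
set.seed(1)
alpha <- 0.05

# Repeated sampling with the null true: both groups from the same distribution
p <- replicate(10000, t.test(rnorm(20), rnorm(20))$p.value)

mean(p < alpha)  # long-run rejection rate, close to alpha = 0.05
hist(p)          # each sample has its own p; under the null they are ~uniform
```

The first quantity is the repeated-sampling Type I error rate fixed in advance by $\alpha$; the individual entries of p are sample-specific and tell you nothing about that long-run rate.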
Edit - It appears from comments that it is necessary to address your second paragraph in detail:
The p value seems to give an exact estimate of the probability of falsely rejecting a true null hypothesis
Not so, as discussed above. (I assumed this was sufficient to render the rest of the question moot.)
α seems to be a maximum acceptable Type I error,
In effect, yes (though of course we may choose a lower $\alpha$ than the absolute maximum rate we'd be prepared to accept, for a variety of reasons).
whereas p is exact.
Again, not so; it's not equivalent to $\alpha$ in the suggested sense. As I suggest, both the numerator and denominator in the conditional probability differ from the ones for $\alpha$.
Put differently it appears to give the minimum α level under which we could still reject the null.
In spite of my earlier caveats, there is a direct (and not necessarily particularly interesting) sense in which this is true. Note that $\alpha$ is chosen before the test, $p$ is observed after, so it's necessary to shift from our usual situation.
If we posit the following counterfactual:
we have a collection of hypothesis testers, each operating at their own significance level
they are each presented with the same set of data
then it is the case that the p-value is a dividing line between those testers that reject and those that accept. In that sense, the p-value is the minimum α level under which testers could still reject the null. But in a real testing situation, $\alpha$ is fixed, not variable, and the probability we're dealing with is either 0 or 1 (in a somewhat similar sense to the way people say "the probability that the confidence interval includes the parameter").
Our probability statements refer to repeated sampling; if we posit a collection of testers each with their individual $\alpha$, and consider only a single data set to test one, it's not clear $\alpha$ is the probability of anything in that scenario - rather, $\alpha$ represents something if we had a collection of testers and repeated sampling where the null is true - they'd each be rejecting a proportion $\alpha$ of their nulls across samples, while $p$ would represent something about each sample.
28,093 | What is the relationship between $p$ values and Type I errors [duplicate] | Your interpretation seems about correct. The caveat I would add is that $\alpha$ is an a priori decision to be made before conducting an hypothesis test. So it's no good finding that the p-value for a test statistic is, say, 0.00021, and then reporting that your test had an $\alpha$ of 0.00021; that would make $\alpha$ and p (falsely) synonymous.
28,094 | What is the relationship between $p$ values and Type I errors [duplicate] | The $p$-value is not "an exact estimate of the probability of falsely rejecting a true null hypothesis". This probability is fixed by construction of an $\alpha$-level test. Rather it is an estimate of the probability that other realisations of the experiment are more extreme than the actual realisation. Only if the present realisation belongs to the top $\alpha$ extreme realisations, we reject the null hypothesis.
But it is right that you can imagine the $p$-value to be the minimum $\alpha$ such that, had this $\alpha$ been chosen, the test would sit on the border between significance and insignificance for the present data.
Maybe a different explanation helps: We say that we reject the null hypothesis, iff the present outcomes can be shown to belong to the extreme $100 \alpha \%$ of possible outcomes, provided the null hypothesis holds. The $p$-value just indicates how extreme our outcomes actually are.
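The borderline reading can be checked directly in R (simulated data; the particular p-value is not meaningful):

```r
set.seed(3)
x <- rnorm(25, mean = 0.4)  # simulated sample
p <- t.test(x)$p.value      # observed p-value

# A level-alpha t-test rejects exactly when alpha >= p, so p is the
# smallest alpha at which these data would still be called significant:
c(p < 0.001, p < 0.01, p < 0.05, p < 0.10)
```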
28,095 | What is the relationship between $p$ values and Type I errors [duplicate] | You're confusing probability and $p$ in two parallel ways.
Long run probabilities, like Type I error rates, should not be thought about as directly comparable to a conditional probability connected to a single event (the data collected). In this case the latter is the probability that data with values as extreme, or moreso, than the current data are produced by a null model, $p$. And, the $p$ is not the probability of falsely rejecting the null.
Imagine a range of experiments where the null must be true (e.g. comparing two coins for bias). Further imagine selecting various $\alpha$ values prior to running experiments. Won't the smaller $\alpha$'s result in it being less likely that you'll make the Type I error? Would the Type I error be affected at all by the outcome of any one experiment?
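That thought experiment is easy to run in R (two fair coins, so the null is true by construction; the rejection rates are approximate):

```r
set.seed(2)
p <- replicate(10000, {
  flips <- rbinom(2, size = 100, prob = 0.5)  # heads counts for two fair coins
  prop.test(flips, c(100, 100), correct = FALSE)$p.value
})
mean(p < 0.05)  # long-run Type I error rate, roughly 0.05
mean(p < 0.01)  # a smaller pre-chosen alpha yields fewer false rejections
```

No single experiment's outcome changes these long-run rates; only the pre-chosen $\alpha$ does.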
I think such confusion often arises because we estimate population parameters while doing testing of a sample. So the mean is an estimate of $\mu$ and the standard deviation is an estimate of $\sigma$ but the $p$ is not a population parameter at all. It's just the probability of the current data or more extreme values if the effect was 0. If you decide that the effect is not 0 then it doesn't mean anything.
28,096 | What is the relationship between $p$ values and Type I errors [duplicate] | When one calculates a $p$-value, one is actually computing a conditional probability in which the condition being assumed to be true is the null hypothesis. So in this way, the $p$-value is in a sense a quantifier of how likely we would expect to observe a sample at least as extreme as the one we saw, assuming the sample satisfies the distributional assumptions of the null hypothesis. This latter part is extremely important, because only then can we infer from the $p$-value whether or not there exists sufficient evidence to reject the null hypothesis.
If I give you a coin but tell you nothing about whether it is fair, and you toss it $100$ times and get $99$ heads and $1$ tail, you would very likely and reasonably conclude that the coin is not in fact fair. The way you would quantify this impression, as a statistician, is to first suppose that the coin is fair, and then demonstrate through the use of a binomial proportion test, that the chance that you could have gotten such an extreme result under that supposition is incredibly small--this value is the $p$-value of the test.
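In R that hypothetical coin experiment is a one-liner (an exact binomial test):

```r
# 99 heads in 100 tosses of a supposedly fair coin
binom.test(99, 100, p = 0.5)$p.value  # tiny (on the order of 1e-28), but not zero
```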
That said, even though that chance is astronomically small, it still isn't zero. There is a tiny, tiny possibility that a truly fair coin, tossed $100$ times, could give you $99$ heads and $1$ tail; or $99$ tails and one head; or all heads; or all tails, all due to random chance. Just because an event is rare does not mean it is impossible. Therefore, whenever we conduct a nontrivial statistical test, there is always some possibility of error. You would be quite confident that the coin isn't fair, but you could be wrong, and the probability you could be wrong in this sense is the Type I error.
28,097 | How can you test homogeneity of variance of two groups with different sample sizes? | I don't know what code you used, but tests do not require equal sample sizes. You can use Levene's test to check for heteroscedasticity. In R, you can use ?leveneTest in the car package:
set.seed(9719) # this makes the example exactly reproducible
g1 = rnorm( 50, mean=2, sd=2) # here I generate data w/ different variances
g2 = rnorm(100, mean=3, sd=3) # & different sample sizes
my.data = stack(list(g1=g1, g2=g2)) # getting the data into 'stacked' format
library(car) # this package houses the function
leveneTest(values~ind, my.data) # here I test for heteroscedasticity:
# Levene's Test for Homogeneity of Variance (center = median)
# Df F value Pr(>F)
# group 1 8.4889 0.004128 **
# 148
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Levene's test is just a $t$-test ($F$-test) on transformed data. (I discuss tests for heteroscedasticity here: Why Levene test of equality of variances rather than F ratio?) What having unequal sample sizes will do is cause you to have less power to detect a difference. To understand this more fully, it may help to read my answer here: How should one interpret the comparison of means from different sample sizes? Note however, that running a test of your assumptions and then choosing a primary test is not generally recommended (see, e.g., here: A principled method for choosing between t-test or non-parametric e.g. Wilcoxon in small samples). If you are worried that there may be heteroscedasticity, you might do best to simply use a test that won't be susceptible to it, such as the Welch $t$-test, or even the Mann-Whitney $U$-test (which doesn't even require normality). Some information about alternative strategies can be gathered from my answer here: Alternatives to one-way ANOVA for heteroskedastic data.
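As a sketch of those robust alternatives, here they are applied to the same simulated data as above (note that R's `t.test` performs the Welch test by default, i.e. `var.equal = FALSE`):

```r
set.seed(9719)                       # same simulated data as above
g1 = rnorm( 50, mean=2, sd=2)
g2 = rnorm(100, mean=3, sd=3)

t.test(g1, g2)                       # Welch t-test: doesn't assume equal variances
wilcox.test(g1, g2)                  # Mann-Whitney U-test: doesn't assume normality either
```

Neither test requires you to check (or even think much about) the equal-variance assumption first.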
28,098 | How can you test homogeneity of variance of two groups with different sample sizes? | If you're trying to test regression/anova/t-test assumptions, the advice of multiple papers is you're better off not testing the assumption as a basis for choosing a procedure to apply (e.g. for choosing between an equal-variance t-test and a Welch t-test, or between ANOVA and a Welch-Satterthwaite-type adjusted ANOVA).
If you can't make the assumption a priori and your original sample sizes are not equal (or at least very close to equal) you should simply not use a procedure that assumes equal variances (in effect, always assume your heteroskedasticity test would reject, without looking).
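A quick simulation illustrates why (a sketch; the sample sizes, standard deviations, and replicate count are arbitrary choices): when the smaller group has the larger variance, the pooled (equal-variance) t-test rejects a true null far more often than the nominal 5%, while the Welch test stays close to it.

```r
set.seed(1)
n_sims <- 2000
pooled_p <- welch_p <- numeric(n_sims)
for (i in 1:n_sims) {
  # Both groups share the same mean, so the null hypothesis is true;
  # the small group has the large standard deviation
  g1 <- rnorm(10,  mean = 0, sd = 3)
  g2 <- rnorm(100, mean = 0, sd = 1)
  pooled_p[i] <- t.test(g1, g2, var.equal = TRUE)$p.value
  welch_p[i]  <- t.test(g1, g2)$p.value   # Welch is R's default
}
mean(pooled_p < 0.05)  # well above the nominal 0.05
mean(welch_p  < 0.05)  # close to 0.05
```

The pooled test's estimated standard error is dominated by the large, low-variance group, so it understates the true variability of the mean difference.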
28,099 | How can you test homogeneity of variance of two groups with different sample sizes? | In R:
install.packages('lawstat')
require(lawstat)
# levene.test() in lawstat takes the response and the grouping variable
# as two separate vectors; it does not accept a formula
# (unlike leveneTest() in the car package)
levene.test(your_data$x, your_data$y)
Should work this way even if you have unequal sample sizes.
28,100 | How to evaluate the goodness of fit for survival functions | The main problem with statistics like the Cox model $R^2$ (described in another answer) is that it's very dependent on the censorship distribution of your data. Other natural things you might look at, such as the likelihood ratio to the null model, also have this problem. (This is basically because the contribution of a censored datapoint to the likelihood is very different from the contribution of a datapoint where the event is observed, because one of them comes from a PDF and one of them comes from a CDF.) Various researchers have proposed ways to get around this, but the ones I've seen usually require you to have a model of the censorship distribution or something equally impractical. I haven't looked into how bad this dependence is in practice, so if your censoring is fairly mild, you could still look into likelihood-ratio-based statistics. For survival CART models, you can always look at the actual likelihood ratio they give over, say, the Kaplan-Meier estimate of the hazard function.
For generic survival models, one frequently-used statistic is Harrell's c index, an analog of Kendall's $\tau$ or the ROC AUC for survival models. Essentially, c is the proportion, out of all instances where you know that one instance experienced an event later than the other, that the model ranks correctly. (In other words, for a pair of instances to be included in the denominator here, at most one can be censored, and it must be censored after the other one experienced an event.) The c index also depends on the censorship distribution, but according to Harrell the dependence is milder than for the other statistics I mentioned above. Unfortunately, Harrell's c is also less sensitive than the above statistics, so you may not want to choose between models based on it if the difference between them is small; it's more useful as an interpretable index of general performance than a way to compare different models.
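That pairwise definition can be computed directly. A minimal sketch in base R (ignoring some refinements around tied event times, and using made-up toy data; packages such as Hmisc and survival provide production implementations):

```r
# Harrell's c: among usable pairs (those where we know which subject
# experienced the event first), the fraction the model ranks correctly.
# A higher predicted risk should go with an earlier event.
c_index <- function(time, event, risk) {
  concordant <- 0; usable <- 0
  n <- length(time)
  for (i in 1:(n - 1)) {
    for (j in (i + 1):n) {
      # Order the pair so subject a has the earlier observed time
      a <- if (time[i] <= time[j]) i else j
      b <- if (time[i] <= time[j]) j else i
      # The pair is usable only if the earlier subject actually
      # experienced the event strictly before the other's time
      if (event[a] == 1 && time[a] < time[b]) {
        usable <- usable + 1
        if (risk[a] >  risk[b]) concordant <- concordant + 1
        if (risk[a] == risk[b]) concordant <- concordant + 0.5
      }
    }
  }
  concordant / usable
}

# Toy data: subject 2 is censored at time 3
time  <- c(2, 3, 5, 7)
event <- c(1, 0, 1, 1)
risk  <- c(0.9, 0.1, 0.5, 0.2)  # hypothetical predicted risk scores
c_index(time, event, risk)      # = 1 here: every usable pair is ranked correctly
```

Note that the pairs involving the censored subject's earlier side are simply dropped from the denominator, which is exactly why c depends (mildly) on the censorship distribution.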
(Lastly, of course if you have a specific purpose in mind for the models--that is, if you know what your prediction loss function is--you can always evaluate them according to the loss function! But I'm guessing you're not so lucky...)
For a more in-depth discussion of both likelihood-ratio statistics and Harrell's c, you should look at Harrell's excellent textbook Regression Modeling Strategies. The section on evaluating survival models is §19.10, pp. 492-493. I'm sorry I can't give you a single definitive answer, but I don't think this is a solved problem!