9,501
Why doesn't backpropagation work when you initialize the weights to the same value?
To add to Thierry's answer, you can think of the error as a function of the weight vector, i.e. as a function from $\mathbb{R}^n \rightarrow \mathbb{R}$ which you would like to minimize. The backpropagation algorithm works by looking at a local neighborhood of a point and seeing which direction leads to a smaller error. This gives you a local minimum.
What you want is a global minimum, but you have no guaranteed way of finding it. And if your surface has several local minima then you may be in trouble.
But if it has only a few, then Thierry's strategy should work: performing multiple searches for local minima, starting at randomly selected points, should increase the chances of finding the global minimum.
And in the happy case in which there is only one minimum, any initial weight vector will lead you to it.
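Thierry's multiple-restart strategy can be sketched numerically. Below is a minimal Python illustration (the quartic "error surface" and all the constants are invented for the example, not a real network's loss): gradient descent from each random start finds some local minimum, and keeping the best of those tends to recover the global one.

```python
import random

def f(w):
    # A 1-D "error surface" with two local minima (hypothetical example)
    return (w**4 - 8 * w**2 + 2 * w + 20) / 10

def grad(w, eps=1e-6):
    # Central-difference numerical gradient
    return (f(w + eps) - f(w - eps)) / (2 * eps)

def descend(w, lr=0.01, steps=2000):
    # Plain gradient descent: always ends in the local minimum of w's basin
    for _ in range(steps):
        w -= lr * grad(w)
    return w

random.seed(0)
# Many searches from random starting points; keep the best local minimum found
starts = [random.uniform(-4, 4) for _ in range(20)]
minima = [descend(w0) for w0 in starts]
best = min(minima, key=f)
print(round(best, 2))  # near the global minimum, around w ≈ -2.06
```

A single start can land in the shallower basin near w ≈ 1.94; the restarts are what make finding the deeper minimum likely.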
9,502
How to interpret main effects when the interaction effect is not significant?
A little niggle
'Now many textbook examples tell me that if there is a significant effect of the interaction, the main effects cannot be interpreted'
I hope that's not true. They should say that if there is an interaction term, say XZ between X and Z, then the individual coefficients for X and for Z cannot be interpreted in the same way as if XZ were not present. You can definitely still interpret them.
Question 2
If the interaction makes theoretical sense then there is no reason not to leave it in, unless concerns about statistical efficiency for some reason override concerns about misspecification and about allowing your theory and your model to diverge.
Given that you have left it in, interpret your model using marginal effects in the same way as if the interaction were significant. For reference, I include a link to Brambor, Clark and Golder (2006), who explain how to interpret interaction models and how to avoid the common pitfalls.
Think of it this way: you often have control variables in a model that turn out not to be significant, but you don't (or shouldn't) go chopping them out at the first sign of missing stars.
Question 1
You ask whether you can 'conclude that the two predictors have an effect on the response'. Apparently you can, but you can also do better. For the model with the interaction term you can report what effect the two predictors actually have on the dependent variable (marginal effects) in a way that is indifferent to whether the interaction is significant, or even present in the model.
The Bottom Line
If you remove the interaction you are re-specifying the model. This may be a reasonable thing to do for many reasons, some theoretical and some statistical, but making it easier to interpret the coefficients is not one of them.
9,503
How to interpret main effects when the interaction effect is not significant?
If you want the unconditional main effect then yes, you do want to run a new model without the interaction term, because that interaction term is not allowing you to see your unconditional main effects correctly. The main effects calculated with the interaction present are different from the main effects as one typically interprets them in something like ANOVA. For example, it's possible to have a trivial and non-significant interaction and yet find that the main effects aren't apparent while the interaction is in the model.
Let's say you have two predictors, A and B. When you include the interaction term, the magnitude of the effect of A is allowed to vary depending on B, and vice versa. The reported beta coefficient in the regression output for A is then just one of many possible values: by default it is the coefficient of A for the case when B is 0, so that the interaction term is 0. But when the regression is just additive, A is not allowed to vary across B, and you get the main effect of A independent of B. These can be very different values, even if the interaction is trivial, because they mean different things. The additive model is the only way to really assess the main effect by itself. On the other hand, when your interaction is meaningful (theoretically, not statistically) and you want to keep it in your model, then the only way to assess A is by looking at it across levels of B. That's the kind of thing you have to consider with respect to the interaction, not whether A is significant. You can only really see whether there is an unconditional effect of A in the additive model.
So the models are looking at very different things, and this is not an issue of multiple testing. You must look at it both ways. You don't decide based on significance. The best main effect to report is from the additive model. You make the decision on including or presenting the non-significant interaction based on theoretical issues, data presentation issues, etc.
(This is not to say that there are no potential multiple-testing issues here. But what they mean depends a great deal on the theory driving the tests.)
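The contrast between the additive coefficient of A and the "when B is 0" coefficient can be seen in a few lines of simulation. This is a hypothetical sketch in Python/NumPy; the data-generating coefficients, the non-centered predictors, and the tiny 0.05 interaction are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
A = rng.normal(2.0, 1.0, n)   # note: A and B are not centered, so "B = 0" matters
B = rng.normal(3.0, 1.0, n)
# True model has a deliberately tiny interaction (0.05)
y = 1.0 + 0.5 * A + 0.8 * B + 0.05 * A * B + rng.normal(0, 1.0, n)

def ols(predictors, y):
    # Least-squares fit with an intercept column prepended
    X = np.column_stack([np.ones(len(y))] + predictors)
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_add = ols([A, B], y)           # additive model: y ~ A + B
b_int = ols([A, B, A * B], y)    # interaction model: y ~ A + B + A:B

print(b_add[1])  # unconditional effect of A, close to 0.5 + 0.05 * mean(B) ≈ 0.65
print(b_int[1])  # effect of A *when B = 0*, close to 0.5
```

Even with a trivial interaction, the two models report noticeably different coefficients for A, because they answer different questions.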
9,504
How to interpret main effects when the interaction effect is not significant?
If the main effects are significant but the interaction is not, you simply interpret the main effects, as you suggested.
You do not need to run another model without the interaction (it is generally not good advice to exclude parameters based on significance; there are many answers here discussing that). Just take the results as they are.
9,505
How can I calculate the margin of error in an NPS (Net Promoter Score) result?
Suppose the population, from which we assume you are sampling randomly, contains proportions $p_1$ of promoters, $p_0$ of passives, and $p_{-1}$ of detractors, with $p_1+p_0+p_{-1}=1$. To model the NPS, imagine filling a large hat with a huge number of tickets (one for each member of your population) labeled $+1$ for promoters, $0$ for passives, and $-1$ for detractors, in the given proportions, and then drawing $n$ of them at random. The sample NPS is the average value on the tickets that were drawn. The true NPS is computed as the average value of all the tickets in the hat: it is the expected value (or expectation) of the hat.
A good estimator of the true NPS is the sample NPS. The sample NPS also has an expectation. It can be considered to be the average of all the possible sample NPS's. This expectation happens to equal the true NPS. The standard error of the sample NPS is a measure of how much the sample NPS's typically vary between one random sample and another. Fortunately, we do not have to compute all possible samples to find the SE: it can be found more simply by computing the standard deviation of the tickets in the hat and dividing by $\sqrt{n}$. (A small adjustment can be made when the sample is an appreciable proportion of the population, but that's not likely to be needed here.)
For example, consider a population of $p_1=1/2$ promoters, $p_0=1/3$ passives, and $p_{-1}=1/6$ detractors. The true NPS is
$$\mbox{NPS} = 1\times 1/2 + 0\times 1/3 + -1\times 1/6 = 1/3.$$
The variance is therefore
$$\eqalign{
\mbox{Var(NPS)} &= (1-\mbox{NPS})^2\times p_1 + (0-\mbox{NPS})^2\times p_0 + (-1-\mbox{NPS})^2\times p_{-1}\\
&=(1-1/3)^2\times 1/2 + (0-1/3)^2\times 1/3 + (-1-1/3)^2\times 1/6 \\
&= 5/9.
}$$
The standard deviation is the square root of this, about equal to $0.75.$
In a sample of, say, $324$, you would therefore expect to observe an NPS around $1/3 = 33$% with a standard error of $0.75/\sqrt{324}=$ about $4.1$%.
You don't, in fact, know the standard deviation of the tickets in the hat, so you estimate it by using the standard deviation of your sample instead. When divided by the square root of the sample size, it estimates the standard error of the NPS: this estimate is the margin of error (MoE).
Provided you observe substantial numbers of each type of customer (typically, about 5 or more of each will do), the distribution of the sample NPS will be close to Normal. This implies you can interpret the MoE in the usual ways. In particular, about 2/3 of the time the sample NPS will lie within one MoE of the true NPS and about 19/20 of the time (95%) the sample NPS will lie within two MoEs of the true NPS. In the example, if the margin of error really were 4.1%, we would have 95% confidence that the survey result (the sample NPS) is within 8.2% of the population NPS.
Each survey will have its own margin of error. To compare two such results you need to account for the possibility of error in each. When survey sizes are about the same, the standard error of their difference can be found by a Pythagorean theorem: take the square root of the sum of their squares. For instance, if one year the MoE is 4.1% and another year the MoE is 3.5%, then roughly figure a margin of error around $\sqrt{3.5^2+4.1^2}$ = 5.4% for the difference in those two results. In this case, you can conclude with 95% confidence that the population NPS changed from one survey to the next provided the difference in the two survey results is 10.8% or greater.
When comparing many survey results over time, more sophisticated methods can help, because you have to cope with many separate margins of error. When the margins of error are all pretty similar, a crude rule of thumb is to consider a change of three or more MoEs as "significant." In this example, if the MoEs hover around 4%, then a change of around 12% or larger over a period of several surveys ought to get your attention and smaller changes could validly be dismissed as survey error. Regardless, the analysis and rules of thumb provided here usually provide a good start when thinking about what the differences among the surveys might mean.
Note that you cannot compute the margin of error from the observed NPS alone: it depends on the observed numbers of each of the three types of respondents. For example, if almost everybody is a "passive," the survey NPS will be near $0$ with a tiny margin of error. If the population is polarized equally between promoters and detractors, the survey NPS will still be near $0$ but will have the largest possible margin of error (equal to $1/\sqrt{n}$ in a sample of $n$ people).
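The arithmetic above is easy to automate. Here is a small Python sketch (the function name is my own) that reproduces the worked example, including the Pythagorean combination of two margins of error:

```python
from math import sqrt

def nps_moe(n_promoters, n_passives, n_detractors):
    # Sample NPS and its estimated standard error (the margin of error),
    # following the ticket-in-a-hat reasoning above
    n = n_promoters + n_passives + n_detractors
    nps = (n_promoters - n_detractors) / n
    # Variance of the +1 / 0 / -1 scores around the sample NPS
    var = ((1 - nps) ** 2 * n_promoters
           + (0 - nps) ** 2 * n_passives
           + (-1 - nps) ** 2 * n_detractors) / n
    return nps, sqrt(var) / sqrt(n)

# The worked example: proportions 1/2, 1/3, 1/6 in a sample of 324
nps, moe = nps_moe(162, 108, 54)
print(round(nps * 100, 1), round(moe * 100, 1))  # 33.3 4.1

# Combining two surveys' MoEs for the difference (Pythagorean rule)
moe_diff = sqrt(0.035 ** 2 + 0.041 ** 2)
print(round(moe_diff * 100, 1))  # 5.4
```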
9,506
How can I calculate the margin of error in an NPS (Net Promoter Score) result?
You could also use the variance estimator for continuous variables. Actually, I'd prefer it over the variance estimator for the discrete random variable, since there is a well-known correction for calculating the sample variance: https://en.wikipedia.org/wiki/Unbiased_estimation_of_standard_deviation
As others noted, whuber's solution is based on population formulae. However, since you are running a survey, I'm pretty sure you've drawn a sample, so I would recommend using the unbiased estimator (dividing the sum of squares by n-1, not by n). Of course, for large sample sizes, the difference between the biased and unbiased estimators is virtually non-existent.
I'd also recommend using a t-test procedure if you have medium sample sizes, instead of the z-score approach: https://en.wikipedia.org/wiki/Student's_t-test
@whuber: since others have asked it too: how would one calculate the unbiased sample estimator of the variance/SD for your discrete random variable approach? I've tried to find it on my own but wasn't successful. Thanks.
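As an illustration of how little the n vs. n-1 choice matters at survey-scale samples, here is a small Python/NumPy sketch with invented scores, recoded +1/0/-1 as in whuber's answer:

```python
import numpy as np

# Hypothetical sample: 9 promoters (+1), 6 passives (0), 5 detractors (-1)
scores = np.array([1] * 9 + [0] * 6 + [-1] * 5)
n = len(scores)

var_pop = scores.var(ddof=0)       # population formula: divide by n
var_unbiased = scores.var(ddof=1)  # sample formula: divide by n - 1

se_pop = np.sqrt(var_pop / n)
se_unbiased = np.sqrt(var_unbiased / n)
print(se_pop, se_unbiased)  # nearly identical, and closer still as n grows
```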
9,507
How can I calculate the margin of error in an NPS (Net Promoter Score) result?
You can potentially use the bootstrap to simplify your calculations. In R, the code would be:
library(bootstrap)

NPS <- function(x) {
  if (any(x %% 1 != 0)) stop("Non-integers found in the scores.")
  if (any(x > 10 | x < 0)) stop("Scores not on a scale of 0 to 10.")
  # Promoters (9-10) count +1, detractors (0-6) count -1, passives (7-8) count 0
  sum(ifelse(x < 7, -1, ifelse(x > 8, 1, 0))) / length(x) * 100
}

NPSconfInt <- function(x, confidence = .9, iterations = 10000) {
  quantile(bootstrap(x, iterations, NPS)$thetastar,
           c((1 - confidence) / 2, 1 - (1 - confidence) / 2))
}

npsData <- c(1, 5, 6, 8, 9, 7, 0, 10, 7, 8,
             6, 5, 7, 8, 2, 8, 10, 9, 8, 7, 0, 10)  # Supply NPS data
hist(npsData, breaks = 11)   # Histogram of NPS responses
NPS(npsData)                 # Calculate NPS (evaluates to -14)
NPSconfInt(npsData, .7)      # 70% confidence interval (approx. -32 to 5)
9,508
What is Bayes Error in machine learning?
Bayes error is the lowest possible prediction error that can be achieved, and it is the same as the irreducible error. Even if one knew exactly what process generates the data, errors would still be made if the process is random. This is also what is meant by "$y$ is inherently stochastic".
For example, when flipping a fair coin, we know exactly what process generates each outcome (a Bernoulli distribution). However, if we were to predict the outcomes of a series of coin flips, we would still make errors, because the process is inherently random (i.e. stochastic).
To answer your other question: you are correct in stating that the total error is the sum of the (squared) bias, the variance, and the irreducible error. See also this article for an easy-to-understand explanation of these three concepts.
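A tiny simulation makes this concrete. Assuming a hypothetical biased coin with P(heads) = 0.7, the Bayes-optimal predictor always predicts the more likely outcome (heads), yet still errs about 30% of the time; that 30% is the Bayes (irreducible) error:

```python
import random

random.seed(1)
p_heads = 0.7  # hypothetical biased coin
flips = [random.random() < p_heads for _ in range(100_000)]

# The best possible predictor always says "heads"; it errs exactly on tails
errors = sum(1 for outcome in flips if not outcome)
bayes_error_estimate = errors / len(flips)
print(bayes_error_estimate)  # close to 1 - 0.7 = 0.3
```

No model, however complex, can predict these flips with error below 0.3, because the randomness is in the process itself.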
9,509
What is Bayes Error in machine learning?
The essence of statistics is lack of information. For example, to determine the outcome of a coin flip, we would have to know the gravitational field at the test point, the coin's curvature, the wind speed, the hand posture, and so on. If all of that were determined, the outcome of the experiment would be known with certainty; but we cannot determine it all. Likewise, to determine the price of a house we would need to know the location, the market, macroeconomic conditions, etc., not only the distance to the center and the size of the house.
Therefore, in ML, if our training set includes only the distance to the center and the size of the house, the output is still stochastic rather than determinable, so there is error even with an oracle (as the Deep Learning book puts it: "y may be a deterministic function that involves other variables besides those included in x").
9,510
|
What is Bayes Error in machine learning?
|
From https://www.cs.helsinki.fi/u/jkivinen/opetus/iml/2013/Bayes.pdf.
For a classification task, the Bayes error is defined as:
$\min_f Cost(f)$
and the Bayes classifier is defined as:
$\arg\min_f Cost(f)$
So total error = Bayes error + how much worse your model is than the Bayes error. This is not the same as Bias + Variance + Bayes error, since that split may depend on your model and on the inherent nature of the "distribution noise".
What is the meaning of "y may be inherently stochastic"? For example, suppose $y=f(x)=\sin(x)$, but what you collect as $y$ is always polluted: $\tilde{y}=y+t$, where $t\sim N(0, \sigma^2)$. So you have no way to know the real $y$, and the cost estimate you have is inherently polluted. Even if an oracle gives you the right answer, you will think it is wrong.
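This can be checked numerically. A minimal R sketch (not part of the original answer; the true function and noise level are assumed for illustration):

```r
# Sketch: even an oracle that knows f(x) = sin(x) exactly cannot get its
# measured error below the noise floor sigma^2, because the collected y
# is polluted: y_tilde = sin(x) + t, with t ~ N(0, sigma^2).
set.seed(7)
n       <- 1e5
x       <- runif(n, 0, 2 * pi)
sigma   <- 0.3
y_tilde <- sin(x) + rnorm(n, sd = sigma)
mse     <- mean((y_tilde - sin(x))^2)
mse   # close to sigma^2 = 0.09: the Bayes (irreducible) error
```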
|
9,511
|
What is the mathematical difference between random- and fixed-effects?
|
The simplest model with random effects is the one-way ANOVA model with random effects, given by observations $y_{ij}$ with distributional assumptions: $$(y_{ij} \mid \mu_i) \sim_{\text{iid}} {\cal N}(\mu_i, \sigma^2_w), \quad j=1,\ldots,J,
\qquad
\mu_i \sim_{\text{iid}} {\cal N}(\mu, \sigma^2_b), \quad i=1,\ldots,I.$$
Here the random effects are the $\mu_i$. They are random variables, whereas they are fixed numbers in the ANOVA model with fixed effects.
For example, each of three technicians $i=1,2,3$ in a laboratory records a series of measurements, and $y_{ij}$ is the $j$-th measurement of technician $i$. Call $\mu_i$ the "true mean value" of the series generated by technician $i$; this is a slightly artificial parameter: you can see $\mu_i$ as the mean value that technician $i$ would have obtained if he/she had recorded a huge series of measurements.
If you are interested in evaluating $\mu_1$, $\mu_2$, $\mu_3$ (for example in order to assess the bias between operators), then you have to use the ANOVA model with fixed effects.
You have to use the ANOVA model with random effects when you are interested in the variances $\sigma^2_w$ and $\sigma^2_b$ defining the model, and the total variance $\sigma^2_b+\sigma^2_w$ (see below). The variance $\sigma^2_w$ is the variance of the recordings generated by one technician (it is assumed to be the same for all technicians), and $\sigma^2_b$ is called the between-technicians variance. Maybe ideally, the technicians should be selected at random.
This model reflects the decomposition-of-variance formula for a data sample:
Total variance = variance of means $+$ mean of intra-variances,
which is mirrored by the ANOVA model with random effects.
Indeed, the distribution of $y_{ij}$ is defined by its conditional distribution given $\mu_i$ and by the distribution of $\mu_i$. If one computes the "unconditional" distribution of $y_{ij}$ then we find $\boxed{y_{ij} \sim {\cal N}(\mu, \sigma^2_b+\sigma^2_w)}$.
See slide 24 and slide 25 here for better pictures (you have to save the pdf file to appreciate the overlays, don't watch the online version).
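The boxed result can be verified by simulation. A short R sketch with illustrative parameter values (not part of the original answer):

```r
# Sketch: simulate the one-way random-effects model and verify that the
# unconditional variance of y_ij is close to sigma_b^2 + sigma_w^2.
set.seed(42)
I <- 2000; J <- 5                 # technicians, measurements per technician
mu <- 10; sigma_b <- 2; sigma_w <- 1
mu_i <- rnorm(I, mean = mu, sd = sigma_b)             # random effects
y    <- rnorm(I * J, mean = rep(mu_i, each = J), sd = sigma_w)
var(y)   # close to sigma_b^2 + sigma_w^2 = 5
```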
|
9,512
|
What is the mathematical difference between random- and fixed-effects?
|
Basically, what I think is the most distinct difference if you model a factor as random, is that the effects are assumed to be drawn from a common normal distribution.
For example, suppose you have some sort of model of grades and want to account for your student data coming from different schools. Modelling school as a random factor means that you assume the by-school averages are normally distributed. Two sources of variation are then modelled: the within-school variability of student grades and the between-school variability.
This results in something called partial pooling. Consider two extremes:
School does not have any effect (between school variability is zero). In this case a linear model which does not account for school would be optimal.
School variability is larger than student variability. Then you basically need to work on the school level instead of the students level (less # samples). This is basically the model where you account for school using fixed effects. This can be problematic if you have few samples per school.
By estimating the variability at both levels, the mixed model makes a smart compromise between these two approaches. Especially if the number of students per school is not large, this means that the effects for the individual schools as estimated by model 2 will be shrunk towards the overall mean of model 1.
That is because the model says that if one school, with only two students included, looks better than what is "normal" for the population of schools, then part of this effect is likely explained by the school having been lucky in the choice of the two students looked at. It does not do this blindly; it does so depending on the estimate of the within-school variability. This also means that levels with fewer samples are pulled more strongly toward the overall mean than large schools.
The important thing is that you need exchangeability on the levels of the random factor. That means in this case that the schools are (from your knowledge) exchangeable and you know nothing which makes them distinct (other than some sort of ID). If you have additional information you can include this as an additional factor, it is enough that the schools are exchangeable conditional on the other information accounted for.
For example, it would make sense to assume that 30 year old adults living in New York are exchangeable conditional on gender. If you have more information (age, ethnicity, education) it would make sense to include that information as well.
OTOH, if you have a study with one control group and three wildly different disease groups, it does not make sense to model group as random, since specific diseases are not exchangeable. However, many people like the shrinkage effect so much that they would still argue for a random-effects model, but that's another story.
I notice I didn't get too much into the mathematics, but basically the difference is that the random-effects model estimates a normally distributed error both on the level of schools and on the level of students, while the fixed-effects model has the error only on the level of students. In particular, this means that in the fixed-effects model each school has its own level that is not connected to the other levels by a common distribution. It also means that the fixed model does not allow extrapolating to a student of a school not included in the original data, while the random-effects model does, with a variability that is the sum of the student-level and the school-level variability. If you are specifically interested in the likelihood, we could work that in.
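The shrinkage described above can be sketched with made-up numbers, assuming the two variances are known (a real analysis would estimate them, e.g. with a mixed model):

```r
# Sketch (hypothetical data, known variances): partial pooling pulls each
# school's raw mean toward the grand mean, more strongly for small schools.
n_i     <- c(2, 10, 50)          # students per school
sigma_w <- 10                    # within-school sd
sigma_b <- 5                     # between-school sd
ybar_i  <- c(70, 55, 62)         # observed school means (made up)
grand   <- weighted.mean(ybar_i, n_i)
w       <- sigma_b^2 / (sigma_b^2 + sigma_w^2 / n_i)   # shrinkage weights
shrunk  <- w * ybar_i + (1 - w) * grand
round(w, 2)   # 0.33 0.71 0.93: small schools are pulled hardest to the grand mean
```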
|
9,513
|
What is the mathematical difference between random- and fixed-effects?
|
In econ land, such effects are individual-specific intercepts (or constants) that are unobserved, but can be estimated using panel data (repeated observations on the same units over time). The fixed-effects estimation method allows for correlation between the unit-specific intercepts and the independent explanatory variables; the random-effects method does not. The cost of using the more flexible fixed effects is that you cannot estimate the coefficient on variables that are time-invariant (like gender, religion, or race).
N.B. Other fields have their own terminology, which can be rather confusing.
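The point about time-invariant variables can be illustrated with the within (demeaning) transformation used by fixed-effects estimation; the data below are hypothetical:

```r
# Sketch (toy data): the "within" transformation demeans each unit's
# observations, which wipes out the unit intercepts together with any
# time-invariant regressor such as gender -- hence no coefficient for it.
set.seed(3)
id     <- rep(1:4, each = 3)            # 4 units observed for 3 periods
gender <- rep(c(0, 1, 0, 1), each = 3)  # time-invariant within each unit
x      <- rnorm(12)                     # time-varying regressor
demean <- function(v) v - ave(v, id)    # subtract each unit's own mean
range(demean(gender))  # 0 0: the variable vanishes after demeaning
```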
|
9,514
|
What is the mathematical difference between random- and fixed-effects?
|
In a standard software package (e.g. R's lmer), the basic difference is:
fixed effects are estimated by maximum likelihood (least squares for a linear model)
random effects are estimated by empirical Bayes (least squares with some shrinkage for a linear model, where the shrinkage parameter is chosen by maximum likelihood)
If you're being Bayesian (e.g. WinBUGS), then there is no real difference.
|
9,515
|
What is the mathematical difference between random- and fixed-effects?
|
From reading the answers above, I guess the major difference is whether we assume a Gaussian for the individual means. Fixed effects don't say much about that assumption, because there what we are interested in is whether sample A differs from sample B (e.g., are males taller than females?). When that's not our aim, estimating the individual means can be meaningless. E.g., with 10 people tested in two conditions, the absolute values of the 20 means are meaningless, because the participants were sampled; what we are really interested in is whether the two conditions differ. We then assume that the individual means are drawn from a Gaussian. And that answers why we should turn to fixed effects when every level of the factor is observed: it is no longer reasonable to assume a hypothetical distribution when the actual distribution is given. I admit that I don't know much about the math behind the calculations.
|
9,516
|
How can I test the fairness of a d20?
|
Here's an example with R code. The output is preceded by #'s.
A fair die:
rolls <- sample(1:20, 200, replace = T)
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 7 8 11 9 12 14 9 14 11 7 11 10 13 8 8 5 13 9 10 11
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 11.6, df = 19, p-value = 0.902
A biased die - numbers 1 to 10 each have a probability of 0.045; those 11-20 have a probability of 0.055 - 200 throws:
rolls <- sample(1:20, 200, replace = T, prob = c(rep(0.045, 10), rep(0.055, 10)))
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 8 9 7 12 9 7 14 5 10 12 11 13 14 16 6 10 10 7 9 11
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 16.2, df = 19, p-value = 0.6439
We have insufficient evidence of bias (p = 0.64).
A biased die, 1000 throws:
rolls <- sample(1:20, 1000, replace = T, prob = c(rep(0.045, 10), rep(0.055, 10)))
table(rolls)
#rolls
# 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20
# 42 47 34 42 47 45 48 43 42 45 52 50 57 57 60 68 49 67 42 63
chisq.test(table(rolls), p = rep(0.05, 20))
# Chi-squared test for given probabilities
#
# data: table(rolls)
# X-squared = 32.36, df = 19, p-value = 0.02846
Now p < 0.05 and we are starting to see evidence of bias. You can use similar simulations to estimate the level of bias you can expect to detect and the number of throws needed to detect it with a given p-level.
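That last point can be turned into a power simulation; this sketch (not part of the original answer) reuses the same biased die:

```r
# Sketch: estimate the power to detect the 0.045/0.055 bias at alpha = 0.05
# for a given number of throws, by repeating the experiment many times.
set.seed(123)
p_biased <- rep(c(0.045, 0.055), each = 10)
power_at <- function(n_throws, n_sims = 200) {
  mean(replicate(n_sims, {
    rolls  <- sample(1:20, n_throws, replace = TRUE, prob = p_biased)
    counts <- table(factor(rolls, levels = 1:20))   # keep empty faces as 0
    chisq.test(counts, p = rep(0.05, 20))$p.value < 0.05
  }))
}
power_at(1000)   # fraction of simulated experiments that detect the bias
```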
Wow, 2 other answers even before I finished typing.
|
9,517
|
How can I test the fairness of a d20?
|
Do you want to do it by hand, or in Excel?
If you want to do it in R, you can do it this way:
Step 1: roll your die (let's say) 100 times.
Step 2: count how many times you got each of your numbers
Step 3: put them in R like this (write the number of times each die roll you got, instead of the numbers I wrote):
x <- as.table(c(1,2,3,4,5,6,7,80,9,10,11,12,13,14,15,16,17,18,19,20))
Step 4: simply run this command:
chisq.test(x)
If the P value is low (e.g., below 0.05), your die is not balanced.
This command simulates a balanced die (P= ~.5):
chisq.test(table(sample(1:20, 100, T)))
And this simulates an unbalanced die:
chisq.test(table(c(rep(20,10),sample(1:20, 100, T))))
(It gets to about P = ~.005)
Now the real question is how many rolls are needed for a given level of detection power. If someone wants to go into solving that, they are welcome...
Update: There is also a nice article on this topic here.
|
9,518
|
How can I test the fairness of a d20?
|
Nobody has suggested a Bayesian approach yet? I know the question has been answered already, but what the heck. Below is for only a 3-sided die, but I'm guessing it's obvious how to fix it for $n=20$ sides.
First, in line with what @Glen_b said, a bayesian is not actually interested whether or not the die is exactly fair - it isn't. What (s)he cares about is whether it's close enough, whatever "enough" means in the context, say, within 5% of fair for each side.
If $p_1$, $p_2$, and $p_3$ represent the probabilities of rolling 1, 2, and 3, respectively, then we represent our prior knowledge about $p=(p_1,p_2,p_3)$ with a prior distribution, and to make the math easy we could choose a Dirichlet distribution. Note that $p_1+p_2+p_3=1$. For a non-informative prior we might pick prior parameters, say, $\alpha_0=(1,1,1)$.
If $X=(X_1,X_2,X_3)$ represents the observed counts of 1,2,3 then of course $X$ has a multinomial distribution with parameters $p=(p_1,p_2,p_3)$, and the theory says that the posterior is also a Dirichlet distribution with parameters $\alpha=(x_1+1,x_2+1,x_3+1)$. (Dirichlet is called a conjugate prior, here.)
We observe data, find the posterior with Bayes' rule, then ALL inference is based on the posterior. Want an estimate for $p$? Find the mean of the posterior. Want confidence intervals (no, rather credible intervals)? Calculate some areas under the posterior. For complicated problems in the real world we usually simulate from the posterior and get simulated estimates for all of the above.
Anyway, here's how (with R):
First, get some data. We roll the die 500 times.
set.seed(1)
y <- rmultinom(1, size = 500, prob = c(1,1,1))
(we're starting with a fair die; in practice these data would be observed.)
Next, we simulate 5000 observations of $p$ from the posterior and take a look at the results.
library(MCMCpack)
A <- MCmultinomdirichlet(y, alpha0 = c(1,1,1), mc = 5000)
plot(A)
summary(A)
Finally, let's estimate our posterior probability (after observing the data) that the die is within 0.05 of fair in each coordinate.
B <- as.matrix(A)
f <- function(x) all((x > 0.28)*(x < 0.38))
mean(apply(B, MARGIN = 1, FUN = f))
The result is about 0.9486 on my machine. (Not a surprise, really. We started with a fair die after all.)
Quick remark: it probably isn't reasonable for us to have used a non-informative prior in this example. Since the question is being asked at all, presumably the die appears approximately balanced in the first place, so it may be better to pick a prior concentrated closer to 1/3 in all coordinates. That would simply have made our estimated posterior probability of "close to fair" even higher.
|
9,519
|
How can I test the fairness of a d20?
|
If you are interested in just checking the number of times each number appears, then a Chi-squared test would be suitable. Suppose you roll a die N times; you would expect each value to come up N/20 times. All a chi-squared test does is compare the counts you observed with the counts you expect. If this difference is too large, then this would indicate a problem.
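As a quick numeric illustration of that comparison (a Python sketch with made-up counts, not from real rolls), the statistic is just the sum of squared deviations from the expected count, scaled by that expectation:

```python
import numpy as np

# Hypothetical counts from 200 rolls of a d20 (made-up numbers)
counts = np.array([10] * 10 + [14, 8, 11, 9, 12, 10, 8, 13, 9, 6])
N = counts.sum()                      # 200 rolls in total
expected = N / 20                     # 10 per face if the die is fair

# Chi-squared statistic: sum of (observed - expected)^2 / expected
x2 = ((counts - expected) ** 2 / expected).sum()
print(x2)                             # compare against a chi-squared(19) critical value
```

Here `x2` works out to 5.6, well below the 5% critical value of a chi-squared distribution with 19 degrees of freedom, so these made-up counts would not look suspicious.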
Other tests
If you were interested in other aspects of randomness, for example, if your die gave the following output:
1, 2, 3, 4, ..., 20, 1, 2, ...
Then although this output has the correct number of each individual value, it is clearly not random. In this case, take a look at this question. This probably only makes sense for electronic dice.
Chi-squared test in R
In R, this would be
##Roll 200 times
> rolls = sample(1:20, 200, replace=TRUE)
> chisq.test(table(rolls), p = rep(0.05, 20))
Chi-squared test for given probabilities
data: table(rolls)
X-squared = 16.2, df = 19, p-value = 0.6439
## Too many 1's in the sample
> badrolls = cbind(rolls, rep(1, 10))
> chisq.test(table(badrolls), p = rep(0.05, 20))
Chi-squared test for given probabilities
data: table(badrolls)
X-squared = 1848.1, df = 19, p-value < 2.2e-16
|
9,520
|
How can I test the fairness of a d20?
|
A chi-squared goodness of fit test aims to find all possible kinds of deviations from strict uniformity. This is reasonable with a d4 or a d6, but with a d20, you're probably more interested in checking that the probability that you roll under (or possibly exceed) each outcome is close to what it should be.
What I am getting at is that there are some kinds of deviations from fairness that will heavily impact whatever you're using a d20 for and other kinds of deviations that hardly matter at all, and the chi-squared test will divide power between more interesting and less interesting alternatives. The consequence is that to have enough power to pick up even fairly moderate deviations from fairness, you need a huge number of rolls - far more than you would ever want to sit and generate.
(Hint: come up with a few sets of non-uniform probabilities for your d20 that will most heavily impact the outcome that you're using the d20 for and use simulation and chi-squared tests to find out what power you have against them for various numbers of rolls, so you get some idea of the number of rolls you will need.)
There are a variety of ways of checking for "interesting" deviations (ones that are more likely to substantively affect typical uses of a d20):
My recommendation is to do an ECDF test (Kolmogorov-Smirnov/Anderson-Darling-type test - but you'll probably want to adjust for the conservativeness that results from the distribution being discrete - at least by lifting the nominal alpha level, but even better by just simulating the distribution to see how the distribution of the test statistic goes for a d20).
These can still pick up any kind of deviation, but they put relatively more weight on the more important kinds of deviation.
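The "simulate the distribution of the test statistic" step mentioned above can be sketched as follows (a Python illustration, not the only way to do it; the KS-type statistic here is the maximum gap between the empirical CDF and the discrete uniform CDF, evaluated at each face):

```python
import numpy as np

rng = np.random.default_rng(0)

def ks_stat(rolls, sides=20):
    # max |ECDF - discrete uniform CDF| over the faces 1..sides
    faces = np.arange(1, sides + 1)
    ecdf = np.searchsorted(np.sort(rolls), faces, side="right") / len(rolls)
    return np.max(np.abs(ecdf - faces / sides))

# Null distribution for a fair d20 by simulation - no continuity
# correction needed, since we calibrate against the discrete case directly
n_rolls, reps = 1000, 2000
null = np.array([ks_stat(rng.integers(1, 21, n_rolls)) for _ in range(reps)])
crit = np.quantile(null, 0.95)            # simulated 5% critical value

my_rolls = rng.integers(1, 21, n_rolls)   # replace with your recorded rolls
reject = ks_stat(my_rolls) > crit         # True would suggest unfairness
```

Calibrating the critical value by simulation like this sidesteps the conservativeness of the standard KS tables for discrete distributions.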
An even more powerful approach is to specifically construct a test statistic that is specifically sensitive to the most important alternatives to you, but it involves a bit more work.
In this answer I suggest a graphical method for testing a die based on the size of the individual deviations. Like the chi-squared test this makes more sense for dice with few sides like d4 or d6.
|
9,521
|
How can I test the fairness of a d20?
|
Perhaps one should not focus as much on one set of rolls.
Try rolling a 6-sided die 10 times and repeat the process 8 times.
> K <- 6   # number of sides
> N <- 10  # rolls per repeat
> xy <- rmultinom(8, size = N, prob = rep(1, K)/K)
> xy
[,1] [,2] [,3] [,4] [,5] [,6] [,7] [,8]
[1,] 3 1 0 0 1 1 2 1
[2,] 0 0 1 2 1 1 0 1
[3,] 1 3 6 0 1 3 2 4
[4,] 2 1 0 5 2 0 2 1
[5,] 3 2 0 2 1 3 3 0
[6,] 1 3 3 1 4 2 1 3
You can check that the sum for each repeat sums to 10.
> apply(xy, MARGIN = 2, FUN = sum)
[1] 10 10 10 10 10 10 10 10
For each repeat (column-wise) you can calculate goodness of fit using Chi^2 test.
unlist(unname(sapply(apply(xy, MARGIN = 2, FUN = chisq.test), "[", "p.value")))
[1] 0.493373524 0.493373524 0.003491841 0.064663031 0.493373524 0.493373524 0.669182902
[8] 0.235944538
The more throws you make per repeat, the less bias you will see. Let's do this for a large number of throws.
K <- 20
N <- 10000
xy <- rmultinom(100, size = N, prob = rep(1, K)/K)
hist(unlist(unname(sapply(apply(xy, MARGIN = 2, FUN = chisq.test), "[", "p.value"))))
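The same experiment can be sketched without relying on the chi-squared distribution at all, by computing Monte Carlo p-values against a simulated null (a Python illustration of the idea, not the answer's original code):

```python
import numpy as np

rng = np.random.default_rng(1)
sides, n = 20, 1000

def chisq(counts):
    # goodness-of-fit statistic against a uniform die
    e = counts.sum() / len(counts)
    return ((counts - e) ** 2 / e).sum()

def batch_stat():
    # statistic for one batch of n fair-die rolls
    return chisq(np.bincount(rng.integers(0, sides, n), minlength=sides))

# Simulated null distribution of the statistic
null = np.sort([batch_stat() for _ in range(2000)])

# Monte Carlo p-value for each of 100 fresh fair-die batches
pvals = np.array([1 - np.searchsorted(null, batch_stat()) / len(null)
                  for _ in range(100)])
# Under the null these p-values should look roughly uniform on (0, 1),
# just like the histogram produced by the R code above
```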
|
9,522
|
Unsupervised, supervised and semi-supervised learning
|
Generally, the problems of machine learning may be considered variations on function estimation for classification, prediction or modeling.
In supervised learning one is furnished with input ($x_1$, $x_2$, ...,) and output ($y_1$, $y_2$, ...,) and is challenged with finding a function that approximates this behavior in a generalizable fashion. The output could be a class label (in classification) or a real number (in regression) -- these are the "supervision" in supervised learning.
In the case of unsupervised learning, in the base case, you receive inputs $x_1$, $x_2$, ..., but neither target outputs nor rewards from the environment are provided. Based on the problem (classify, or predict) and your background knowledge of the space sampled, you may use various methods: density estimation (estimating some underlying PDF for prediction), k-means clustering (classifying unlabeled real valued data), k-modes clustering (classifying unlabeled categorical data), etc.
Semi-supervised learning involves function estimation on labeled and unlabeled data. This approach is motivated by the fact that labeled data is often costly to generate, whereas unlabeled data is generally not. The challenge here mostly involves the technical question of how to treat data mixed in this fashion. See this Semi-Supervised Learning Literature Survey for more details on semi-supervised learning methods.
In addition to these kinds of learning, there are others, such as reinforcement learning, whereby the learning method interacts with its environment by producing actions $a_1$, $a_2$, ... that yield rewards or punishments $r_1$, $r_2$, ...
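To make the supervised/unsupervised distinction concrete, here is a tiny sketch (Python, toy data of my own, not from the answer): a supervised 1-nearest-neighbour predictor that uses the labels, next to an unsupervised 2-means clustering that ignores them.

```python
import numpy as np

rng = np.random.default_rng(2)
# Toy 1-D inputs: two well-separated groups
x = np.concatenate([rng.normal(0.0, 0.5, 20), rng.normal(5.0, 0.5, 20)])
y = np.array([0] * 20 + [1] * 20)   # labels, available only in the supervised case

# Supervised: predict the label of the nearest training point
def predict(x_new):
    return int(y[np.argmin(np.abs(x - x_new))])

# Unsupervised: 2-means on the inputs alone, never touching y
centers = np.array([x.min(), x.max()])
for _ in range(10):
    assign = np.abs(x[:, None] - centers[None, :]).argmin(axis=1)
    centers = np.array([x[assign == c].mean() for c in range(2)])
# centers end up near the two group means, recovered without labels
```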
|
9,523
|
Unsupervised, supervised and semi-supervised learning
|
Unsupervised Learning
Unsupervised learning is when you have no labeled data available for training. Examples of this are often clustering methods.
Supervised Learning
In this case your training data consists of labeled data. The problem you solve here is often predicting the labels for data points without labels.
Semi-Supervised Learning
In this case both labeled data and unlabeled data are used. This can be used, for example, in deep belief networks, where some layers learn the structure of the data (unsupervised) and one layer is used to make the classification (trained on supervised data).
|
9,524
|
Unsupervised, supervised and semi-supervised learning
|
I don't think that supervised/unsupervised is the best way to think about it. For basic data mining, it's better to think about what you are trying to do. There are four main tasks:
Prediction. If you are predicting a real number, it is called regression; if you are predicting a whole number or class, it is called classification.
Modeling. Modeling is the same as prediction, but the model is comprehensible by humans. Neural networks and support vector machines work great, but do not produce comprehensible models [1]; decision trees and classic linear regression are examples of easy-to-understand models.
Similarity. If you are trying to find natural groups of attributes, it is called factor analysis; if you are trying to find natural groups of observations, it is called clustering.
Association. It's much like correlation, but for enormous binary datasets.
[1] Apparently Goldman Sachs created tons of great neural networks for prediction, but then no one understood them, so they had to write other programs to try to explain the neural networks.
|
9,525
|
How to find/estimate probability density function from density function in R
|
?density points out that it uses approx to do linear interpolation already; ?approx points out that approxfun generates a suitable function:
x <- log(rgamma(150,5))
df <- approxfun(density(x))
plot(density(x))
xnew <- c(0.45,1.84,2.3)
points(xnew,df(xnew),col=2)
By using integrate, starting from an appropriate distance below the sample minimum (a multiple of the bandwidth used in df, say 4 or 5 times, would generally do), one can obtain a good approximation of the cdf corresponding to df.
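For readers outside R, a rough analogue of this approach can be sketched in Python: a hand-rolled Gaussian KDE in place of density(), np.interp in place of approxfun, and a trapezoid sum in place of integrate. The bandwidth rule is an assumption on my part (a Silverman-style rule of thumb), not what density() uses by default.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.log(rng.gamma(5.0, size=150))              # analogue of log(rgamma(150, 5))

# Gaussian KDE evaluated on a grid
bw = 1.06 * x.std() * len(x) ** (-0.2)            # Silverman-style bandwidth (assumed)
grid = np.linspace(x.min() - 4 * bw, x.max() + 4 * bw, 512)
z = (grid[:, None] - x[None, :]) / bw
dens = np.exp(-0.5 * z ** 2).sum(axis=1) / (len(x) * bw * np.sqrt(2 * np.pi))

# Density function via linear interpolation, like approxfun(density(x))
df = lambda xnew: np.interp(xnew, grid, dens)

# CDF via trapezoid integration of the interpolated density
cdf = np.concatenate([[0.0], np.cumsum((dens[1:] + dens[:-1]) / 2 * np.diff(grid))])
# cdf[-1] should be very close to 1, since the grid extends 4 bandwidths past the data
```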
|
9,526
|
How to find/estimate probability density function from density function in R
|
spatstat.core::CDF() can be used to create a cumulative distribution function (CDF) from a given output of density().
set.seed(123)
x <- rnorm(10000000)
x_density <- density(x, n = 10000)
x_cdf <- spatstat.core::CDF(x_density)
sds <- c(-2, -1, 0, 1, 2)
names(sds) <- sds
# check cdf at different values
setNames(
x_cdf(sds),
sds)
#> -2 -1 0 1 2
#> 0.02285086 0.15889356 0.50009332 0.84134448 0.97717762
# compare against theoretical
pnorm(sds)
#> -2 -1 0 1 2
#> 0.02275013 0.15865525 0.50000000 0.84134475 0.97724987
Created on 2021-11-22 by the reprex package (v2.0.0)
Update
A previous version of this answer copied code from the deprecated spatstat:::CDF(), which was broken up (in ?2020?) into several other packages. If anyone knows a lighter-weight package where this CDF function currently lives, I would love to hear about it in the comments!
|
9,527
|
Proof that F-statistic follows F-distribution
|
Let us show the result for the general case, of which your formula for the test statistic is a special case. In general, we need to verify that the statistic can, according to the characterization of the $F$ distribution, be written as the ratio of independent $\chi^2$ r.v.s divided by their degrees of freedom.
Let $H_{0}:R^\prime\beta=r$ with $R$ and $r$ known and nonrandom, where $R:k\times q$ has full column rank $q$. This represents $q$ linear restrictions for (unlike in the OP's notation) $k$ regressors including the constant term. So, in @user1627466's example, $p-1$ corresponds to the $q=k-1$ restrictions of setting all slope coefficients to zero.
In view of $Var\bigl(\hat{\beta}_{\text{ols}}\bigr)=\sigma^2(X'X)^{-1}$, we have
\begin{eqnarray*}
R^\prime(\hat{\beta}_{\text{ols}}-\beta)\sim N\left(0,\sigma^{2}R^\prime(X^\prime X)^{-1} R\right),
\end{eqnarray*}
so that (with $B^{-1/2}=\{R^\prime(X^\prime X)^{-1} R\}^{-1/2}$ being a "matrix square root" of $B^{-1}=\{R^\prime(X^\prime X)^{-1} R\}^{-1}$, via, e.g., a Cholesky decomposition)
\begin{eqnarray*}
n:=\frac{B^{-1/2}}{\sigma}R^\prime(\hat{\beta}_{\text{ols}}-\beta)\sim N(0,I_{q}),
\end{eqnarray*}
as
\begin{eqnarray*}
Var(n)&=&\frac{B^{-1/2}}{\sigma}R^\prime Var\bigl(\hat{\beta}_{\text{ols}}\bigr)R\frac{B^{-1/2}}{\sigma}\\
&=&\frac{B^{-1/2}}{\sigma}\sigma^2B\frac{B^{-1/2}}{\sigma}=I
\end{eqnarray*}
where the second line uses the variance of the OLSE.
This, as shown in the answer that you link to (see also here), is independent of $$d:=(n-k)\frac{\hat{\sigma}^{2}}{\sigma^{2}}\sim\chi^{2}_{n-k},$$
where $\hat{\sigma}^{2}=y'M_Xy/(n-k)$ is the usual unbiased error variance estimate, with $M_{X}=I-X(X'X)^{-1}X'$ the "residual-maker matrix" from regressing on $X$.
So, as $n'n$ is a quadratic form in normals,
\begin{eqnarray*}
\frac{\overbrace{n^\prime n}^{\sim\chi^{2}_{q}}/q}{d/(n-k)}=\frac{(\hat{\beta}_{\text{ols}}-\beta)^\prime R\left\{R^\prime(X^\prime X)^{-1}R\right\}^{-1}R^\prime(\hat{\beta}_{\text{ols}}-\beta)/q}{\hat{\sigma}^{2}}\sim F_{q,n-k}.
\end{eqnarray*}
In particular, under $H_{0}:R^\prime\beta=r$, this reduces to the statistic
\begin{eqnarray}
F=\frac{(R^\prime\hat{\beta}_{\text{ols}}-r)^\prime\left\{R^\prime(X^\prime X)^{-1}R\right\}^{-1}(R^\prime\hat{\beta}_{\text{ols}}-r)/q}{\hat{\sigma}^{2}}\sim F_{q,n-k}.
\end{eqnarray}
For illustration, consider the special case $R^\prime=I$, $r=0$, $q=2$, $\hat{\sigma}^{2}=1$ and $X^\prime X=I$. Then,
\begin{eqnarray}
F=\hat{\beta}_{\text{ols}}^\prime\hat{\beta}_{\text{ols}}/2=\frac{\hat{\beta}_{\text{ols},1}^2+\hat{\beta}_{\text{ols},2}^2}{2},
\end{eqnarray}
the squared Euclidean distance of the OLS estimate from the origin standardized by the number of elements - highlighting that, since $\hat{\beta}_{\text{ols},1}^2$ and $\hat{\beta}_{\text{ols},2}^2$ are squared standard normals and hence $\chi^2_1$, the $F$ distribution may be seen as an "average $\chi^2$" distribution.
In case you prefer a little simulation (which is of course not a proof!), here is one in which we test the null that none of the $k$ regressors matters - which indeed they do not here, so that we simulate the null distribution.
We see very good agreement between the theoretical density and the histogram of the Monte Carlo test statistics.
library(lmtest)
n <- 100
reps <- 20000
sloperegs <- 5 # number of slope regressors, q or k-1 (minus the constant) in the above notation
critical.value <- qf(p = .95, df1 = sloperegs, df2 = n-sloperegs-1)
# for the null that none of the slope regrssors matter
Fstat <- rep(NA,reps)
for (i in 1:reps){
y <- rnorm(n)
X <- matrix(rnorm(n*sloperegs), ncol=sloperegs)
reg <- lm(y~X)
Fstat[i] <- waldtest(reg, test="F")$F[2]
}
mean(Fstat>critical.value) # very close to 0.05
hist(Fstat, breaks = 60, col="lightblue", freq = F, xlim=c(0,4))
x <- seq(0,6,by=.1)
lines(x, df(x, df1 = sloperegs, df2 = n-sloperegs-1), lwd=2, col="purple")
To see that the versions of the test statistics in the question and the answer are indeed equivalent, note that the null corresponds to the restrictions $R'=[0\;\;I]$ and $r=0$.
Let $X=[X_1\;\;X_2]$ be partitioned according to which coefficients are restricted to be zero under the null (in your case, all but the constant, but the derivation to follow is general). Also, let $\hat{\beta}_{\text{ols}}=(\hat{\beta}_{\text{ols},1}^\prime,\hat{\beta}_{\text{ols},2}')'$ be the suitably partitioned OLS estimate.
Then,
$$
R'\hat{\beta}_{\text{ols}}=\hat{\beta}_{\text{ols},2}
$$
and
$$
R^\prime(X^\prime X)^{-1}R\equiv\tilde D,
$$
the lower right block of
\begin{align*}
(X'X)^{-1}&=\left( \begin{array}{cc} X_1'X_1&X_1'X_2 \\ X_2'X_1&X_2'X_2\end{array} \right)^{-1}\\
&\equiv\left( \begin{array}{cc} \tilde A&\tilde B \\ \tilde C&\tilde D\end{array} \right)
\end{align*}
Now, use results for partitioned inverses to obtain
$$
\tilde D=(X_2'X_2-X_2'X_1(X_1'X_1)^{-1}X_1'X_2)^{-1}=(X_2'M_{X_1}X_2)^{-1}
$$
where $M_{X_1}=I-X_1(X_1'X_1)^{-1}X_1'$.
Thus, the numerator of the $F$ statistic becomes (without the division by $q$)
$$
F_{num}=\hat{\beta}_{\text{ols},2}'(X_2'M_{X_1}X_2)\hat{\beta}_{\text{ols},2}
$$
Next, recall that by the Frisch-Waugh-Lovell theorem we may write
$$
\hat{\beta}_{\text{ols},2}=(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y
$$
so that
\begin{align*}
F_{num}&=y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}(X_2'M_{X_1}X_2)(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y\\
&=y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y
\end{align*}
It remains to show that this numerator is identical to $\text{RSSR}-\text{USSR}$, the difference in restricted and unrestricted sum of squared residuals.
Here,
$$\text{RSSR}=y'M_{X_1}y$$
is the residual sum of squares from regressing $y$ on $X_1$, i.e., with $H_0$ imposed. In your special case, this is just $TSS=\sum_i(y_i-\bar y)^2$, the residual sum of squares from a regression on a constant alone.
Again using FWL (which also shows that the residuals of the two approaches are identical), we can write $\text{USSR}$ (SSR in your notation) as the SSR of the regression
$$
M_{X_1}y\quad\text{on}\quad M_{X_1}X_2
$$
That is,
\begin{eqnarray*}
\text{USSR}&=&y'M_{X_1}'M_{M_{X_1}X_2}M_{X_1}y\\
&=&y'M_{X_1}'(I-P_{M_{X_1}X_2})M_{X_1}y\\
&=&y'M_{X_1}y-y'M_{X_1}M_{X_1}X_2((M_{X_1}X_2)'M_{X_1}X_2)^{-1}(M_{X_1}X_2)'M_{X_1}y\\
&=&y'M_{X_1}y-y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y
\end{eqnarray*}
Thus,
\begin{eqnarray*}
\text{RSSR}-\text{USSR}&=&y'M_{X_1}y-(y'M_{X_1}y-y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y)\\
&=&y'M_{X_1}X_2(X_2'M_{X_1}X_2)^{-1}X_2'M_{X_1}y
\end{eqnarray*}
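This identity is easy to verify numerically. A quick check on random data (a Python sketch; M() forms the residual-maker matrix $M_Z=I-Z(Z'Z)^{-1}Z'$ defined above, and the block sizes are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 50
X1 = np.column_stack([np.ones(n), rng.normal(size=(n, 1))])  # unrestricted block
X2 = rng.normal(size=(n, 3))                                 # block tested to be zero
X = np.hstack([X1, X2])
y = rng.normal(size=n)

def M(Z):
    # residual-maker matrix I - Z (Z'Z)^{-1} Z'
    return np.eye(n) - Z @ np.linalg.solve(Z.T @ Z, Z.T)

RSSR = y @ M(X1) @ y       # restricted SSR (beta_2 = 0 imposed)
USSR = y @ M(X) @ y        # unrestricted SSR

MX1 = M(X1)
num = y @ MX1 @ X2 @ np.linalg.solve(X2.T @ MX1 @ X2, X2.T @ MX1 @ y)
# num agrees with RSSR - USSR up to floating point, as derived above
```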
|
9,528
|
Proof that F-statistic follows F-distribution
|
@ChristophHanck has provided a very comprehensive answer; here I will add a sketch of a proof for the special case the OP mentioned. Hopefully it is also easier for beginners to follow.
A random variable $Y\sim F_{d_1,d_2}$ if $$Y=\frac{X_1/d_1}{X_2/d_2},$$ where $X_1\sim\chi^2_{d_1}$ and $X_2\sim\chi^2_{d_2}$ are independent. Thus, to show that the $F$-statistic has $F$-distribution, we may as well show that $c\text{ESS}\sim\chi^2_{p-1}$ and $c\text{RSS}\sim\chi^2_{n-p}$ for some constant $c$, and that they are independent.
In the OLS model we write $$y=X\beta+\varepsilon,$$ where $X$ is an $n\times p$ matrix, and ideally $\varepsilon\sim N_n(\mathbf{0}, \sigma^2I)$. For convenience we introduce the hat matrix $H=X(X^TX)^{-1}X^{T}$ (note $\hat{y}=Hy$) and the residual maker $M=I-H$. Important properties of $H$ and $M$ are that they are both symmetric and idempotent. In addition, we have $\operatorname{tr}(H)=p$ and $HX=X$; these will come in handy later.
Let us denote the matrix of all ones by $J$; the sums of squares can then be expressed as quadratic forms: $$\text{TSS}=y^T\left(I-\frac{1}{n}J\right)y,\quad\text{RSS}=y^TMy,\quad\text{ESS}=y^T\left(H-\frac{1}{n}J\right)y.$$ Note that $M+(H-J/n)+J/n=I$. One can verify that $J/n$ is idempotent and $\operatorname{rank}(M)+\operatorname{rank}(H-J/n)+\operatorname{rank}(J/n)=n$. It then follows that $H-J/n$ is also idempotent and $M(H-J/n)=0$.
We can now set out to show that $F$-statistic has $F$-distribution (search Cochran's theorem for more). Here we need two facts:
Let $x\sim N_n(\mu,\Sigma)$. Suppose $A$ is symmetric with rank $r$ and $A\Sigma$ is idempotent, then $x^TAx\sim\chi^2_r(\mu^TA\mu/2)$, i.e. non-central $\chi^2$ with d.f. $r$ and non-centrality $\mu^TA\mu/2$. This is a special case of Baldessari's result, a proof can also be found here.
Let $x\sim N_n(\mu,\Sigma)$. If $A\Sigma B=0$, then $x^TAx$ and $x^TBx$ are independent. This is known as Craig's theorem.
Since $y\sim N_n(X\beta,\sigma^2I)$, we have $$\frac{\text{ESS}}{\sigma^2}=\left(\frac{y}{\sigma}\right)^T\left(H-\frac{1}{n}J\right)\frac{y}{\sigma}\sim\chi^2_{p-1}\left((X\beta)^T\left(H-\frac{J}{n}\right)X\beta\right).$$ However, under the null hypothesis $\beta=\mathbf{0}$, so in fact $\text{ESS}/\sigma^2\sim\chi^2_{p-1}$. On the other hand, note that $y^TMy=\varepsilon^TM\varepsilon$ since $HX=X$. Therefore $\text{RSS}/\sigma^2\sim\chi^2_{n-p}$. Since $M(H-J/n)=0$, $\text{ESS}/\sigma^2$ and $\text{RSS}/\sigma^2$ are also independent. It then follows immediately that $$F = \frac{(\text{TSS}-\text{RSS})/(p-1)}{\text{RSS}/(n-p)}=\frac{\dfrac{\text{ESS}}{\sigma^2}/(p-1)}{\dfrac{\text{RSS}}{\sigma^2}/(n-p)}\sim F_{p-1,n-p}.$$
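A quick Monte Carlo check of this result (a Python/NumPy sketch under the stated normal-error model; the sample size, number of regressors, and seed are arbitrary): simulate under the null, form ESS and RSS as the quadratic forms above, and compare the resulting statistics with the $F_{p-1,n-p}$ law.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, p, reps = 40, 4, 2000          # p regressors including the intercept
J = np.full((n, n), 1.0 / n)      # projection onto the constant, J/n above
F_vals = np.empty(reps)
for i in range(reps):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
    y = rng.normal(size=n)                    # the null is true: beta = 0
    H = X @ np.linalg.solve(X.T @ X, X.T)     # hat matrix
    ess = y @ (H - J) @ y
    rss = y @ (np.eye(n) - H) @ y
    F_vals[i] = (ess / (p - 1)) / (rss / (n - p))

# theoretical 5% critical value of F(p-1, n-p)
crit = stats.f(p - 1, n - p).ppf(0.95)
```

Both the mean of the simulated statistics (theoretical value $(n-p)/(n-p-2)$) and the rejection frequency at `crit` (theoretical value 0.05) should land close to their targets.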
|
9,529
|
How well does bootstrapping approximate the sampling distribution of an estimator?
|
In information theory, the typical way to quantify how "close" one distribution is to another is the KL divergence.
Let's try to illustrate it with a highly skewed, long-tailed dataset - delays of plane arrivals at the Houston airport (from the hflights package). Let $\hat \theta$ be the mean estimator. First, we find the sampling distribution of $\hat \theta$, and then the bootstrap distribution of $\hat \theta$.
Here's the dataset:
The true mean is 7.09 min.
First, we do a certain number of samples to get the sampling distribution of $\hat \theta$, then we take one sample and take many bootstrap samples from it.
For example, let's take a look at two distributions with the sample size 100 and 5000 repetitions. We see visually that these distributions are quite apart, and the KL divergence is 0.48.
But when we increase the sample size to 1000, they start to converge (KL divergence is 0.11)
And when the sample size is 5000, they are very close (KL divergence is 0.01)
This, of course, depends on which bootstrap sample you get, but I believe you can see that the KL divergence goes down as we increase the sample size, and thus the bootstrap distribution of $\hat \theta$ approaches the sampling distribution of $\hat \theta$ in terms of KL divergence. To be sure, you can do several bootstraps and average the KL divergences.
Here's the R code of this experiment: https://gist.github.com/alexeygrigorev/0b97794aea78eee9d794
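The mechanics can also be sketched without the airline data (a Python/NumPy version; an exponential population stands in for the skewed delays, which is an assumption, not the original dataset): discretize the two distributions of the mean onto a common grid and compute $KL(\text{sampling}\,\|\,\text{bootstrap})$ with `scipy.stats.entropy`. Note that a single bootstrap sample gives a noisy KL value, so averaging over several bootstrap samples is advisable.

```python
import numpy as np
from scipy.stats import entropy

rng = np.random.default_rng(2)
pop = rng.exponential(scale=7.0, size=100_000)  # stand-in for the skewed delay data
reps, n = 5000, 1000

# sampling distribution of the mean: many independent samples from the population
sampling = rng.choice(pop, size=(reps, n)).mean(axis=1)

# bootstrap distribution: resample one fixed sample with replacement
sample = rng.choice(pop, size=n, replace=False)
boot = rng.choice(sample, size=(reps, n), replace=True).mean(axis=1)

# discretize both onto a common grid, then KL(sampling || bootstrap)
lo = min(sampling.min(), boot.min())
hi = max(sampling.max(), boot.max())
p, _ = np.histogram(sampling, bins=30, range=(lo, hi))
q, _ = np.histogram(boot, bins=30, range=(lo, hi))
kl = entropy(p + 1e-10, q + 1e-10)  # entropy(p, q) normalizes and computes sum p_i log(p_i/q_i)
```

The small constant added to each bin avoids infinite divergence from empty bins; the exact KL value varies with the seed and the particular bootstrap sample drawn.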
|
9,530
|
How well does bootstrapping approximate the sampling distribution of an estimator?
|
Bootstrap is based on the convergence of the empirical cdf to the true cdf, that is,
$$\hat{F}_n(x) = \frac{1}{n}\sum_{i=1}^n\mathbb{I}_{X_i\le x}\qquad X_i\stackrel{\text{iid}}{\sim}F(x)$$ converges (as $n$ goes to infinity) to $F(x)$ for every $x$. Hence the convergence of the bootstrap distribution of $\hat{\theta}(X_1,\ldots,X_n)=g(\hat{F}_n)$ is driven by this convergence, which occurs at rate $\sqrt{n}$ for each $x$, since $$\sqrt{n}\{\hat{F}_n(x)-F(x)\}\stackrel{\text{dist}}{\longrightarrow}\mathsf{N}(0,F(x)[1-F(x)])$$ even though this rate and limiting distribution do not automatically transfer to $g(\hat{F}_n)$. In practice, to assess the variability of the approximation, you can produce a bootstrap evaluation of the distribution of $g(\hat{F}_n)$ by double-bootstrap, i.e., by bootstrapping bootstrap evaluations.
As an update, here is an illustration I use in class:
where the lhs compares the true cdf $F$ with the empirical cdf $\hat{F}_n$ for $n=100$ observations and the rhs plots $250$ replicas of the lhs, for 250 different samples, in order to measure the variability of the cdf approximation. In the example I know the truth and hence I can simulate from the truth to evaluate the variability. In a realistic situation, I do not know $F$ and hence I have to start from $\hat{F}_n$ instead to produce a similar graph.
Further update: Here is what the tube picture looks like when starting from the empirical cdf:
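The $\sqrt{n}$ rate can be seen directly in a small simulation (a Python sketch, with a standard normal $F$ chosen purely for concreteness): the mean of $\sup_x|\hat F_n(x)-F(x)|$ shrinks like $1/\sqrt{n}$, so $\sqrt{n}$ times the deviation stays roughly constant across $n$.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def mean_sup_dev(n, reps=500):
    # Monte Carlo estimate of E[ sup_x |F_n(x) - F(x)| ] for N(0,1) samples of size n
    devs = np.empty(reps)
    for i in range(reps):
        x = np.sort(rng.normal(size=n))
        F = norm.cdf(x)
        # the sup is attained just before or just after an order statistic
        hi = np.abs(np.arange(1, n + 1) / n - F).max()
        lo = np.abs(np.arange(0, n) / n - F).max()
        devs[i] = max(hi, lo)
    return devs.mean()

d_small, d_big = mean_sup_dev(100), mean_sup_dev(10_000)
ratio = (np.sqrt(100) * d_small) / (np.sqrt(10_000) * d_big)  # roughly 1
```

The rescaled deviations agree because $\sqrt{n}\,\sup_x|\hat F_n(x)-F(x)|$ converges in distribution to the Kolmogorov law, whose mean does not depend on $n$.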
|
9,531
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
In addition to Jim's (+1) answer: For some classifiers, the hyper-parameter values are dependent on the number of training examples, for instance for a linear SVM, the primal optimization problem is
$\mathrm{min} \frac12\|w\|^2 + C\sum_{i=1}^\ell \xi_i$
subject to
$y_i(x_i\cdot w + b) \geq 1 - \xi_i, \quad \mathrm{and} \quad \xi_i \geq 0 \quad \forall i$
Note that the optimisation problem is basically a measure of the data misfit (the summation over $\xi_i$) plus a regularisation term, but the usual regularisation parameter is placed on the data misfit term. Obviously, the greater the number of training patterns we have, the larger the summation will be, and the smaller $C$ ought to be to maintain the same balance with the magnitude of the weights.
Some implementations of the SVM reparameterise as
$\mathrm{min} \frac12\|w\|^2 + \frac{C}{\ell}\sum_{i=1}^\ell \xi_i$
in order to compensate, but some don't. So an additional point to consider is whether the optimal hyper-parameters depend on the number of training examples or not.
I agree with Jim that overfitting the model selection criterion is likely to be more of an issue, but if you have enough data even in the subsample then this may not be a substantial issue.
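This dependence can be demonstrated with a toy subgradient-descent SVM (a Python/NumPy sketch, not a production solver; the data, step size, and iteration count are arbitrary choices): duplicating every training point doubles $\ell$, and halving $C$ accordingly recovers the same weight vector.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
shift = np.where(rng.random(n) < 0.5, 1.5, -1.5)
X = rng.normal(size=(n, 2)) + shift[:, None]
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)   # linearly separable labels

def train_svm(X, y, C, steps=20_000, lr=1e-3):
    # subgradient descent on 0.5*||w||^2 + C * sum_i max(0, 1 - y_i w.x_i)
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        viol = y * (X @ w) < 1                    # margin violations
        grad = w - C * (y[viol, None] * X[viol]).sum(axis=0)
        w -= lr * grad
    return w

w1 = train_svm(X, y, C=1.0)
# duplicate every point: the number of slack terms doubles, so C must halve
X2, y2 = np.vstack([X, X]), np.concatenate([y, y])
w2 = train_svm(X2, y2, C=0.5)
```

With the duplicated data and $C/2$, each subgradient step is identical to the original run, so the two solutions coincide; keeping $C$ fixed instead would change the misfit/regularisation balance.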
|
9,532
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
Is hyperparameter tuning on sample of dataset a bad idea?
A: Yes, because you risk overfitting (the hyperparameters) on that specific test set resulting from your chosen train-test split.
Do I limit my classification accuracy?
A: Yes, but common machine learning wisdom is: with your optimal hyperparameters, say $\lambda^*$, refit your model(s) on the whole dataset and make that model your final model for new, unseen, future cases.
Do I avoid using all the prediction power that my dataset can offer by tuning only on a subset?
A: see previous answer.
If such a harm of performance is happening is it somehow limited by some factor?
A: idem.
I measure my accuracy using 10-fold cross as I use to also evaluate the parameters
A: Note that this is different from what is asked in the title. 10-fold CV iterates over 10 test-train splits to arrive at an "unbiased" (less-biased) estimate of generalizability (measured in this case by accuracy). 10-fold CV exactly addresses the issue I talk about in the first answer.
the prediction accuracy I get from training on my whole dataset
A: this is an "in-sample" measure that could be optimistically biased. But don't forget that you have many cases and relatively few features, so that this optimism bias may not be an issue. Machine learning nugget: "the best regularizer is more data."
[cont'd], is always really close to the evaluation I get when tuning the parameters for the best set of parameters.
A: see previous answer. Look at the hyperparameter plots: does tuning decrease error and by how much? From what you are saying, the tuning is not doing much.
You could test this as follows. Take a 70%-30% train-test split. Compare predictive performance of:
an untuned model trained on the train set,
a 10-fold-CV tuned model trained on the train set.
Let both models predict the test set. If performance is very close, then tuning is not doing much. If performance is different in favor of the tuned model, then continue with the tuning approach.
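The procedure can be sketched end-to-end with a small self-contained example (a Python/NumPy sketch; ridge regression with a hand-rolled 10-fold CV stands in for whatever model is actually being tuned, and all constants are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)
n, p = 300, 40
X = rng.normal(size=(n, p))
beta = np.concatenate([np.ones(5), np.zeros(p - 5)])   # only 5 features matter
y = X @ beta + rng.normal(scale=2.0, size=n)

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def mse(X, y, w):
    return np.mean((y - X @ w) ** 2)

# 70%-30% train-test split
idx = rng.permutation(n)
tr, te = idx[:210], idx[210:]
Xtr, ytr, Xte, yte = X[tr], y[tr], X[te], y[te]

# untuned model: essentially OLS (near-zero penalty)
w_untuned = ridge_fit(Xtr, ytr, lam=1e-6)

# 10-fold CV over a lambda grid, on the training set only
folds = np.array_split(np.arange(len(tr)), 10)
grid = [0.01, 0.1, 1, 10, 100]
cv_err = []
for lam in grid:
    errs = []
    for f in folds:
        mask = np.ones(len(tr), dtype=bool)
        mask[f] = False
        w = ridge_fit(Xtr[mask], ytr[mask], lam)
        errs.append(mse(Xtr[~mask], ytr[~mask], w))
    cv_err.append(np.mean(errs))
best_lam = grid[int(np.argmin(cv_err))]
w_tuned = ridge_fit(Xtr, ytr, best_lam)

# both models predict the held-out test set
test_untuned = mse(Xte, yte, w_untuned)
test_tuned = mse(Xte, yte, w_tuned)
```

If `test_tuned` and `test_untuned` are very close, tuning is not doing much for this model and dataset; a clear gap in favor of `test_tuned` justifies the tuning effort. A single split is noisy, so repeating over several splits is prudent.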
|
9,533
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
This paper is about using other/smaller datasets to tune hyperparameters for bigger datasets:
https://papers.nips.cc/paper/5086-multi-task-bayesian-optimization.pdf
In contrast to what Jim said, I think it is not a bad idea.
|
9,534
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
I'll answer for artificial neural networks (ANNs).
The hyperparameters of ANNs may define either its learning process (e.g., learning rate or mini-batch size) or its architecture (e.g., number of hidden units or layers).
Tuning architectural hyperparameters on a subset of your training set is probably not a good idea (unless your training set really lacks diversity, i.e. increasing the training set size doesn't increase the ANN performance), since architectural hyperparameters change the capacity of the ANN.
I would be less concerned tuning the hyperparameters that define the learning process on a subset of your training set, but I guess one should validate it empirically.
|
9,535
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
You can take a look at
https://link.springer.com/chapter/10.1007/978-3-319-53480-0_27
in which we've investigated the effects of random sampling on SVM hyper-parameter tuning using 100 real-world datasets...
|
9,536
|
Is hyperparameter tuning on sample of dataset a bad idea?
|
You can use hyperparameter optimization algorithms which support multifidelity evaluations, i.e., evaluations on subsets of your data, in order to get a rough but useful estimate of the optimal hyperparameter values for the entire dataset. Such approaches typically allow you to reduce the total computational cost of hyperparameter optimization.
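One such scheme is successive halving, sketched below (a toy Python example: "training on a subset" is mocked by a noisy quadratic loss whose noise shrinks with subset size — purely an assumption for demonstration, not a real training loop). Most of the budget is spent only on surviving configurations.

```python
import numpy as np

rng = np.random.default_rng(6)

def loss(lam, n):
    # stand-in for "validation loss after training on a subset of size n":
    # true optimum at lam = 1, evaluation noise shrinking with subset size
    return np.log10(lam) ** 2 + rng.normal(scale=1.0 / np.sqrt(n))

configs = 10.0 ** rng.uniform(-3, 3, size=27)    # 27 random candidate values
budget = 100                                      # initial subset size
while len(configs) > 1:
    scores = np.array([loss(c, budget) for c in configs])
    keep = np.argsort(scores)[: max(1, len(configs) // 3)]  # keep the best third
    configs = configs[keep]
    budget *= 3                                   # larger subset for survivors
best = configs[0]
```

Early rounds use cheap low-fidelity (small-subset) evaluations to discard clearly bad configurations; only a handful ever see the largest subset, so the selected value lands near the optimum at a fraction of the full-data cost.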
|
9,537
|
Introduction to machine learning for mathematicians
|
For what you describe, I highly recommend "Foundations of Machine Learning" by Mohri et al. It is an undergraduate text, but it is for really good undergraduates. It is readable and it is the only place I have found what I would call a mathematical definition of machine learning (PAC and weak PAC). It is worth reading for that reason alone. I also have a math PhD. I'm familiar with, and like, many of the books mentioned above. I'm particularly fond of ESL for a broad spectrum of techniques and ideas, but it's a statistics book with lots of mathematics.
|
9,538
|
Introduction to machine learning for mathematicians
|
I would recommend Elements of Statistical Learning (free PDF file). It has sufficient maths and a good introduction to all the relevant techniques - together with some insights on why the techniques work (and when they don't).
Also Introduction to Statistical Learning (which is more practical - how to do it in R). There is an accompanying course on statistical learning; you can find the lectures on YouTube (and, again, a free PDF).
|
9,539
|
Introduction to machine learning for mathematicians
|
You will probably like Learning With Kernels by Schölkopf and Smola. Most of Schölkopf's work is mathematically rigorous.
That said, you are probably better off reading research papers instead of textbooks. Research papers contain full derivations and proofs of convergence, bounds on performance, etc., which are very often not included in textbooks. A good place to start is the Journal of Machine Learning Research, which is highly regarded and fully open access. I also recommend the proceedings of conferences like ICML, NIPS, COLT and IJCNN.
|
9,540
|
Introduction to machine learning for mathematicians
|
I would suggest Understanding Machine Learning: From Theory to Algorithms by Shai Shalev-Shwartz. I admit that I have read only small portions of it, but I immediately noticed the rigor with which the author approaches every problem and discussion.
|
9,541
|
Moving-average model error terms
|
MA Model Estimation:
Let us assume a series with 100 time points, and say this is characterized by MA(1) model with no intercept. Then the model is given by
$$y_t=\varepsilon_t-\theta\varepsilon_{t-1},\quad t=1,2,\cdots,100\quad (1)$$
The error term here is not observed. So to obtain this, Box et al. Time Series Analysis: Forecasting and Control (3rd Edition), page 228, suggest that the error term is computed recursively by,
$$\varepsilon_t=y_t+\theta\varepsilon_{t-1}$$
So the error term for $t=1$ is,
$$\varepsilon_{1}=y_{1}+\theta\varepsilon_{0}$$
Now we cannot compute this without knowing the value of $\theta$. To obtain it, we need an initial or preliminary estimate of the model; Box et al., Section 6.3.2, page 202, state that,
It has been shown that the first $q$ autocorrelations of MA($q$) process
are nonzero and can be written in terms of the parameters of the model
as
$$\rho_k=\displaystyle\frac{-\theta_{k}+\theta_1\theta_{k+1}+\theta_2\theta_{k+2}+\cdots+\theta_{q-k}\theta_q}{1+\theta_1^2+\theta_2^2+\cdots+\theta_q^2}\quad k=1,2,\cdots, q$$ The expression above for $\rho_1,\rho_2,\cdots,\rho_q$
in terms of $\theta_1,\theta_2,\cdots,\theta_q$ supplies $q$ equations
in $q$ unknowns. Preliminary estimates of the $\theta$s can be
obtained by substituting estimates $r_k$ for $\rho_k$ in above
equation
Note that $r_k$ is the estimated autocorrelation. There is more discussion in Section 6.3 - Initial Estimates for the Parameters; please read up on that. Now, assume we obtain the initial estimate $\theta=0.5$. Then,
$$\varepsilon_{1}=y_{1}+0.5\varepsilon_{0}$$
Now, another problem is that we don't have a value for $\varepsilon_0$ because $t$ starts at 1, and so we cannot compute $\varepsilon_1$. Luckily, there are two methods to obtain this:
Conditional Likelihood
Unconditional Likelihood
According to Box et al., Section 7.1.3, page 227, $\varepsilon_0$ can be set to zero as an approximation if $n$ is moderate or large; this method is the Conditional Likelihood. Otherwise, the Unconditional Likelihood is used, wherein the value of $\varepsilon_0$ is obtained by back-forecasting; Box et al. recommend this method. Read more about back-forecasting in Section 7.1.4, page 231.
After obtaining the initial estimates and the value of $\varepsilon_0$, we can finally proceed with the recursive calculation of the error term. The final stage is then to estimate the parameter of model $(1)$; remember, this is not the preliminary estimate anymore.
In estimating the parameter $\theta$, I use a nonlinear estimation procedure, particularly the Levenberg-Marquardt algorithm, since MA models are nonlinear in their parameters.
Overall, I would highly recommend you to read Box et al. Time Series Analysis: Forecasting and Control (3rd Edition).
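The recursion above can be sketched numerically. This toy uses a simulated series in place of real data, the conditional-likelihood start $\varepsilon_0=0$, and a crude grid search in place of Levenberg-Marquardt:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated stand-in for the 100-point series: y_t = e_t - theta * e_{t-1}.
theta_true = 0.5
e = rng.normal(size=101)
y = e[1:] - theta_true * e[:-1]   # 100 observations

def residuals(theta, y, eps0=0.0):
    """Recursion eps_t = y_t + theta * eps_{t-1}; eps0 = 0 is the
    conditional-likelihood approximation (back-forecasting would
    supply a better starting value)."""
    eps = np.empty_like(y)
    prev = eps0
    for t, yt in enumerate(y):
        eps[t] = yt + theta * prev
        prev = eps[t]
    return eps

def sse(theta):
    """Conditional sum of squares as a function of theta."""
    return float(np.sum(residuals(theta, y) ** 2))

# Crude grid search in place of Levenberg-Marquardt, to show the
# conditional sum of squares is minimised near the true theta.
grid = np.linspace(-0.9, 0.9, 181)
theta_hat = float(grid[np.argmin([sse(th) for th in grid])])
```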
|
9,542
|
Moving-average model error terms
|
A Gaussian MA(q) model is defined (not only by Box and Jenkins!) as
$$
Y_t = -\sum_{i=1}^q \vartheta_i e_{t-i} + \sigma e_t,\quad e_t\stackrel{\text{iid}}{\sim} \mathcal{N}(0,1)
$$
so the MA(q) model is a "pure" error model, the degree $q$ defining how far the correlation goes back.
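A quick simulation from this definition (the function name and data are illustrative; the sign convention follows the formula above) shows the characteristic cut-off: the autocorrelation vanishes beyond lag $q$:

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_ma(theta, sigma, n, rng):
    """Draw n points from Y_t = -sum_i theta_i e_{t-i} + sigma e_t,
    with e_t iid N(0, 1)."""
    q = len(theta)
    e = rng.normal(size=n + q)
    y = np.empty(n)
    for t in range(n):
        lags = e[t:t + q][::-1]          # e_{t-1}, ..., e_{t-q}
        y[t] = sigma * e[t + q] - np.dot(theta, lags)
    return y

y = simulate_ma([0.6], 1.0, 5000, rng)
yc = y - y.mean()

def acf(k):
    """Sample autocorrelation at lag k."""
    return float(np.dot(yc[:-k], yc[k:]) / np.dot(yc, yc))

# For MA(1): rho_1 = -theta / (1 + theta^2) ~ -0.441, rho_k = 0 for k > 1.
rho1, rho2 = acf(1), acf(2)
```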
|
9,543
|
Moving-average model error terms
|
See my post here for an explanation of how to understand the disturbance terms in a MA series.
You need different estimation techniques to estimate them. This is because you cannot first get the residuals of a linear regression and then include the lagged residual values as explanatory variables: the MA process uses the residuals of the current regression. In your example you are setting up two regression equations and feeding the residuals from one into the other. This is not what an MA process is. It cannot be estimated with OLS.
|
9,544
|
Moving-average model error terms
|
You say "the observation $Y$ is first regressed against its previous values $Y_{t-1},...,Y_{t-n}$ and then one or more $Y-\hat{Y}$ values are used as the error terms for the MA model." What I say is that $Y$ is regressed against two predictor series $e_{t-1}$ and $e_{t-2}$, yielding an error process $e_t$ which will be uncorrelated for all $t=3,4,\ldots,n$. We then have two regression coefficients: $\theta_1$ representing the impact of $e_{t-1}$ and $\theta_2$ representing the impact of $e_{t-2}$. Thus $e_t$ is a white-noise random series containing $n-2$ values. Since we have $n-2$ estimable relationships, we start with the assumption that $e_1$ and $e_2$ are equal to 0.0. Now for any pair of $\theta_1$ and $\theta_2$ we can estimate the $n-2$ residual values. The combination that yields the smallest error sum of squares then gives the best estimates of $\theta_1$ and $\theta_2$.
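A minimal sketch of that procedure, on simulated MA(2) data (the parameter values and grid are illustrative), with starting values $e_1=e_2=0$ and a coarse grid search over the pair:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated MA(2): y_t = e_t - 0.4 e_{t-1} - 0.3 e_{t-2}.
e = rng.normal(size=302)
y = e[2:] - 0.4 * e[1:-1] - 0.3 * e[:-2]   # n = 300 observations

def css(theta1, theta2):
    """Error sum of squares with starting values e_1 = e_2 = 0."""
    prev1 = prev2 = 0.0
    total = 0.0
    for yt in y:
        et = yt + theta1 * prev1 + theta2 * prev2
        total += et * et
        prev2, prev1 = prev1, et
    return total

# Coarse grid search for the pair minimising the error sum of squares.
grid = np.linspace(-0.8, 0.8, 33)
sse_hat, th1_hat, th2_hat = min((css(a, b), a, b) for a in grid for b in grid)
```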
|
9,545
|
Moving-average model error terms
|
With the Hannan–Rissanen (1982) algorithm for fitting the parameters of an ARIMA model, you actually always do an AR regression as the first step, even for a pure MA model:

1. An AR(m) model (with $m > max(p, q)$) is fitted to the data.
2. Compute error terms for all $t$: $\epsilon_t = y_t - \hat{y}_t$.
3. Regress $y_t$ on $y^{(d)}_{t-1},..,y^{(d)}_{t-p},\epsilon_{t-1},...,\epsilon_{t-q}$. (For a pure MA model the regression is done only against the error terms $\epsilon_{t-1},...,\epsilon_{t-q}$.)
4. To improve accuracy, optionally regress again with the updated model parameters $\phi,\theta$ from step 3.

So your suspicion that one needs some kind of model first, before one can compute the error part, can be turned into an iterative algorithm for fitting the parameters of an ARIMA model.
See also
Brockwell, Davis (2016) Introduction to Time Series and Forecasting, chapter 5.1.4
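Steps 1-3 can be sketched with plain least squares for a pure MA(1). The data are simulated for illustration, and the plus sign convention $y_t = e_t + \theta e_{t-1}$ is assumed here so the final regression coefficient recovers $\theta$ directly:

```python
import numpy as np

rng = np.random.default_rng(4)

# Simulated pure MA(1) with the plus sign convention: y_t = e_t + 0.6 e_{t-1}.
e = rng.normal(size=1001)
y = e[1:] + 0.6 * e[:-1]

# Step 1: fit a long AR(m), m > max(p, q), by ordinary least squares.
m = 10
lagmat = np.array([y[t - m:t][::-1] for t in range(m, len(y))])
phi, *_ = np.linalg.lstsq(lagmat, y[m:], rcond=None)

# Step 2: error terms eps_t = y_t - yhat_t from the AR fit.
eps = y[m:] - lagmat @ phi

# Step 3: pure MA(1), so regress y_t on eps_{t-1} alone.
theta_hat, *_ = np.linalg.lstsq(eps[:-1].reshape(-1, 1), y[m + 1:], rcond=None)
theta_hat = float(theta_hat[0])
```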
|
9,546
|
Interpretation of ridge regularization in regression
|
Good questions!
Yes, this is exactly correct. You can see ridge penalty as one possible way to deal with multicollinearity problem that arises when many predictors are highly correlated. Introducing ridge penalty effectively lowers these correlations.
I think this is partly tradition, partly the fact that ridge regression formula as stated in your first equation follows from the following cost function: $$L=\| \mathbf y - \mathbf X \beta \|^2 + \lambda \|\beta\|^2.$$ If $\lambda=0$, the second term can be dropped, and minimizing the first term ("reconstruction error") leads to the standard OLS formula for $\beta$. Keeping the second term leads to the formula for $\beta_\mathrm{ridge}$. This cost function is mathematically very convenient to deal with, and this might be one of the reasons for preferring "non-normalized" lambda.
One possible way to normalize $\lambda$ is to scale it by the total variance $\mathrm{tr}(\mathbf X^\top \mathbf X)$, i.e. to use $\lambda \mathrm{tr}(\mathbf X^\top \mathbf X)$ instead of $\lambda$. This would not necessarily confine $\lambda$ to $[0,1]$, but would make it "dimensionless" and would probably result in the optimal $\lambda$ being less than $1$ in all practical cases (NB: this is just a guess!).
"Attacking only small eigenvalues" does have a separate name and is called principal components regression. The connection between PCR and ridge regression is that in PCR you effectively have a "step penalty" cutting off all the eigenvalues after a certain number, whereas ridge regression applies a "soft penalty", penalizing all eigenvalues, with smaller ones getting penalized more. This is nicely explained in The Elements of Statistical Learning by Hastie et al. (freely available online), section 3.4.1. See also my answer in Relationship between ridge regression and PCA regression.
I have never seen this done, but note that you could consider a cost function in the form $$L=\| \mathbf y - \mathbf X \beta \|^2 + \lambda \|\beta-\beta_0\|^2.$$ This shrinks your $\beta$ not to zero, but to some other pre-defined value $\beta_0$. If one works out the math, you will arrive to the optimal $\beta$ given by $$\beta = (\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} (\mathbf X^\top \mathbf y + \lambda \beta_0),$$ which perhaps can be seen as "regularizing cross-covariance"?
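The last closed form can be checked numerically on toy data: minimising $\| \mathbf y - \mathbf X \beta \|^2 + \lambda \|\beta-\beta_0\|^2$ via an augmented least-squares problem reproduces $(\mathbf X^\top \mathbf X + \lambda \mathbf I)^{-1} (\mathbf X^\top \mathbf y + \lambda \beta_0)$ (the dimensions here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy problem: check beta = (X'X + lam I)^{-1} (X'y + lam beta0) against a
# direct least-squares minimisation of ||y - X b||^2 + lam ||b - beta0||^2.
n, d, lam = 200, 5, 3.0
X = rng.normal(size=(n, d))
y = rng.normal(size=n)
beta0 = rng.normal(size=d)

beta_closed = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y + lam * beta0)

# Augmented least-squares trick: stack sqrt(lam) I under X and
# sqrt(lam) beta0 under y; ordinary lstsq then minimises the penalised cost.
X_aug = np.vstack([X, np.sqrt(lam) * np.eye(d)])
y_aug = np.concatenate([y, np.sqrt(lam) * beta0])
beta_aug, *_ = np.linalg.lstsq(X_aug, y_aug, rcond=None)
```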
|
9,547
|
Interpretation of ridge regularization in regression
|
A further comment on question 4. Actually, ridge regression does pretty effectively deal with the small eigenvalues of $X^{T}X$ while mostly leaving the large eigenvalues alone.
To see this, express the ridge regression estimator in terms of the singular value decomposition of $X$,
$$X=\sum_{i=1}^{n} \sigma_{i}u_{i}v_{i}^{T}$$
where the $u_{i}$ vectors are mutually orthogonal and the $v_{i}$ vectors are
also mutually orthogonal. Here the eigenvalues of $X^{T}X$ are $\sigma_{i}^{2}$, $i=1, 2, \ldots, n$.
Then you can show that
$$\beta_{\mbox{ridge}}=\sum_{i=1}^{n} \frac{\sigma_{i}^{2}}{\sigma_{i}^{2}+\lambda}\frac{1}{\sigma_{i}} (u_{i}^{T}y) v_{i}.$$
Now, consider the "filter factors" $\sigma_{i}^{2}/(\sigma_{i}^{2}+\lambda)$. If $\lambda=0$, then the filter factors are 1, and we get the conventional least squares solution. If $\lambda > 0$ and $\sigma_{i}^{2} \gg \lambda$, then the filter factor is essentially 1. If $\sigma_{i}^{2} \ll \lambda$, then this factor is essentially 0. Thus the terms corresponding to the small eigenvalues effectively drop out, while those corresponding to the larger eigenvalues are retained.
In comparison, principal components regression simply uses factors of 1 (for the larger eigenvalues) or 0 (for the smaller eigenvalues that are dropped) in this formula.
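The filter-factor form can be verified numerically against the direct normal-equations solution (toy data for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy data to verify the filter-factor form of the ridge estimator.
n, d, lam = 50, 6, 2.0
X = rng.normal(size=(n, d))
y = rng.normal(size=n)

# Direct solution of (X'X + lam I) beta = X'y.
beta_direct = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

# SVD form: beta = sum_i [s_i^2 / (s_i^2 + lam)] (1 / s_i) (u_i' y) v_i.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
filt = s**2 / (s**2 + lam)        # filter factors, each in (0, 1)
beta_svd = Vt.T @ ((filt / s) * (U.T @ y))
```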
|
9,548
|
Interpretation of ridge regularization in regression
|
Questions 1, 2 and 3 are linked. I like to think that yes, introducing a Ridge penalty in a linear regression model can be interpreted as a shrinkage on the eigen-values of $X$. In order to make this interpretation, one has first to make the assumption that $X$ is centered. This interpretation is based on the following equivalence:
$$
\lambda x + y = \kappa \left( \alpha x + (1-\alpha) y\right),
$$
with $\alpha=\frac{\lambda}{1+\lambda}$ and $\kappa = 1+\lambda$. If $0 \leq \lambda < + \infty$, it immediately follows that $0 < \alpha \leq 1$.
The technique you describe as "attack[ing] only the singular or near singular values" is also known as Singular Spectrum Analysis (for the purpose of linear regression) (see Eq. 19), if by "attacking", you mean "removing". The cross-covariance is unchanged.
Removing low singular values is also done by Principal Component Regression. In PCR, a PCA is performed on $X$ and a linear regression is applied on a selection of the obtained components. The difference with SSA is that it has an impact on the cross-covariance.
|
9,549
|
Introductory reading on Copulas
|
A concise introduction is T. Schmidt 2008 - Copulas and dependent measurement.
Also noteworthy is Embrechts 2009 - Copulas - A personal view.
For Schmidt I could not provide a better summary than the section titles. It provides basic definitions, intuition and examples. The discussion of sampling is bare-bones, and a brief literature review covers the must-haves. As for Embrechts, apart from the obligatory definitions, properties and examples, the discussion is interesting since it touches on drawbacks and some critical remarks made about copula modeling over the years. The bibliography there is more extensive and covers most works that one should read.
|
Introductory reading on Copulas
|
A concise introduction is T. Schmidt 2008 - Copulas and dependent measurement.
Also noteworthy is Embrechts 2009 - Copulas - A personal view.
For Schmidt I could not provide a better summary than the
|
Introductory reading on Copulas
A concise introduction is T. Schmidt 2008 - Copulas and dependent measurement.
Also noteworthy is Embrechts 2009 - Copulas - A personal view.
For Schmidt I could not provide a better summary than the section titles. It provides basic definitions, intuition and examples. The discussion of sampling is bare-bones, and a brief literature review covers the must-haves. As for Embrechts, apart from the obligatory definitions, properties and examples, the discussion is interesting since it touches on drawbacks and some critical remarks made about copula modeling over the years. The bibliography here is more extensive and covers most works that one should read.
|
Introductory reading on Copulas
A concise introduction is T. Schmidt 2008 - Copulas and dependent measurement.
Also noteworthy is Embrechts 2009 - Copulas - A personal view.
For Schmidt I could not provide a better summary than the
|
9,550
|
Introductory reading on Copulas
|
Chris Genest has another introductory paper "Everything You Always Wanted to Know about Copula Modeling but Were Afraid to Ask".
|
Introductory reading on Copulas
|
Chris Genest has another introductory paper "Everything You Always Wanted to Know about Copula Modeling but Were Afraid to Ask".
|
Introductory reading on Copulas
Chris Genest has another introductory paper "Everything You Always Wanted to Know about Copula Modeling but Were Afraid to Ask".
|
Introductory reading on Copulas
Chris Genest has another introductory paper "Everything You Always Wanted to Know about Copula Modeling but Were Afraid to Ask".
|
9,551
|
Introductory reading on Copulas
|
A good layperson introduction to copulas and their use in quantitative finance is
http://archive.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
The concept of correlation of probabilities is illustrated by two elementary school students Alice and Britney. It also discusses how prices of credit default swaps are used as a shortcut to the traditional rating process, as well as dangers of linking all of these together.
|
Introductory reading on Copulas
|
A good layperson introduction to copulas and their use in quantitative finance is
http://archive.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
The concept of correlation of probabilities i
|
Introductory reading on Copulas
A good layperson introduction to copulas and their use in quantitative finance is
http://archive.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
The concept of correlation of probabilities is illustrated by two elementary school students Alice and Britney. It also discusses how prices of credit default swaps are used as a shortcut to the traditional rating process, as well as dangers of linking all of these together.
|
Introductory reading on Copulas
A good layperson introduction to copulas and their use in quantitative finance is
http://archive.wired.com/techbiz/it/magazine/17-03/wp_quant?currentPage=all
The concept of correlation of probabilities i
|
9,552
|
Introductory reading on Copulas
|
I recommend this paper as a must read: Li, David X. "On default correlation: A copula function approach." The Journal of Fixed Income 9.4 (2000): 43-54. Here's the PDF. It explains what copula is and how it can be used in the financial application. It's a nice easy read.
This should be followed by an article by Felix Salmon, "Recipe for Disaster: The Formula That Killed Wall Street". Here is how it starts:
A year ago, it was hardly unthinkable that a math wizard like David X.
Li might someday earn a Nobel Prize. After all, financial
economists—even Wall Street quants—have received the Nobel in
economics before, and Li's work on measuring risk has had more impact,
more quickly, than previous Nobel Prize-winning contributions to the
field. Today, though, as dazed bankers, politicians, regulators, and
investors survey the wreckage of the biggest financial meltdown since
the Great Depression, Li is probably thankful he still has a job in
finance at all. Not that his achievement should be dismissed. He took
a notoriously tough nut—determining correlation, or how seemingly
disparate events are related—and cracked it wide open with a simple
and elegant mathematical formula, one that would become ubiquitous in
finance worldwide.
Copulas are used to recover the joint probability function when only marginals are observed or available. One problem is that the joint probability may not be static, which seems to be the case with their use in default risk estimation. These two readings demonstrate that. Copulas worked fine in insurance, where the joint is very stable, such as death rate of spouses.
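The mechanics that Li's approach relies on — gluing arbitrary marginals onto a Gaussian dependence structure — can be sketched in a few lines. This is an illustrative sketch only: the correlation and the marginal distributions below are made up, not taken from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
rho = 0.7
cov = np.array([[1.0, rho], [rho, 1.0]])

# 1) sample correlated standard normals,
# 2) map them to uniforms with the normal CDF (the uniforms carry the dependence),
# 3) apply arbitrary marginal inverse CDFs -- exponential and lognormal here.
z = rng.multivariate_normal([0.0, 0.0], cov, size=100000)
u = stats.norm.cdf(z)
t1 = stats.expon.ppf(u[:, 0], scale=5.0)    # e.g. a "default time" marginal
t2 = stats.lognorm.ppf(u[:, 1], s=0.5)      # a different marginal, same dependence
```

Because ranks are invariant under the monotone marginal transforms, the Spearman correlation of `(t1, t2)` matches that of the underlying Gaussian pair, $\frac{6}{\pi}\arcsin(\rho/2)$ — the dependence and the marginals are fully decoupled, which is exactly the copula idea.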
|
Introductory reading on Copulas
|
I recommend this paper as a must read: Li, David X. "On default correlation: A copula function approach." The Journal of Fixed Income 9.4 (2000): 43-54. Here's the PDF. It explains what copula is and
|
Introductory reading on Copulas
I recommend this paper as a must read: Li, David X. "On default correlation: A copula function approach." The Journal of Fixed Income 9.4 (2000): 43-54. Here's the PDF. It explains what copula is and how it can be used in the financial application. It's a nice easy read.
This should be followed by an article by Felix Salmon, "Recipe for Disaster: The Formula That Killed Wall Street". Here is how it starts:
A year ago, it was hardly unthinkable that a math wizard like David X.
Li might someday earn a Nobel Prize. After all, financial
economists—even Wall Street quants—have received the Nobel in
economics before, and Li's work on measuring risk has had more impact,
more quickly, than previous Nobel Prize-winning contributions to the
field. Today, though, as dazed bankers, politicians, regulators, and
investors survey the wreckage of the biggest financial meltdown since
the Great Depression, Li is probably thankful he still has a job in
finance at all. Not that his achievement should be dismissed. He took
a notoriously tough nut—determining correlation, or how seemingly
disparate events are related—and cracked it wide open with a simple
and elegant mathematical formula, one that would become ubiquitous in
finance worldwide.
Copulas are used to recover the joint probability function when only marginals are observed or available. One problem is that the joint probability may not be static, which seems to be the case with their use in default risk estimation. These two readings demonstrate that. Copulas worked fine in insurance, where the joint is very stable, such as death rate of spouses.
|
Introductory reading on Copulas
I recommend this paper as a must read: Li, David X. "On default correlation: A copula function approach." The Journal of Fixed Income 9.4 (2000): 43-54. Here's the PDF. It explains what copula is and
|
9,553
|
Introductory reading on Copulas
|
Another good introduction is An introduction to copulas (Nelsen 2006).
|
Introductory reading on Copulas
|
Another good introduction is An introduction to copulas (Nelsen 2006).
|
Introductory reading on Copulas
Another good introduction is An introduction to copulas (Nelsen 2006).
|
Introductory reading on Copulas
Another good introduction is An introduction to copulas (Nelsen 2006).
|
9,554
|
What are some illustrative applications of empirical likelihood?
|
I can think of no better place than Owen's book to learn about empirical likelihood.
One practical way to think about $L = L(p_1, \ldots, p_n)$ is as the likelihood for a multinomial distribution on the observed data points $x_1, \ldots, x_n$. The likelihood is thus a function of the probability vector $(p_1, \ldots, p_n)$, the parameter space is really the $n$-dimensional simplex of probability vectors, and the MLE is putting weight $1/n$ on each of the observations (supposing they are all different). The dimension of the parameter space increases with the number of observations.
A central point is that empirical likelihood gives a method for computing confidence intervals by profiling without specifying a parametric model. If the parameter of interest is the mean, $\mu$, then for any probability vector $p = (p_1, \ldots, p_n)$ we have that the mean is
$$\mu(p) = \sum_{i=1}^n x_i p_i,$$
and we can compute the profile likelihood as
$$L_{\text{prof}}(\mu) = \max \{ L(p) \mid \mu(p) = \mu \}.$$
Then we can compute confidence intervals of the form
$$I_r = \{ \mu \mid L_{\text{prof}}(\mu) \geq r L_{\text{prof}}(\bar{x}) \}$$
with $r \in (0,1)$. Here $\bar{x}$ is the empirical mean and $L_{\text{prof}}(\bar{x}) = n^{-n}$. The intervals $I_r$ should perhaps just be called (profile) likelihood intervals since no statement about coverage is made upfront. With decreasing $r$ the intervals $I_r$ (yes, they are intervals) form a nested, increasing family of confidence intervals. Asymptotic theory or the bootstrap can be used to calibrate $r$ to achieve 95% coverage, say.
Owen's book covers this in detail and provides extensions to more complicated statistical problems and other parameters of interest.
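As a concrete illustration of the profiling step (a minimal sketch of my own, not code from Owen's book): the constrained maximiser has weights $p_i = 1/\{n(1+\lambda(x_i-\mu))\}$, where the Lagrange multiplier $\lambda$ solves $\sum_i (x_i-\mu)/(1+\lambda(x_i-\mu)) = 0$, so the profile log-likelihood ratio can be computed with a one-dimensional root find:

```python
import numpy as np
from scipy.optimize import brentq

def el_log_ratio(x, mu, eps=1e-8):
    """log( L_prof(mu) / L_prof(xbar) ) for the mean of a sample x.

    Weights are p_i = 1 / (n * (1 + lam * (x_i - mu))), where lam solves
    sum_i (x_i - mu) / (1 + lam * (x_i - mu)) = 0.
    """
    x = np.asarray(x, dtype=float)
    d = x - mu
    if mu <= x.min() or mu >= x.max():
        return -np.inf          # mu must lie inside the convex hull of the data
    g = lambda lam: np.sum(d / (1.0 + lam * d))
    # bracket keeps every 1 + lam*(x_i - mu) strictly positive
    lam = brentq(g, -1.0 / d.max() + eps, -1.0 / d.min() - eps)
    return -np.sum(np.log1p(lam * d))
```

A likelihood interval $I_r$ then collects all $\mu$ with `el_log_ratio(x, mu) >= log(r)`; calibrating via the asymptotic result $-2\log r \approx \chi^2_{1,0.95} = 3.84$ gives roughly 95% coverage.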
|
What are some illustrative applications of empirical likelihood?
|
I can think of no better place than Owen's book to learn about empirical likelihood.
One practical way to think about $L = L(p_1, \ldots, p_n)$ is as the likelihood for a multinomial distribution on t
|
What are some illustrative applications of empirical likelihood?
I can think of no better place than Owen's book to learn about empirical likelihood.
One practical way to think about $L = L(p_1, \ldots, p_n)$ is as the likelihood for a multinomial distribution on the observed data points $x_1, \ldots, x_n$. The likelihood is thus a function of the probability vector $(p_1, \ldots, p_n)$, the parameter space is really the $n$-dimensional simplex of probability vectors, and the MLE is putting weight $1/n$ on each of the observations (supposing they are all different). The dimension of the parameter space increases with the number of observations.
A central point is that empirical likelihood gives a method for computing confidence intervals by profiling without specifying a parametric model. If the parameter of interest is the mean, $\mu$, then for any probability vector $p = (p_1, \ldots, p_n)$ we have that the mean is
$$\mu(p) = \sum_{i=1}^n x_i p_i,$$
and we can compute the profile likelihood as
$$L_{\text{prof}}(\mu) = \max \{ L(p) \mid \mu(p) = \mu \}.$$
Then we can compute confidence intervals of the form
$$I_r = \{ \mu \mid L_{\text{prof}}(\mu) \geq r L_{\text{prof}}(\bar{x}) \}$$
with $r \in (0,1)$. Here $\bar{x}$ is the empirical mean and $L_{\text{prof}}(\bar{x}) = n^{-n}$. The intervals $I_r$ should perhaps just be called (profile) likelihood intervals since no statement about coverage is made upfront. With decreasing $r$ the intervals $I_r$ (yes, they are intervals) form a nested, increasing family of confidence intervals. Asymptotic theory or the bootstrap can be used to calibrate $r$ to achieve 95% coverage, say.
Owen's book covers this in detail and provides extensions to more complicated statistical problems and other parameters of interest.
|
What are some illustrative applications of empirical likelihood?
I can think of no better place than Owen's book to learn about empirical likelihood.
One practical way to think about $L = L(p_1, \ldots, p_n)$ is as the likelihood for a multinomial distribution on t
|
9,555
|
What are some illustrative applications of empirical likelihood?
|
In econometrics, many applied papers start with the assumption that
$$
E[g(X,\theta)] = 0
$$
where $X$ is a vector of data, $g$ is a known system of $q$ equations, and $\theta \in \Theta \subseteq \mathbb{R}^p$ is an unknown parameter, $q \geq p$. The function $g$ comes from an economic model. The goal is to estimate $\theta$.
The traditional approach, in econometrics, for estimation and inference on $\theta$ is to use generalized method of moments:
$$
\hat{\theta}_\text{GMM} = \text{argmin}_{\theta \in \Theta} \; \bar{g}_n(\theta) 'W \bar{g}_n(\theta)
$$
where $W$ is a positive definite weighting matrix and
$$
\bar{g}_n(\theta) := \frac{1}{n} \sum_{i=1}^n g(X_i,\theta).
$$
Empirical likelihood provides an alternative estimator to GMM. The idea is to enforce the moment condition as a constraint when maximizing the nonparametric likelihood. First, fix a $\theta$. Then solve
$$
L(\theta) = \max_{p_1,\ldots,p_n} \; \prod_{i=1}^n p_i
$$
subject to
$$
\sum_{i=1}^n p_i=1,
\qquad
p_i \geq 0,
\qquad
\sum_{i=1}^n p_i \cdot g(X_i,\theta) = 0.
$$
This is the 'inner loop'. Then maximize over $\theta$:
$$
\hat{\theta}_\text{EL} = \text{argmax}_{\theta \in \Theta} \; \log L(\theta).
$$
This approach has been shown to have better higher order properties than GMM (see Newey and Smith 2004, Econometrica), which is one reason why it is preferable over GMM. For additional reference, see the notes and lecture by Imbens and Wooldridge here (lecture 15).
There are of course many other reasons why EL has garnered attention in econometrics, but I hope this is a useful starting place. Moment equality models are very common in empirical economics.
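For concreteness, here is a toy version of the GMM step that EL competes with. This is illustrative only: the moment system $g(X,\theta) = (X-\theta,\; X^2-\theta^2-1)$ comes from assuming $X \sim N(\theta, 1)$ (so $q = 2 > p = 1$, an overidentified case), and $W = I$ rather than an efficient weighting matrix:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
theta_true = 2.0
x = rng.normal(theta_true, 1.0, size=5000)

def gbar(theta):
    # sample analogues of the two moment conditions implied by X ~ N(theta, 1):
    # E[X - theta] = 0  and  E[X^2 - theta^2 - 1] = 0
    return np.array([x.mean() - theta, (x**2).mean() - theta**2 - 1.0])

def objective(theta):
    # GMM criterion  gbar(theta)' W gbar(theta)  with W = I
    g = gbar(theta)
    return g @ g

res = minimize_scalar(objective, bounds=(0.0, 4.0), method="bounded")
theta_hat = res.x
```

The EL estimator would instead solve the inner constrained problem over $(p_1,\ldots,p_n)$ at each candidate $\theta$ and maximize $\log L(\theta)$ in the outer loop.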
|
What are some illustrative applications of empirical likelihood?
|
In econometrics, many applied papers start with the assumption that
$$
E[g(X,\theta)] = 0
$$
where $X$ is a vector of data, $g$ is a known system of $q$ equations, and $\theta \in \Theta \subseteq \ma
|
What are some illustrative applications of empirical likelihood?
In econometrics, many applied papers start with the assumption that
$$
E[g(X,\theta)] = 0
$$
where $X$ is a vector of data, $g$ is a known system of $q$ equations, and $\theta \in \Theta \subseteq \mathbb{R}^p$ is an unknown parameter, $q \geq p$. The function $g$ comes from an economic model. The goal is to estimate $\theta$.
The traditional approach, in econometrics, for estimation and inference on $\theta$ is to use generalized method of moments:
$$
\hat{\theta}_\text{GMM} = \text{argmin}_{\theta \in \Theta} \; \bar{g}_n(\theta) 'W \bar{g}_n(\theta)
$$
where $W$ is a positive definite weighting matrix and
$$
\bar{g}_n(\theta) := \frac{1}{n} \sum_{i=1}^n g(X_i,\theta).
$$
Empirical likelihood provides an alternative estimator to GMM. The idea is to enforce the moment condition as a constraint when maximizing the nonparametric likelihood. First, fix a $\theta$. Then solve
$$
L(\theta) = \max_{p_1,\ldots,p_n} \; \prod_{i=1}^n p_i
$$
subject to
$$
\sum_{i=1}^n p_i=1,
\qquad
p_i \geq 0,
\qquad
\sum_{i=1}^n p_i \cdot g(X_i,\theta) = 0.
$$
This is the 'inner loop'. Then maximize over $\theta$:
$$
\hat{\theta}_\text{EL} = \text{argmax}_{\theta \in \Theta} \; \log L(\theta).
$$
This approach has been shown to have better higher order properties than GMM (see Newey and Smith 2004, Econometrica), which is one reason why it is preferable over GMM. For additional reference, see the notes and lecture by Imbens and Wooldridge here (lecture 15).
There are of course many other reasons why EL has garnered attention in econometrics, but I hope this is a useful starting place. Moment equality models are very common in empirical economics.
|
What are some illustrative applications of empirical likelihood?
In econometrics, many applied papers start with the assumption that
$$
E[g(X,\theta)] = 0
$$
where $X$ is a vector of data, $g$ is a known system of $q$ equations, and $\theta \in \Theta \subseteq \ma
|
9,556
|
What are some illustrative applications of empirical likelihood?
|
In survival analysis, the Kaplan-Meier curve is the most famous non-parametric estimator of the survival function $S(t) = Pr(T > t)$, where $T$ denotes the time-to-event random variable. Basically, $\hat{S}$ is a generalisation of the empirical distribution function which allows censoring. It can be derived heuristically, as given in most practical textbooks. But it can also be formally derived as a maximum (empirical) likelihood estimator. Here are more details.
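A bare-bones version of the estimator (a hypothetical helper of mine, using the usual product-limit form $\hat S(t) = \prod_{t_i \le t} (1 - d_i/n_i)$ over distinct event times):

```python
import numpy as np

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of S(t) at each distinct event time.

    times: observed times; events: 1 if the event occurred, 0 if censored.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=int)
    t_ev = np.unique(times[events == 1])
    surv, s = [], 1.0
    for t in t_ev:
        n_at_risk = np.sum(times >= t)                  # n_i: still at risk at t
        d = np.sum((times == t) & (events == 1))        # d_i: events at t
        s *= 1.0 - d / n_at_risk
        surv.append(s)
    return t_ev, np.array(surv)
```

With no censoring this reduces to $1 - \hat F(t)$, one minus the empirical distribution function, which is the generalisation mentioned above.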
|
What are some illustrative applications of empirical likelihood?
|
In survival analysis, the Kaplan-Meier curve is the most famous non-parametric estimator of the survival function $S(t) = Pr(T > t)$, where $T$ denotes the time-to-event random variable. Basically, $\
|
What are some illustrative applications of empirical likelihood?
In survival analysis, the Kaplan-Meier curve is the most famous non-parametric estimator of the survival function $S(t) = Pr(T > t)$, where $T$ denotes the time-to-event random variable. Basically, $\hat{S}$ is a generalisation of the empirical distribution function which allows censoring. It can be derived heuristically, as given in most practical textbooks. But it can also be formally derived as a maximum (empirical) likelihood estimator. Here are more details.
|
What are some illustrative applications of empirical likelihood?
In survival analysis, the Kaplan-Meier curve is the most famous non-parametric estimator of the survival function $S(t) = Pr(T > t)$, where $T$ denotes the time-to-event random variable. Basically, $\
|
9,557
|
How to derive the probabilistic interpretation of the AUC?
|
First thing, let's try to define the area under the ROC curve formally. Some assumptions and definitions:
We have a probabilistic classifier that outputs a "score" s(x), where x are the features, and s is a generic increasing monotonic function of the estimated probability p(class = 1|x).
$f_{k}(s)$, with $k \in \{0, 1\}$ := pdf of the scores for class k, with CDF $F_{k}(s)$
The classification of a new observation is obtained by comparing the score s to a threshold t
Furthermore, for mathematical convenience, let's consider the positive class (event detected) k = 0, and negative k = 1. In this setting we can define:
Recall (aka Sensitivity, aka TPR): $F_{0}(t)$ (proportion of positive cases classified as positive)
Specificity (aka TNR): $1 - F_{1}(t)$ (proportion of negative cases classified as negative)
FPR (aka Fall-out): 1 - TNR = $F_{1}(t)$
The ROC curve is then a plot of $F_{0}(t)$ against $F_{1}(t)$. Setting $v = F_1(s)$, we can formally define the area under the ROC curve as:
$$AUC =\int_{0}^{1} F_{0}(F_{1}^{-1}(v)) dv$$
Changing variable ($dv = f_{1}(s)ds$):
$$AUC =\int_{ - \infty}^{\infty} F_{0}(s) f_{1}(s)ds$$
This formula can easily be seen to be the probability that a randomly drawn member of class 0 will produce a score lower than the score of a randomly drawn member of class 1.
This proof is taken from:
https://pdfs.semanticscholar.org/1fcb/f15898db36990f651c1e5cdc0b405855de2c.pdf
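Keeping this answer's convention (class 0 is the positive, low-score class), the identity $AUC = \int F_0(s) f_1(s)\,ds = P(S_0 < S_1)$ can be sanity-checked by simulation. The Gaussian score distributions below are arbitrary choices for illustration:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
s0 = rng.normal(0.0, 1.0, 100000)   # scores of class-0 (positive) cases
s1 = rng.normal(1.0, 1.0, 100000)   # scores of class-1 (negative) cases

# Monte Carlo version of  AUC = int F_0(s) f_1(s) ds  =  E[ F_0(S_1) ],
# using the known CDF of the class-0 score distribution
auc_integral = stats.norm.cdf(s1, loc=0.0, scale=1.0).mean()

# direct pairwise estimate of P(S_0 < S_1) on a subsample of pairs
auc_pairs = (s0[:3000, None] < s1[None, :3000]).mean()
```

Both estimates converge to the closed form $\Phi\!\big(1/\sqrt{2}\big) \approx 0.76$ for these two Gaussians.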
|
How to derive the probabilistic interpretation of the AUC?
|
First thing, let's try to define the area under the ROC curve formally. Some assumptions and definitions:
We have a probabilistic classifier that outputs a "score" s(x), where x are the features, and
|
How to derive the probabilistic interpretation of the AUC?
First thing, let's try to define the area under the ROC curve formally. Some assumptions and definitions:
We have a probabilistic classifier that outputs a "score" s(x), where x are the features, and s is a generic increasing monotonic function of the estimated probability p(class = 1|x).
$f_{k}(s)$, with $k \in \{0, 1\}$ := pdf of the scores for class k, with CDF $F_{k}(s)$
The classification of a new observation is obtained by comparing the score s to a threshold t
Furthermore, for mathematical convenience, let's consider the positive class (event detected) k = 0, and negative k = 1. In this setting we can define:
Recall (aka Sensitivity, aka TPR): $F_{0}(t)$ (proportion of positive cases classified as positive)
Specificity (aka TNR): $1 - F_{1}(t)$ (proportion of negative cases classified as negative)
FPR (aka Fall-out): 1 - TNR = $F_{1}(t)$
The ROC curve is then a plot of $F_{0}(t)$ against $F_{1}(t)$. Setting $v = F_1(s)$, we can formally define the area under the ROC curve as:
$$AUC =\int_{0}^{1} F_{0}(F_{1}^{-1}(v)) dv$$
Changing variable ($dv = f_{1}(s)ds$):
$$AUC =\int_{ - \infty}^{\infty} F_{0}(s) f_{1}(s)ds$$
This formula can easily be seen to be the probability that a randomly drawn member of class 0 will produce a score lower than the score of a randomly drawn member of class 1.
This proof is taken from:
https://pdfs.semanticscholar.org/1fcb/f15898db36990f651c1e5cdc0b405855de2c.pdf
|
How to derive the probabilistic interpretation of the AUC?
First thing, let's try to define the area under the ROC curve formally. Some assumptions and definitions:
We have a probabilistic classifier that outputs a "score" s(x), where x are the features, and
|
9,558
|
How to derive the probabilistic interpretation of the AUC?
|
@alebu's answer is great. But its notation is nonstandard and uses 0 for the positive class and 1 for the negative class. Below are the results for the standard notation (0 for the negative class and 1 for the positive class):
Pdf and cdf of the score for negative class: $f_0(s)$ and $F_0(s)$
Pdf and cdf of the score for positive class: $f_1(s)$ and $F_1(s)$
FPR = $x(s) = 1-F_0(s)$
TPR = $y(s) = 1-F_1(s)$
Now, using that $dx(\tau)=x'(\tau)d\tau$
and $y(x(\tau))=y(\tau)$ by definition of the ROC curve.
$$\begin{align}
\text{AUC} &= \int_0^1 y(x) dx\\
&= \int_0^1 y(x(\tau)) dx(\tau) \\
&= \int_{+\infty}^{-\infty} y(\tau) x'(\tau) d\tau \\
&= \int_{+\infty}^{-\infty} \big( 1-F_1(\tau) \big) \big( -f_0(\tau) \big) d\tau \\
&= \int_{-\infty}^{+\infty} \big( 1-F_1(\tau) \big) f_0(\tau) d\tau
\end{align}$$
where $\tau$ stands for threshold. One can apply the interpretation in @alebu's answer to the last expression.
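In this standard notation the result is easy to verify numerically: for tie-free scores, the trapezoidal area under the empirical ROC curve equals the fraction of (positive, negative) pairs in which the positive case scores higher. The score distributions below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 2000)     # scores of class 0 (negative)
pos = rng.normal(1.0, 1.0, 2000)     # scores of class 1 (positive)

# sweep the threshold tau through every observed score, high to low
thr = np.sort(np.concatenate(([np.inf], neg, pos, [-np.inf])))[::-1]
tpr = np.array([(pos > t).mean() for t in thr])   # y(tau) = 1 - F_1(tau)
fpr = np.array([(neg > t).mean() for t in thr])   # x(tau) = 1 - F_0(tau)

# trapezoidal area under the (fpr, tpr) staircase
auc_trapezoid = np.sum((fpr[1:] - fpr[:-1]) * (tpr[1:] + tpr[:-1]) / 2.0)

# probability that a random positive outscores a random negative
auc_rank = (pos[:, None] > neg[None, :]).mean()
```

The agreement is exact up to floating-point rounding, since each horizontal step of the staircase contributes $TPR \cdot \Delta FPR$, which is precisely one negative case's share of the pairwise comparisons.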
|
How to derive the probabilistic interpretation of the AUC?
|
@alebu's answer is great. But its notation is nonstandard and uses 0 for the positive class and 1 for the negative class. Below are the results for the standard notation (0 for the negative class and
|
How to derive the probabilistic interpretation of the AUC?
@alebu's answer is great. But its notation is nonstandard and uses 0 for the positive class and 1 for the negative class. Below are the results for the standard notation (0 for the negative class and 1 for the positive class):
Pdf and cdf of the score for negative class: $f_0(s)$ and $F_0(s)$
Pdf and cdf of the score for positive class: $f_1(s)$ and $F_1(s)$
FPR = $x(s) = 1-F_0(s)$
TPR = $y(s) = 1-F_1(s)$
Now, using that $dx(\tau)=x'(\tau)d\tau$
and $y(x(\tau))=y(\tau)$ by definition of the ROC curve.
$$\begin{align}
\text{AUC} &= \int_0^1 y(x) dx\\
&= \int_0^1 y(x(\tau)) dx(\tau) \\
&= \int_{+\infty}^{-\infty} y(\tau) x'(\tau) d\tau \\
&= \int_{+\infty}^{-\infty} \big( 1-F_1(\tau) \big) \big( -f_0(\tau) \big) d\tau \\
&= \int_{-\infty}^{+\infty} \big( 1-F_1(\tau) \big) f_0(\tau) d\tau
\end{align}$$
where $\tau$ stands for threshold. One can apply the interpretation in @alebu's answer to the last expression.
|
How to derive the probabilistic interpretation of the AUC?
@alebu's answer is great. But its notation is nonstandard and uses 0 for the positive class and 1 for the negative class. Below are the results for the standard notation (0 for the negative class and
|
9,559
|
How to derive the probabilistic interpretation of the AUC?
|
The way to calculate AUC-ROC is to plot out the TPR and FPR as the threshold $\tau$ is changed and calculate the area under that curve. But why is this area under the curve the same as this probability? Let's assume the following:
$A$ is the distribution of scores the model produces for data points that are actually in the positive class.
$B$ is the distribution of scores the model produces for data points that are actually in the negative class (we want this to be to the left of $A$).
$\tau$ is the cutoff threshold. If a data point gets a score greater than this, it's predicted as belonging to the positive class. Otherwise, it's predicted to be in the negative class.
Note that the TPR (recall) is given by: $P(A>\tau)$ and the FPR (fallout) is given by: $P(B>\tau)$.
Now, we plot the TPR on the y-axis and FPR on the x-axis, draw the curve for various $\tau$ and calculate the area under this curve ($AUC$).
We get:
$$AUC = \int_0^1 TPR(x)dx = \int_0^1 P(A>\tau(x))dx$$
where $x$ is the FPR.
Now, one way to calculate this integral is to consider $x$ as belonging to a uniform distribution. In that case, it simply becomes the expectation of the $TPR$.
$$AUC = E_x[P(A>\tau(x))] \tag{1}$$
if we consider $x \sim U[0,1)$ .
Now, $x$ here was just the $FPR$
$$x=FPR = P(B>\tau(x))$$
Since we considered $x$ to be from a uniform distribution,
$$P(B>\tau(x)) \sim U$$
$$\Rightarrow P(B<\tau(x)) \sim (1-U) \sim U$$
\begin{equation}\Rightarrow F_B(\tau(x)) \sim U \tag{2}\end{equation}
But we know from the inverse transform law that for any continuous random variable $X$, if $F_X(Y) \sim U$ then $Y \sim X$. This follows since applying a random variable's own CDF to it yields a uniform:
$$P(F_X(X) \le u) = P(X \le F_X^{-1}(u)) = F_X(F_X^{-1}(u)) = u, \qquad u \in (0,1),$$
which is precisely the CDF of the uniform distribution.
Using this fact in equation (2) gives us:
$$\tau(x) \sim B$$
Substituting this into equation (1) we get:
$$AUC=E_x(P(A>B))=P(A>B)$$
In other words, the area under the curve is the probability that a random positive sample will have a higher score than a random negative sample.
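The inverse-transform step behind equation (2) — that applying a CDF to its own random variable yields a uniform — is easy to check directly. The exponential distribution below is an arbitrary choice:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
x = rng.exponential(scale=2.0, size=100000)
u = stats.expon.cdf(x, scale=2.0)   # F_X applied to X itself: should be U[0,1]
```

The sample mean and variance of `u` match the uniform's $1/2$ and $1/12$, as the argument requires.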
|
How to derive the probabilistic interpretation of the AUC?
|
The way to calculate AUC-ROC is to plot out the TPR and FPR as the threshold, $\tau$ is changed and calculate the area under that curve. But, why is this area under the curve the same as this probabil
|
How to derive the probabilistic interpretation of the AUC?
The way to calculate AUC-ROC is to plot out the TPR and FPR as the threshold $\tau$ is changed and calculate the area under that curve. But why is this area under the curve the same as this probability? Let's assume the following:
$A$ is the distribution of scores the model produces for data points that are actually in the positive class.
$B$ is the distribution of scores the model produces for data points that are actually in the negative class (we want this to be to the left of $A$).
$\tau$ is the cutoff threshold. If a data point gets a score greater than this, it's predicted as belonging to the positive class. Otherwise, it's predicted to be in the negative class.
Note that the TPR (recall) is given by: $P(A>\tau)$ and the FPR (fallout) is given by: $P(B>\tau)$.
Now, we plot the TPR on the y-axis and FPR on the x-axis, draw the curve for various $\tau$ and calculate the area under this curve ($AUC$).
We get:
$$AUC = \int_0^1 TPR(x)dx = \int_0^1 P(A>\tau(x))dx$$
where $x$ is the FPR.
Now, one way to calculate this integral is to consider $x$ as belonging to a uniform distribution. In that case, it simply becomes the expectation of the $TPR$.
$$AUC = E_x[P(A>\tau(x))] \tag{1}$$
if we consider $x \sim U[0,1)$ .
Now, $x$ here was just the $FPR$
$$x=FPR = P(B>\tau(x))$$
Since we considered $x$ to be from a uniform distribution,
$$P(B>\tau(x)) \sim U$$
$$\Rightarrow P(B<\tau(x)) \sim (1-U) \sim U$$
\begin{equation}\Rightarrow F_B(\tau(x)) \sim U \tag{2}\end{equation}
But we know from the inverse transform law that for any continuous random variable $X$, if $F_X(Y) \sim U$ then $Y \sim X$. This follows since applying a random variable's own CDF to it yields a uniform:
$$P(F_X(X) \le u) = P(X \le F_X^{-1}(u)) = F_X(F_X^{-1}(u)) = u, \qquad u \in (0,1),$$
which is precisely the CDF of the uniform distribution.
Using this fact in equation (2) gives us:
$$\tau(x) \sim B$$
Substituting this into equation (1) we get:
$$AUC=E_x(P(A>B))=P(A>B)$$
In other words, the area under the curve is the probability that a random positive sample will have a higher score than a random negative sample.
|
How to derive the probabilistic interpretation of the AUC?
The way to calculate AUC-ROC is to plot out the TPR and FPR as the threshold, $\tau$ is changed and calculate the area under that curve. But, why is this area under the curve the same as this probabil
|
9,560
|
How to derive the probabilistic interpretation of the AUC?
|
Turns out I wrote a medium article just for that! Here it is :
https://medium.com/@nathanaim/mathematics-behind-roc-auc-interpretation-e4e6f202a015
TL;DR : to go through the end of the demonstration, one needs to use the convolution theorem.
If you don't want to change sites, here is the full trick.
We want to show that, for a given binary classifier, :
$$ROC-AUC = P\left(X_1>X_0\right) = P\left(X_1-X_0>0\right)$$
where :
X₁ is a continuous random variable giving the “score” output by our binary classifier for a randomly chosen positive sample
X₀ is a continuous random variable giving the “score” output by our binary classifier for a randomly chosen negative sample
Definitions and preliminary results
First, some definitions :
Let X₁ and X₀ be defined as above
Let f₁ and f₀ be, respectively, the density function of X₁ and X₀
Let F₁ and F₀ be, respectively, the cumulative distribution function (CDF) of X₁ and X₀
True Positive Rate (TPR) and False Positive Rate (FPR) have their usual meaning, i.e. :
$$TPR=\frac{TP}{P}\:\:\,FPR=\frac{FP}{N}$$
We can already observe that, for a classifier threshold T, a randomly chosen positive sample would be correctly classified (true positive) if X₁>T. So, for a randomly chosen positive sample, the probability of correctly classifying it is P(X₁>T). By definition of the TPR, it corresponds to the probability of correctly classifying a randomly chosen positive sample, so TPR(T) = P(X₁>T) = 1- P(X₁⩽ T) = 1-F₁(T). (1)
This also means, by definition of the density function, that :
$$TPR(T) = \int\limits_{T}^{+\infty} f_1(x)\: \mathrm{d}x$$
Similarly, we can show that FPR(T) = 1- F₀(T) (2)
Demonstration
Now let’s dig into the calculus!
By definition of the ROC, we have that :
$$ROC-AUC = \int\limits_0^1 TPR(FPR)\: \mathrm{d}FPR$$
$$= \int\limits_0^1 TPR(FPR^{-1}(x))\: \mathrm{d}x$$
By using this change in variable :
$$T=FPR^{-1}(x)\iff\ x=FPR(T)$$
the integral becomes :
$$\int\limits_{+\infty}^{-\infty} TPR(T) \times FPR'(T)\: \mathrm{d}T$$
Now, thanks to (2) we know that we can express this integral as :
$$\int\limits_{+\infty}^{-\infty} TPR(T) \times (-f_0(T))\: \mathrm{d}T = \int\limits_{-\infty}^{+\infty} TPR(T) \times f_0(T)\: \mathrm{d}T$$
Thanks to (1) we know that this can be expressed as :
$$\int\limits_{-\infty}^{+\infty} \int\limits_{T}^{+\infty} f_1(x)\: \mathrm{d}x \times f_0(T)\: \mathrm{d}T$$
By using this change in variable for the inner integral :
$$v=x-T$$
the integral becomes :
$$\int\limits_{-\infty}^{+\infty} \int\limits_{0}^{+\infty} f_1(v+T)\: \mathrm{d}v \times f_0(T)\: \mathrm{d}T$$
$$= \int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_0(T)\: \mathrm{d}T \times \: f_1(v+T)\: \mathrm{d}v$$
and by using this change in variable for the inner integral :
$$u=v+T$$
it becomes :
$$\int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u \: \mathrm{d}v$$
Do you get where we’re going? Yes, right to the convolution theorem!
First, let’s point out that since f₀(t) is a density function of X₀, f₀(-t) is a density function of (-X₀).
Then, according to the convolution theorem and assuming the convergence, a density of X₁- X₀=X₁+(- X₀) is :
$$\int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u$$
This means that :
$$P\left(X_1>X_0\right)=P\left(X_1-X_0>0\right)$$
$$=\int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u \: \mathrm{d}v$$
And eventually we have that :
$$P\left(X_1>X_0\right) = ROC - AUC$$
Thanks for reading this far! Hope I helped :)
|
How to derive the probabilistic interpretation of the AUC?
|
Turns out I wrote a medium article just for that! Here it is :
https://medium.com/@nathanaim/mathematics-behind-roc-auc-interpretation-e4e6f202a015
TL;DR : to go through the end of the demonstration,
|
How to derive the probabilistic interpretation of the AUC?
Turns out I wrote a medium article just for that! Here it is :
https://medium.com/@nathanaim/mathematics-behind-roc-auc-interpretation-e4e6f202a015
TL;DR : to go through the end of the demonstration, one needs to use the convolution theorem.
If you don't want to change sites, here is the full trick.
We want to show that, for a given binary classifier, :
$$ROC-AUC = P\left(X_1>X_0\right) = P\left(X_1-X_0>0\right)$$
where :
X₁ is a continuous random variable giving the “score” output by our binary classifier for a randomly chosen positive sample
X₀ is a continuous random variable giving the “score” output by our binary classifier for a randomly chosen negative sample
Definitions and preliminary results
First, some definitions :
Let X₁ and X₀ be defined as above
Let f₁ and f₀ be, respectively, the density function of X₁ and X₀
Let F₁ and F₀ be, respectively, the cumulative distribution function (CDF) of X₁ and X₀
True Positive Rate (TPR) and False Positive Rate (FPR) have their usual meaning, i.e. :
$$TPR=\frac{TP}{P}\:\:\,FPR=\frac{FP}{N}$$
We can already observe that, for a classifier threshold T, a randomly chosen positive sample is correctly classified (a true positive) exactly when X₁>T. The TPR is, by definition, the probability of correctly classifying a randomly chosen positive sample, so TPR(T) = P(X₁>T) = 1- P(X₁⩽ T) = 1-F₁(T). (1)
This also means, by definition of the density function, that :
$$TPR(T) = \int\limits_{T}^{+\infty} f_1(x)\: \mathrm{d}x$$
Similarly, we can show that FPR(T) = 1- F₀(T) (2)
Demonstration
Now let’s dig into the calculus!
By definition of the ROC, we have that :
$$ROC-AUC = \int\limits_0^1 TPR(FPR)\: \mathrm{d}FPR$$
$$= \int\limits_0^1 TPR(FPR^{-1}(x))\: \mathrm{d}x$$
By using this change in variable :
$$T=FPR^{-1}(x)\iff\ x=FPR(T)$$
the integral becomes :
$$\int\limits_{+\infty}^{-\infty} TPR(T) \times FPR'(T)\: \mathrm{d}T$$
Now, thanks to (2) we know that we can express this integral as :
$$\int\limits_{+\infty}^{-\infty} TPR(T) \times (-f_0(T))\: \mathrm{d}T = \int\limits_{-\infty}^{+\infty} TPR(T) \times f_0(T)\: \mathrm{d}T$$
Thanks to (1) we know that this can be expressed as :
$$\int\limits_{-\infty}^{+\infty} \int\limits_{T}^{+\infty} f_1(x)\: \mathrm{d}x \times f_0(T)\: \mathrm{d}T$$
By using this change in variable for the inner integral :
$$v=x-T$$
the integral becomes :
$$\int\limits_{-\infty}^{+\infty} \int\limits_{0}^{+\infty} f_1(v+T)\: \mathrm{d}v \times f_0(T)\: \mathrm{d}T$$
$$= \int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_1(v+T)\, f_0(T)\: \mathrm{d}T \: \mathrm{d}v$$
and by using this change in variable for the inner integral :
$$u=v+T$$
it becomes :
$$\int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u \: \mathrm{d}v$$
Do you get where we’re going? Yes, right to the convolution theorem!
First, let’s point out that since f₀(t) is a density function of X₀, f₀(-t) is a density function of (-X₀).
Then, according to the convolution theorem and assuming convergence, a density of X₁ − X₀ = X₁ + (−X₀), evaluated at v, is :
$$\int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u$$
This means that :
$$P\left(X_1>X_0\right)=P\left(X_1-X_0>0\right)$$
$$=\int\limits_{0}^{+\infty} \int\limits_{-\infty}^{+\infty} f_1(u)\: \times f_0(u-v)\: \mathrm{d}u \: \mathrm{d}v$$
And eventually we have that :
$$P\left(X_1>X_0\right) = ROC - AUC$$
Thanks for reading this far! Hope I helped :)
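As a numerical sanity check of this identity, here is a short Python sketch (my own illustration, not from the article): it simulates Gaussian scores for the two classes, estimates P(X₁ > X₀) over all (positive, negative) pairs, and compares it with the area under the empirical ROC staircase. The score distributions and sample sizes are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(42)
x1 = rng.normal(1.0, 1.0, size=2000)   # scores of positive samples
x0 = rng.normal(0.0, 1.0, size=2000)   # scores of negative samples

# P(X1 > X0), estimated over all (positive, negative) pairs
p_pairwise = np.mean(x1[:, None] > x0[None, :])

# empirical ROC: sweep the threshold from +inf down through every observed score
thresholds = np.concatenate(
    [[np.inf], np.sort(np.concatenate([x1, x0]))[::-1], [-np.inf]]
)
tpr = np.array([np.mean(x1 > t) for t in thresholds])
fpr = np.array([np.mean(x0 > t) for t in thresholds])

# area under the ROC staircase (only horizontal steps carry area)
auc = np.sum(np.diff(fpr) * tpr[1:])
```

With continuous (tie-free) scores the two quantities agree up to floating-point error, exactly as the derivation predicts.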
|
How to derive the probabilistic interpretation of the AUC?
Turns out I wrote a medium article just for that! Here it is :
https://medium.com/@nathanaim/mathematics-behind-roc-auc-interpretation-e4e6f202a015
TL;DR : to go through the end of the demonstration,
|
9,561
|
Collinearity diagnostics problematic only when the interaction term is included
|
Yes, this is usually the case with non-centered interactions. A quick look at what happens to the correlation of two independent variables and their "interaction"
set.seed(12345)
a = rnorm(10000,20,2)
b = rnorm(10000,10,2)
cor(a,b)
cor(a,a*b)
> cor(a,b)
[1] 0.01564907
> cor(a,a*b)
[1] 0.4608877
And then when you center them:
c = a - 20
d = b - 10
cor(c,d)
cor(c,c*d)
> cor(c,d)
[1] 0.01564907
> cor(c,c*d)
[1] 0.001908758
Incidentally, the same can happen with including polynomial terms (i.e., $X,~X^2,~...$) without first centering.
So you can give that a shot with your pair.
As to why centering helps, let's go back to the definition of covariance
\begin{align}
\text{Cov}(X,XY) &= E[(X-E(X))(XY-E(XY))] \\
&= E[(X-\mu_x)(XY-\mu_{xy})] \\
&= E[X^2Y-X\mu_{xy}-XY\mu_x+\mu_x\mu_{xy}] \\
&= E[X^2Y]-E[X]\mu_{xy}-E[XY]\mu_x+\mu_x\mu_{xy} \\
\end{align}
Even given independence of X and Y
\begin{align}
\qquad\qquad\qquad\, &= E[X^2]E[Y]-\mu_x\mu_x\mu_y-\mu_x\mu_y\mu_x+\mu_x\mu_x\mu_y \\
&= (\sigma_x^2+\mu_x^2)\mu_y-\mu_x^2\mu_y \\
&= \sigma_x^2\mu_y \\
\end{align}
This doesn't relate directly to your regression problem, since you probably don't have completely independent $X$ and $Y$, and since correlation between two explanatory variables doesn't always result in multicollinearity issues in regression. But it does show how an interaction between two non-centered independent variables causes correlation to show up, and that correlation could cause multicollinearity issues.
Intuitively to me, having non-centered variables interact simply means that when $X$ is big, then $XY$ is also going to be bigger on an absolute scale irrespective of $Y$, and so $X$ and $XY$ will end up correlated, and similarly for $Y$.
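The $\text{Cov}(X,XY)=\sigma_x^2\mu_y$ result is easy to confirm by simulation; here is a quick Python sketch (my addition; the means and standard deviations mirror the R example above). Note that it is centering $Y$ that kills this particular covariance, since the formula is $\sigma_x^2\mu_y$:

```python
import numpy as np

rng = np.random.default_rng(12345)
n = 1_000_000
x = rng.normal(20.0, 2.0, size=n)   # mu_x = 20, sigma_x = 2
y = rng.normal(10.0, 2.0, size=n)   # mu_y = 10, independent of x

cov_empirical = np.cov(x, x * y)[0, 1]
cov_theory = 2.0 ** 2 * 10.0        # sigma_x^2 * mu_y = 40

# after centering y, mu_y = 0, so Cov(X, X*(Y - mu_y)) vanishes
cov_centered = np.cov(x, x * (y - 10.0))[0, 1]
```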
|
Collinearity diagnostics problematic only when the interaction term is included
|
Yes, this is usually the case with non-centered interactions. A quick look at what happens to the correlation of two independent variables and their "interaction"
set.seed(12345)
a = rnorm(10000,20,2)
|
Collinearity diagnostics problematic only when the interaction term is included
Yes, this is usually the case with non-centered interactions. A quick look at what happens to the correlation of two independent variables and their "interaction"
set.seed(12345)
a = rnorm(10000,20,2)
b = rnorm(10000,10,2)
cor(a,b)
cor(a,a*b)
> cor(a,b)
[1] 0.01564907
> cor(a,a*b)
[1] 0.4608877
And then when you center them:
c = a - 20
d = b - 10
cor(c,d)
cor(c,c*d)
> cor(c,d)
[1] 0.01564907
> cor(c,c*d)
[1] 0.001908758
Incidentally, the same can happen with including polynomial terms (i.e., $X,~X^2,~...$) without first centering.
So you can give that a shot with your pair.
As to why centering helps, let's go back to the definition of covariance
\begin{align}
\text{Cov}(X,XY) &= E[(X-E(X))(XY-E(XY))] \\
&= E[(X-\mu_x)(XY-\mu_{xy})] \\
&= E[X^2Y-X\mu_{xy}-XY\mu_x+\mu_x\mu_{xy}] \\
&= E[X^2Y]-E[X]\mu_{xy}-E[XY]\mu_x+\mu_x\mu_{xy} \\
\end{align}
Even given independence of X and Y
\begin{align}
\qquad\qquad\qquad\, &= E[X^2]E[Y]-\mu_x\mu_x\mu_y-\mu_x\mu_y\mu_x+\mu_x\mu_x\mu_y \\
&= (\sigma_x^2+\mu_x^2)\mu_y-\mu_x^2\mu_y \\
&= \sigma_x^2\mu_y \\
\end{align}
This doesn't relate directly to your regression problem, since you probably don't have completely independent $X$ and $Y$, and since correlation between two explanatory variables doesn't always result in multicollinearity issues in regression. But it does show how an interaction between two non-centered independent variables causes correlation to show up, and that correlation could cause multicollinearity issues.
Intuitively to me, having non-centered variables interact simply means that when $X$ is big, then $XY$ is also going to be bigger on an absolute scale irrespective of $Y$, and so $X$ and $XY$ will end up correlated, and similarly for $Y$.
|
Collinearity diagnostics problematic only when the interaction term is included
Yes, this is usually the case with non-centered interactions. A quick look at what happens to the correlation of two independent variables and their "interaction"
set.seed(12345)
a = rnorm(10000,20,2)
|
9,562
|
Collinearity diagnostics problematic only when the interaction term is included
|
I've found the following publications on this topic useful:
Robinson & Schumacker (2009): Interaction effects: centering, variance inflation factor, and interpretation issues
'The effects of predictor scaling on coefficients of regression equations (centered versus uncentered solutions) and higher order interaction effects (3-way interactions; categorical by continuous effects) has thoughtfully been covered by Aiken and West (1991). Their example illustrates that considerable multicollinearity is introduced into a regression equation with an interaction term when the variables are not centered.'
Afshartous & Preston (2011): Key results of interaction models with centering
'Motivations for employing variable centering include enhanced interpretability of coefficients and reduced numerical instability for estimation associated with multicollinearity.'
Obviously Aiken and West (1991) also cover this topic, but I don't have their book.
|
Collinearity diagnostics problematic only when the interaction term is included
|
I've found the following publications on this topic useful:
Robinson & Schumacker (2009): Interaction effects: centering, variance inflation factor, and interpretation issues
'The effects of predictor
|
Collinearity diagnostics problematic only when the interaction term is included
I've found the following publications on this topic useful:
Robinson & Schumacker (2009): Interaction effects: centering, variance inflation factor, and interpretation issues
'The effects of predictor scaling on coefficients of regression equations (centered versus uncentered solutions) and higher order interaction effects (3-way interactions; categorical by continuous effects) has thoughtfully been covered by Aiken and West (1991). Their example illustrates that considerable multicollinearity is introduced into a regression equation with an interaction term when the variables are not centered.'
Afshartous & Preston (2011): Key results of interaction models with centering
'Motivations for employing variable centering include enhanced interpretability of coefficients and reduced numerical instability for estimation associated with multicollinearity.'
Obviously Aiken and West (1991) also cover this topic, but I don't have their book.
|
Collinearity diagnostics problematic only when the interaction term is included
I've found the following publications on this topic useful:
Robinson & Schumacker (2009): Interaction effects: centering, variance inflation factor, and interpretation issues
'The effects of predictor
|
9,563
|
In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?
|
If you are maximising a likelihood then the covariance matrix of the estimates is (asymptotically) the inverse of the negative of the Hessian. The standard errors are the square roots of the diagonal elements of the covariance (from elsewhere on the web!, from Prof. Thomas Lumley and Spencer Graves, Eng.).
For a 95% confidence interval
fit<-optim(pars,li_func,control=list("fnscale"=-1),hessian=TRUE,...)
fisher_info<-solve(-fit$hessian)
prop_sigma<-sqrt(diag(fisher_info))
upper<-fit$par+1.96*prop_sigma
lower<-fit$par-1.96*prop_sigma
interval<-data.frame(value=fit$par, upper=upper, lower=lower)
Note that:
If you are maximizing the log(likelihood), then the NEGATIVE of the
hessian is the "observed information" (such as here).
If you MINIMIZE a "deviance" = (-2)*log(likelihood), then the HALF of the hessian is the observed information.
In the unlikely event that you are maximizing the
likelihood itself, you need to divide the negative of the hessian by
the likelihood to get the observed information.
See this for further limitations due to optimization routine used.
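For readers working in Python rather than R, here is a rough equivalent sketch (my addition, not part of the original answer): it minimizes a negative log-likelihood with scipy, approximates the Hessian by central finite differences (in the spirit of `optim(..., hessian=TRUE)`), and inverts it to get standard errors. The normal-sample example, the log-sigma parametrization, and the step size `eps` are all illustrative choices.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
data = rng.normal(loc=5.0, scale=2.0, size=400)

def neg_loglik(par):
    # negative log-likelihood of N(mu, sigma^2), up to an additive constant;
    # sigma is log-parametrized so it stays positive during the search
    mu, log_sigma = par
    sigma = np.exp(log_sigma)
    return np.sum(np.log(sigma) + 0.5 * ((data - mu) / sigma) ** 2)

def num_hessian(f, x, eps=1e-4):
    # central finite-difference Hessian, like the one optim() returns
    k = len(x)
    H = np.zeros((k, k))
    for i in range(k):
        for j in range(k):
            ei, ej = np.zeros(k), np.zeros(k)
            ei[i], ej[j] = eps, eps
            H[i, j] = (f(x + ei + ej) - f(x + ei - ej)
                       - f(x - ei + ej) + f(x - ei - ej)) / (4 * eps ** 2)
    return H

fit = minimize(neg_loglik, x0=[0.0, 0.0], method="Nelder-Mead")
# Hessian of the *negative* log-likelihood is the observed information,
# so its inverse estimates the covariance matrix of the MLE
cov = np.linalg.inv(num_hessian(neg_loglik, fit.x))
se = np.sqrt(np.diag(cov))
lower, upper = fit.x - 1.96 * se, fit.x + 1.96 * se
```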
|
In R, given an output from optim with a hessian matrix, how to calculate parameter confidence interv
|
If you are maximising a likelihood then the covariance matrix of the estimates is (asymptotically) the inverse of the negative of the Hessian. The standard errors are the square roots of the diagonal
|
In R, given an output from optim with a hessian matrix, how to calculate parameter confidence intervals using the hessian matrix?
If you are maximising a likelihood then the covariance matrix of the estimates is (asymptotically) the inverse of the negative of the Hessian. The standard errors are the square roots of the diagonal elements of the covariance (from elsewhere on the web!, from Prof. Thomas Lumley and Spencer Graves, Eng.).
For a 95% confidence interval
fit<-optim(pars,li_func,control=list("fnscale"=-1),hessian=TRUE,...)
fisher_info<-solve(-fit$hessian)
prop_sigma<-sqrt(diag(fisher_info))
upper<-fit$par+1.96*prop_sigma
lower<-fit$par-1.96*prop_sigma
interval<-data.frame(value=fit$par, upper=upper, lower=lower)
Note that:
If you are maximizing the log(likelihood), then the NEGATIVE of the
hessian is the "observed information" (such as here).
If you MINIMIZE a "deviance" = (-2)*log(likelihood), then the HALF of the hessian is the observed information.
In the unlikely event that you are maximizing the
likelihood itself, you need to divide the negative of the hessian by
the likelihood to get the observed information.
See this for further limitations due to optimization routine used.
|
In R, given an output from optim with a hessian matrix, how to calculate parameter confidence interv
If you are maximising a likelihood then the covariance matrix of the estimates is (asymptotically) the inverse of the negative of the Hessian. The standard errors are the square roots of the diagonal
|
9,564
|
Pseudo R squared formula for GLMs
|
There are a large number of pseudo-$R^2$s for GLiMs. The excellent UCLA statistics help site has a comprehensive overview of them here. The one you list is called McFadden's pseudo-$R^2$. Relative to UCLA's typology, it is like $R^2$ in the sense that it indexes the improvement of the fitted model over the null model. Some statistical software, notably SPSS, if I recall correctly, print out McFadden's pseudo-$R^2$ by default with the results from some analyses like logistic regression, so I suspect it is quite common, although the Cox & Snell and Nagelkerke pseudo-$R^2$s may be even more so. However, McFadden's pseudo-$R^2$ does not have all of the properties of $R^2$ (no pseudo-$R^2$ does). If someone is interested in using a pseudo-$R^2$ to understand a model, I strongly recommend reading this excellent CV thread: Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? (For what it's worth, $R^2$ itself is slipperier than people realize, a great demonstration of which can be seen in @whuber's answer here: Is $R^2$ useful or dangerous?)
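To make McFadden's pseudo-$R^2 = 1 - \ln\hat L(M_{full})/\ln\hat L(M_{null})$ concrete, here is a small Python sketch (my addition; the simulated data and the scipy-based fitting are illustrative, not tied to any software mentioned above):

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
x = rng.normal(size=500)
y = rng.binomial(1, expit(0.5 + 1.5 * x))

def nll(beta, X):
    # negative log-likelihood of a logistic regression
    p = np.clip(expit(X @ beta), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

X_full = np.column_stack([np.ones_like(x), x])   # intercept + slope
X_null = np.ones((len(x), 1))                    # intercept only
llf = -minimize(nll, np.zeros(2), args=(X_full,)).fun
llnull = -minimize(nll, np.zeros(1), args=(X_null,)).fun
mcfadden = 1 - llf / llnull
```

Because the full model cannot fit worse than the null model, `mcfadden` lies between 0 and 1, but unlike OLS $R^2$ it is not a proportion of variance explained.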
|
Pseudo R squared formula for GLMs
|
There are a large number of pseudo-$R^2$s for GLiMs. The excellent UCLA statistics help site has a comprehensive overview of them here. The one you list is called McFadden's pseudo-$R^2$. Relative
|
Pseudo R squared formula for GLMs
There are a large number of pseudo-$R^2$s for GLiMs. The excellent UCLA statistics help site has a comprehensive overview of them here. The one you list is called McFadden's pseudo-$R^2$. Relative to UCLA's typology, it is like $R^2$ in the sense that it indexes the improvement of the fitted model over the null model. Some statistical software, notably SPSS, if I recall correctly, print out McFadden's pseudo-$R^2$ by default with the results from some analyses like logistic regression, so I suspect it is quite common, although the Cox & Snell and Nagelkerke pseudo-$R^2$s may be even more so. However, McFadden's pseudo-$R^2$ does not have all of the properties of $R^2$ (no pseudo-$R^2$ does). If someone is interested in using a pseudo-$R^2$ to understand a model, I strongly recommend reading this excellent CV thread: Which pseudo-$R^2$ measure is the one to report for logistic regression (Cox & Snell or Nagelkerke)? (For what it's worth, $R^2$ itself is slipperier than people realize, a great demonstration of which can be seen in @whuber's answer here: Is $R^2$ useful or dangerous?)
|
Pseudo R squared formula for GLMs
There are a large number of pseudo-$R^2$s for GLiMs. The excellent UCLA statistics help site has a comprehensive overview of them here. The one you list is called McFadden's pseudo-$R^2$. Relative
|
9,565
|
Pseudo R squared formula for GLMs
|
R gives null and residual deviance in the output to glm so that you can make exactly this sort of comparison (see the last two lines below).
> x = log(1:10)
> y = 1:10
> glm(y ~ x, family = poisson)
>Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
5.564e-13 1.000e+00
Degrees of Freedom: 9 Total (i.e. Null); 8 Residual
Null Deviance: 16.64
Residual Deviance: 2.887e-15 AIC: 37.97
You can also pull these values out of the object with model$null.deviance and model$deviance
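The printed null deviance can also be reproduced by hand from the Poisson deviance formula $D = 2\sum_i [y_i \log(y_i/\hat\mu_i) - (y_i - \hat\mu_i)]$; a Python sketch (my addition) for this exact example:

```python
import numpy as np

y = np.arange(1, 11).astype(float)   # same response as in the R example
mu_null = np.full(10, y.mean())      # intercept-only (null) fit: mu = ybar
null_dev = 2 * np.sum(y * np.log(y / mu_null) - (y - mu_null))
# null_dev comes out to about 16.64, matching "Null Deviance: 16.64"

# here the fitted model is exact (mu_i = exp(0 + 1*log(i)) = y_i), so the
# residual deviance is zero up to floating point, as in "2.887e-15"
```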
|
Pseudo R squared formula for GLMs
|
R gives null and residual deviance in the output to glm so that you can make exactly this sort of comparison (see the last two lines below).
> x = log(1:10)
> y = 1:10
> glm(y ~ x, family = poisson)
|
Pseudo R squared formula for GLMs
R gives null and residual deviance in the output to glm so that you can make exactly this sort of comparison (see the last two lines below).
> x = log(1:10)
> y = 1:10
> glm(y ~ x, family = poisson)
>Call: glm(formula = y ~ x, family = poisson)
Coefficients:
(Intercept) x
5.564e-13 1.000e+00
Degrees of Freedom: 9 Total (i.e. Null); 8 Residual
Null Deviance: 16.64
Residual Deviance: 2.887e-15 AIC: 37.97
You can also pull these values out of the object with model$null.deviance and model$deviance
|
Pseudo R squared formula for GLMs
R gives null and residual deviance in the output to glm so that you can make exactly this sort of comparison (see the last two lines below).
> x = log(1:10)
> y = 1:10
> glm(y ~ x, family = poisson)
|
9,566
|
Pseudo R squared formula for GLMs
|
The formula you propose was suggested by Maddala (1983) and Magee (1990) to estimate R squared for logistic models. Therefore I don't think it's applicable to all glm models (see the book Modern Regression Methods by Thomas P. Ryan, page 266).
If you make a fake data set, you will see that it underestimates the R squared... for a gaussian glm, for example.
I think for a gaussian glm you can use the basic (lm) R squared formula...
R2gauss<- function(y,model){
moy<-mean(y)
N<- length(y)
p<-length(model$coefficients)-1
SSres<- sum((y-predict(model))^2)
SStot<-sum((y-moy)^2)
R2<-1-(SSres/SStot)
Rajust<-1-(((1-R2)*(N-1))/(N-p-1))
return(data.frame(R2,Rajust,SSres,SStot))
}
And for the logistic (or binomial family in r ) I would use the formula you proposed...
R2logit<- function(y,model){
R2<- 1-(model$deviance/model$null.deviance)
return(R2)
}
So far for poisson glm I have used the equation from this post.
https://stackoverflow.com/questions/23067475/how-do-i-obtain-pseudo-r2-measures-in-stata-when-using-glm-regression
There is also a great article on pseudo R2 available on ResearchGate... here is the link:
https://www.researchgate.net/publication/222802021_Pseudo_R-squared_measures_for_Poisson_regression_models_with_over-_or_underdispersion
I hope this helps.
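For completeness, here is a Python transcription of the R2gauss idea above (my addition; it takes the predictions directly instead of a fitted model object, and `n_params` counts the slope coefficients, excluding the intercept, as in R2gauss):

```python
import numpy as np

def r2_gauss(y, y_pred, n_params):
    # R-squared and adjusted R-squared from observations and predictions
    y = np.asarray(y, dtype=float)
    ss_res = np.sum((y - y_pred) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    r2 = 1 - ss_res / ss_tot
    n = len(y)
    r2_adj = 1 - (1 - r2) * (n - 1) / (n - n_params - 1)
    return r2, r2_adj

x = np.arange(10, dtype=float)
y = 2 * x + 3
r2, r2_adj = r2_gauss(y, 2 * x + 3, n_params=1)   # perfect fit -> both 1
```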
|
Pseudo R squared formula for GLMs
|
The formula you proposed have been proposed by Maddala (1983) and Magee (1990) to estimate R squared on logistic model. Therefore I don't think it's applicable to all glm model (see the book Modern Re
|
Pseudo R squared formula for GLMs
The formula you propose was suggested by Maddala (1983) and Magee (1990) to estimate R squared for logistic models. Therefore I don't think it's applicable to all glm models (see the book Modern Regression Methods by Thomas P. Ryan, page 266).
If you make a fake data set, you will see that it underestimates the R squared... for a gaussian glm, for example.
I think for a gaussian glm you can use the basic (lm) R squared formula...
R2gauss<- function(y,model){
moy<-mean(y)
N<- length(y)
p<-length(model$coefficients)-1
SSres<- sum((y-predict(model))^2)
SStot<-sum((y-moy)^2)
R2<-1-(SSres/SStot)
Rajust<-1-(((1-R2)*(N-1))/(N-p-1))
return(data.frame(R2,Rajust,SSres,SStot))
}
And for the logistic (or binomial family in r ) I would use the formula you proposed...
R2logit<- function(y,model){
R2<- 1-(model$deviance/model$null.deviance)
return(R2)
}
So far for poisson glm I have used the equation from this post.
https://stackoverflow.com/questions/23067475/how-do-i-obtain-pseudo-r2-measures-in-stata-when-using-glm-regression
There is also a great article on pseudo R2 available on ResearchGate... here is the link:
https://www.researchgate.net/publication/222802021_Pseudo_R-squared_measures_for_Poisson_regression_models_with_over-_or_underdispersion
I hope this helps.
|
Pseudo R squared formula for GLMs
The formula you proposed have been proposed by Maddala (1983) and Magee (1990) to estimate R squared on logistic model. Therefore I don't think it's applicable to all glm model (see the book Modern Re
|
9,567
|
Pseudo R squared formula for GLMs
|
The R package modEvA calculates D-Squared
as 1 - (mod$deviance/mod$null.deviance) as mentioned by David J. Harris
set.seed(1)
x <- runif(n=10, min=0, max=1.5)
data <- data.frame(x=x, y=rpois(n=10, lambda=exp(1 + 0.2 * x)))
mod <- glm(y~x, data=data, family=poisson)
1 - (mod$deviance/mod$null.deviance)
library(modEvA); modEvA::Dsquared(mod)
# both expressions return the same value
The D-Squared or explained Deviance of the model is introduced in (Guisan & Zimmermann 2000)
https://doi.org/10.1016/S0304-3800(00)00354-9
|
Pseudo R squared formula for GLMs
|
The R package modEvA calculates D-Squared
as 1 - (mod$deviance/mod$null.deviance) as mentioned by David J. Harris
set.seed(1)
data <- data.frame(y=rpois(n=10, lambda=exp(1 + 0.2 * x)), x=runif(n=10, m
|
Pseudo R squared formula for GLMs
The R package modEvA calculates D-Squared
as 1 - (mod$deviance/mod$null.deviance) as mentioned by David J. Harris
set.seed(1)
x <- runif(n=10, min=0, max=1.5)
data <- data.frame(x=x, y=rpois(n=10, lambda=exp(1 + 0.2 * x)))
mod <- glm(y~x, data=data, family=poisson)
1 - (mod$deviance/mod$null.deviance)
library(modEvA); modEvA::Dsquared(mod)
# both expressions return the same value
The D-Squared or explained Deviance of the model is introduced in (Guisan & Zimmermann 2000)
https://doi.org/10.1016/S0304-3800(00)00354-9
|
Pseudo R squared formula for GLMs
The R package modEvA calculates D-Squared
as 1 - (mod$deviance/mod$null.deviance) as mentioned by David J. Harris
set.seed(1)
data <- data.frame(y=rpois(n=10, lambda=exp(1 + 0.2 * x)), x=runif(n=10, m
|
9,568
|
Correlation between OLS estimators for intercept and slope
|
Let me try it as follows (really not sure if that is useful intuition):
Based on my above comment, the correlation will roughly be $$-\frac{E(X)}{\sqrt{E(X^2)}}$$
Thus, if $E(X)>0$ instead of $E(X)=0$, most data will be clustered to the right of zero. If the slope coefficient then gets larger, the correlation formula asserts that the intercept needs to become smaller - which makes some sense.
I'm thinking of something like this:
In the blue sample, the slope estimate is flatter, which means the intercept estimate can be larger. The slope for the golden sample is somewhat larger, so the intercept can be somewhat smaller to compensate for this.
On the other hand, if $E(X)=0$, we can have any slope without any constraints on the intercept.
The denominator of the formula can also be interpreted along these lines: if, for a given mean, the variability as measured by $E(X^2)$ increases, the data gets smeared out over the $x$-axis, so that it effectively "looks" more mean-zero again, loosening the constraints on the intercept for a given mean of $X$.
Here's the code, which I hope explains the figure fully:
n <- 30
x_1 <- sort(runif(n,2,3))
beta <- 2
y_1 <- x_1*beta + rnorm(n) # the golden sample
x_2 <- sort(runif(n,2,3))
beta <- 2
y_2 <- x_2*beta + rnorm(n) # the blue sample
xax <- seq(-1,3,by=.001)
plot(x_1,y_1,xlim=c(-1,3),ylim=c(-4,7),pch=19,col="gold",ylab="y",xlab="x")
abline(lm(y_1~x_1),col="gold",lwd=2)
abline(v=0,lty=2)
lines(xax,beta*xax) # the "true" regression line
abline(lm(y_2~x_2),col="lightblue",lwd=2)
points(x_2,y_2,pch=19,col="lightblue")
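A simulation makes the formula tangible; here is a Python sketch (my addition, mirroring the fixed-design setup of the R code above): it refits OLS on many noise draws for the same $x$ values and compares the empirical correlation of the estimates with $-\bar x/\sqrt{\overline{x^2}}$.

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, beta = 30, 20_000, 2.0
x = np.sort(rng.uniform(2, 3, size=n))      # fixed design, as in the plot
X = np.column_stack([np.ones(n), x])

# many response samples at once; OLS via the pseudoinverse of the design
Y = beta * x[:, None] + rng.normal(size=(n, reps))
intercepts, slopes = np.linalg.pinv(X) @ Y

corr_sim = np.corrcoef(intercepts, slopes)[0, 1]
corr_formula = -x.mean() / np.sqrt(np.mean(x ** 2))
```

With the design clustered well to the right of zero, both numbers come out strongly negative and agree closely.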
|
Correlation between OLS estimators for intercept and slope
|
Let me try it as follows (really not sure if that is useful intuition):
Based on my above comment, the correlation will roughly be $$-\frac{E(X)}{\sqrt{E(X^2)}}$$
Thus, if $E(X)>0$ instead of $E(X)=0$
|
Correlation between OLS estimators for intercept and slope
Let me try it as follows (really not sure if that is useful intuition):
Based on my above comment, the correlation will roughly be $$-\frac{E(X)}{\sqrt{E(X^2)}}$$
Thus, if $E(X)>0$ instead of $E(X)=0$, most data will be clustered to the right of zero. If the slope coefficient then gets larger, the correlation formula asserts that the intercept needs to become smaller - which makes some sense.
I'm thinking of something like this:
In the blue sample, the slope estimate is flatter, which means the intercept estimate can be larger. The slope for the golden sample is somewhat larger, so the intercept can be somewhat smaller to compensate for this.
On the other hand, if $E(X)=0$, we can have any slope without any constraints on the intercept.
The denominator of the formula can also be interpreted along these lines: if, for a given mean, the variability as measured by $E(X^2)$ increases, the data gets smeared out over the $x$-axis, so that it effectively "looks" more mean-zero again, loosening the constraints on the intercept for a given mean of $X$.
Here's the code, which I hope explains the figure fully:
n <- 30
x_1 <- sort(runif(n,2,3))
beta <- 2
y_1 <- x_1*beta + rnorm(n) # the golden sample
x_2 <- sort(runif(n,2,3))
beta <- 2
y_2 <- x_2*beta + rnorm(n) # the blue sample
xax <- seq(-1,3,by=.001)
plot(x_1,y_1,xlim=c(-1,3),ylim=c(-4,7),pch=19,col="gold",ylab="y",xlab="x")
abline(lm(y_1~x_1),col="gold",lwd=2)
abline(v=0,lty=2)
lines(xax,beta*xax) # the "true" regression line
abline(lm(y_2~x_2),col="lightblue",lwd=2)
points(x_2,y_2,pch=19,col="lightblue")
|
Correlation between OLS estimators for intercept and slope
Let me try it as follows (really not sure if that is useful intuition):
Based on my above comment, the correlation will roughly be $$-\frac{E(X)}{\sqrt{E(X^2)}}$$
Thus, if $E(X)>0$ instead of $E(X)=0$
|
9,569
|
Correlation between OLS estimators for intercept and slope
|
You might like to follow Dougherty's Introduction to Econometrics, perhaps considering for now that $x$ is a non-stochastic variable, and defining the mean square deviation of $x$ to be $\DeclareMathOperator{\MSD}{MSD}\MSD(x) = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2$. Note that the MSD is measured in the square of the units of $x$ (e.g. if $x$ is in $\text{cm}$ then the MSD is in $\text{cm}^2$), while the root mean square deviation, $\DeclareMathOperator{\RMSD}{RMSD}\RMSD(x)=\sqrt{\MSD(x)}$ is on the original scale. This yields
$$\DeclareMathOperator{\Corr}{Corr}\Corr(\hat{\beta}_0^{OLS},\hat{\beta}_1^{OLS}) = \frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$$
This should help you see how the correlation is affected by both the mean of $x$ (in particular, the correlation between your slope and intercept estimators is removed if the $x$ variable is centered) and also by its spread. (This decomposition might also have made the asymptotics more obvious!)
I will reiterate the importance of this result: if $x$ does not have mean zero, we can transform it by subtracting $\bar{x}$ so that it is now centered. If we fit a regression line of $y$ on $x - \bar{x}$ the slope and intercept estimates are uncorrelated — an under- or overestimate in one does not tend to produce an under- or overestimate in the other. But this regression line is simply a translation of the $y$ on $x$ regression line! The standard error of the intercept of the $y$ on $x - \bar{x}$ line is simply a measure of uncertainty of $\hat y$ when your translated variable $x - \bar x = 0$; when that line is translated back to its original position, this reverts to being the standard error of $\hat y$ at $x = \bar x$. More generally, the standard error of $\hat y$ at any $x$ value is just the standard error of the intercept of the regression of $y$ on an appropriately translated $x$; the standard error of $\hat y$ at $x=0$ is of course the standard error of the intercept in the original, untranslated regression.
Since we can translate $x$, in some sense there is nothing special about $x=0$ and therefore nothing special about $\hat \beta_0$. With a bit of thought, what I am about to say works for $\hat y$ at any value of $x$, which is useful if you are seeking insight into e.g. confidence intervals for mean responses from your regression line. However, we have seen that there is something special about $\hat y$ at $x=\bar x$, for it is here that errors in the estimated height of the regression line — which is of course estimated at $\bar y$ — and errors in the estimated slope of the regression line have nothing to do with one another. Your estimated intercept is $\hat \beta_0 = \bar y - \hat \beta_1 \bar x$ and errors in its estimation must stem either from the estimation of $\bar y$ or the estimation of $\hat \beta_1$ (since we regarded $x$ as non-stochastic); now we know these two sources of error are uncorrelated it is clear algebraically why there should be a negative correlation between estimated slope and intercept (overestimating the slope will tend to underestimate the intercept, so long as $\bar x > 0$) but a positive correlation between estimated intercept and estimated mean response $\hat y = \bar y$ at $x = \bar x$. But we can see such relationships without algebra too.
Imagine the estimated regression line as a ruler. That ruler must pass through $(\bar x, \bar y)$. We have just seen that there are two essentially unrelated uncertainties in the location of this line, which I visualise kinaesthetically as the "twanging" uncertainty and the "parallel sliding" uncertainty. Before you twang the ruler, hold it at $(\bar x, \bar y)$ as a pivot, then give it a hearty twang related to your uncertainty in the slope. The ruler will have a good wobble, more violently so if you are very uncertain about the slope (indeed, a previously positive slope will quite possibly be rendered negative if your uncertainty is large) but note that the height of the regression line at $x=\bar x$ is unchanged by this kind of uncertainty, and the effect of the twang is more noticeable the further from the mean that you look.
To "slide" the ruler, grip it firmly and shift it up and down, taking care to keep it parallel with its original position — don't change the slope! How vigorously to shift it up and down depends on how uncertain you are about the height of the regression line as it passes through the mean point; think about what the standard error of the intercept would be if $x$ had been translated so that the $y$-axis passed through the mean point. Alternatively, since the estimated height of the regression line here is simply $\bar y$, it is also the standard error of $\bar y$. Note that this kind of "sliding" uncertainty affects all points on the regression line in an equal manner, unlike the "twang".
These two uncertainties apply independently (well, uncorrelatedly, but if we assume normally distributed error terms then they should be technically independent) so the heights $\hat y$ of all points on your regression line are affected by a "twanging" uncertainty which is zero at the mean and gets worse away from it, and a "sliding" uncertainty which is the same everywhere. (Can you see the relationship with the regression confidence intervals that I promised earlier, particularly how their width is narrowest at $\bar x$?)
This includes the uncertainty in $\hat y$ at $x=0$, which is essentially what we mean by the standard error in $\hat \beta_0$. Now suppose $\bar x$ is to the right of $x=0$; then twanging the graph to a higher estimated slope tends to reduce our estimated intercept as a quick sketch will reveal. This is the negative correlation predicted by $\frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$ when $\bar x$ is positive. Conversely, if $\bar x$ is to the left of $x=0$ you will see that a higher estimated slope tends to increase our estimated intercept, consistent with the positive correlation your equation predicts when $\bar x$ is negative. Note that if $\bar x$ is a long way from zero, the extrapolation of a regression line of uncertain gradient out towards the $y$-axis becomes increasingly precarious (the amplitude of the "twang" worsens away from the mean). The "twanging" error in the $ - \hat \beta_1 \bar x$ term will massively outweigh the "sliding" error in the $\bar y$ term, so the error in $\hat \beta_0$ is almost entirely determined by any error in $\hat \beta_1$. As you can easily verify algebraically, if we take $\bar x \to \pm \infty$ without changing the MSD or the standard deviation of errors $s_u$, the correlation between $\hat \beta_0$ and $\hat \beta_1$ tends to $\mp 1$.
To illustrate this (You may want to right-click on the image and save it, or view it full-size in a new tab if that option is available to you) I have chosen to consider repeated samplings of $y_i = 5 + 2x_i + u_i$, where $u_i \sim N(0, 10^2)$ are i.i.d., over a fixed set of $x$ values with $\bar x = 10$, so $\mathbb{E}(\bar y)=25$. In this set-up, there is a fairly strong negative correlation between estimated slope and intercept, and a weaker positive correlation between $\bar y$, the estimated mean response at $x=\bar x$, and estimated intercept. The animation shows several simulated samples, with sample (gold) regression line drawn over the true (black) regression line. The second row shows what the collection of estimated regression lines would have looked like if there were error only in the estimated $\bar y$ and the slopes matched the true slope ("sliding" error); then, if there were error only in the slopes and $\bar y$ matched its population value ("twanging" error); and finally, what the collection of estimated lines actually looked like, when both sources of error were combined. These have been colour-coded by the size of the actually estimated intercept (not the intercepts shown on the first two graphs where one of the sources of error has been eliminated) from blue for low intercepts to red for high intercepts. Note that from the colours alone we can see that samples with low $\bar y$ tended to produce lower estimated intercepts, as did samples with high estimated slopes. The next row shows the simulated (histogram) and theoretical (normal curve) sampling distributions of the estimates, and the final row shows scatter plots between them. Observe how there is no correlation between $\bar y$ and estimated slope, a negative correlation between estimated intercept and slope, and a positive correlation between intercept and $\bar y$.
What is the MSD doing in the denominator of $\frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$? Spreading out the range of $x$ values you measure over is well-known to allow you to estimate the slope more precisely, and the intuition is clear from a sketch, but it does not let you estimate $\bar y$ any better. I suggest you visualise taking the MSD to near zero (i.e. sampling points only very near the mean of $x$), so that your uncertainty in the slope becomes massive: think great big twangs, but with no change to your sliding uncertainty. If your $y$-axis is any distance from $\bar x$ (in other words, if $\bar x \neq 0$) you will find that uncertainty in your intercept becomes utterly dominated by the slope-related twanging error. In contrast, if you increase the spread of your $x$ measurements, without changing the mean, you will massively improve the precision of your slope estimate and need only take the gentlest of twangs to your line. The height of your intercept is now dominated by your sliding uncertainty, which has nothing to do with your estimated slope. This tallies with the algebraic fact that the correlation between estimated slope and intercept tends to zero as $\MSD(x) \to \infty$ and, when $\bar x \neq 0$, towards $\pm 1$ (the sign is the opposite of the sign of $\bar x$) as $\MSD(x) \to 0$.
Correlation of slope and intercept estimators was a function of both $\bar x$ and the MSD (or RMSD) of $x$, so how do their relative contributions weigh up? Actually, all that matters is the ratio of $\bar x$ to the RMSD of $x$. A geometric intuition is that the RMSD gives us a kind of "natural unit" for $x$; if we rescale the $x$-axis using $w_i = x_i / \RMSD(x)$ then this is a horizontal stretch that leaves the estimated intercept and $\bar y$ unchanged, gives us a new $\RMSD(w)=1$, and multiplies the estimated slope by the RMSD of $x$. The formula for the correlation between the new slope and intercept estimators is in terms only of $\RMSD(w)$, which is one, and $\bar w$, which is the ratio $\frac{\bar x}{\RMSD(x)}$. As the intercept estimate was unchanged, and the slope estimate merely multiplied by a positive constant, then the correlation between them has not changed: hence the correlation between the original slope and intercept must also only depend on $\frac{\bar x}{\RMSD(x)}$. Algebraically we can see this by dividing top and bottom of $\frac{-\bar x}{\sqrt{\MSD(x)+\bar{x}^2}}$ by $\RMSD(x)$ to obtain $\Corr\left(\hat \beta_0, \hat \beta_1 \right) = \frac{- (\bar x / \RMSD(x))}{\sqrt{1 + (\bar x / \RMSD(x))^2}}$.
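To see this invariance concretely, here is a small R check on a single made-up sample. `cov2cor` converts the coefficient covariance matrix returned by `vcov` into a correlation matrix; since the residual variance estimate cancels in the correlation, the result depends only on the $x$ design and matches the formula exactly:

```r
# Rescaling x by its RMSD leaves the correlation between the estimated
# intercept and slope unchanged; both equal -xbar/sqrt(MSD(x) + xbar^2)
set.seed(3)
x <- c(2,4,6,8,10,12,14,16,18)
y <- 5 + 2*x + rnorm(length(x), mean = 0, sd = 10)
rmsdX <- sqrt(mean((x - mean(x))^2))
corRaw      <- cov2cor(vcov(lm(y ~ x)))["(Intercept)", "x"]
corRescaled <- cov2cor(vcov(lm(y ~ I(x/rmsdX))))["(Intercept)", "I(x/rmsdX)"]
corFormula  <- -mean(x) / sqrt(rmsdX^2 + mean(x)^2)
c(corRaw, corRescaled, corFormula)  # all three approximately -0.8885
```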
To find the correlation between $\hat \beta_0$ and $\bar y$, consider $\DeclareMathOperator{\Cov}{Cov}\Cov(\hat \beta_0, \bar y)=\Cov(\bar y - \hat \beta_1 \bar x, \bar y)$. By bilinearity of $\Cov$ this is $\Cov(\bar y, \bar y) - \bar x \Cov(\hat \beta_1, \bar y)$. The first term is $\operatorname{Var}(\bar y)=\frac{\sigma_u^2}{n}$ while the second term we established earlier to be zero. From this we deduce
$$\Corr(\hat \beta_0, \bar y)=\frac{1}{\sqrt{1 + (\bar x/\RMSD(x))^2}}$$
So this correlation also depends only on the ratio $\frac{\bar x}{\RMSD(x)}$. Note that the squares of $\Corr(\hat \beta_0, \hat \beta_1)$ and $\Corr(\hat \beta_0, \bar y)$ sum to one: we expect this since all sampling variation (for fixed $x$) in $\hat \beta_0$ is due either to variation in $\hat \beta_1$ or to variation in $\bar y$, and these sources of variation are uncorrelated with each other. Here is a plot of the correlations against the ratio $\frac{\bar x}{\RMSD(x)}$.
The plot clearly shows how when $\bar x$ is high relative to the RMSD, errors in the intercept estimate are largely due to errors in the slope estimate and the two are closely correlated, whereas when $\bar x$ is low relative to the RMSD, it is error in the estimation of $\bar y$ that predominates, and the relationship between intercept and slope is weaker. Note that the correlation of intercept with slope is an odd function of the ratio $\frac{\bar x}{\RMSD(x)}$, so its sign depends on the sign of $\bar x$ and it is zero if $\bar x=0$, whereas the correlation of intercept with $\bar y$ is always positive and is an even function of the ratio, i.e. it doesn't matter which side of the $y$-axis $\bar x$ lies. The correlations are equal in magnitude if $\bar x$ is one RMSD away from the $y$-axis, when $\Corr(\hat \beta_0, \bar y)=\frac{1}{\sqrt{2}} \approx 0.707$ and $\Corr(\hat \beta_0, \hat \beta_1)=\pm \frac{1}{\sqrt{2}} \approx \pm 0.707$ where the sign is opposite that of $\bar x$. In the example in the simulation above, $\bar x=10$ and $\RMSD(x) \approx 5.16$ so the mean was about $1.94$ RMSDs from the $y$-axis; at this ratio, the correlation between intercept and slope is stronger, but the correlation between intercept and $\bar y$ is still not negligible.
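Plugging the $x$ design from the simulation into the two formulas gives the numbers quoted above (a quick R check; these are exact, not simulated, since $x$ is non-stochastic):

```r
# Theoretical correlations for the x design used in the simulation
xvalues <- c(2,4,6,8,10,12,14,16,18)
meanX <- mean(xvalues)                    # 10
rmsdX <- sqrt(mean((xvalues - meanX)^2))  # about 5.16
ratio <- meanX / rmsdX                    # mean is about 1.94 RMSDs from y-axis
corSlope <- -ratio / sqrt(1 + ratio^2)    # Corr(intercept, slope), about -0.89
corMeanY <- 1 / sqrt(1 + ratio^2)         # Corr(intercept, mean y), about 0.46
corSlope^2 + corMeanY^2                   # squared correlations sum to one
```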
As an aside, I like to think of the formula for the standard error of the intercept,
$$\operatorname{s.e.}(\hat \beta_0^{OLS}) = \sqrt{s_u^2 \left( \frac{1}{n} + \frac{{\bar x}^2 }{n \MSD(x)} \right) }$$
as $\sqrt{\text{sliding error} + \text{twanging error}}$, and ditto for the formula for the standard error of $\hat y$ at $x = x_0$ (used for confidence intervals for the mean response, and of which the intercept is just a special case as I explained earlier via a translation argument),
$$\operatorname{s.e.}(\hat y) = \sqrt{s_u^2 \left( \frac{1}{n} + \frac{(x_0 - \bar x)^2}{n \MSD(x)} \right) }$$
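For the simulation's parameters the two components can be computed directly, and their square-rooted sum matches the `sdIntercept` value used in the plotting code below:

```r
# "Sliding" and "twanging" variance components of the intercept's standard
# error, using the simulation's design (su = 10, n = 9, MSD(x) = 240/9)
xvalues <- c(2,4,6,8,10,12,14,16,18)
su <- 10
n <- length(xvalues)
meanX <- mean(xvalues)
msdX <- mean((xvalues - meanX)^2)
slidingVar  <- su^2 / n                     # Var(ybar), about 11.1
twangingVar <- su^2 * meanX^2 / (n * msdX)  # Var(beta1hat * xbar), about 41.7
sqrt(slidingVar + twangingVar)              # s.e. of the intercept, about 7.26
```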
R code for plots
require(graphics)
require(grDevices)
require(animation)
#This saves a GIF so you may want to change your working directory
#setwd("~/YOURDIRECTORY")
#animation package requires ImageMagick or GraphicsMagick on computer
#See: http://www.inside-r.org/packages/cran/animation/docs/im.convert
#You might only want to run up to the "STATIC PLOTS" section
#The static plots do not save a file, so no need to change directory for them.
#Change as desired
simulations <- 100 #how many samples to draw and regress on
xvalues <- c(2,4,6,8,10,12,14,16,18) #used in all regressions
su <- 10 #standard deviation of error term
beta0 <- 5 #true intercept
beta1 <- 2 #true slope
plotAlpha <- 1/5 #transparency setting for charts
interceptPalette <- colorRampPalette(c(rgb(0,0,1,plotAlpha),
rgb(1,0,0,plotAlpha)), alpha = TRUE)(100) #intercept color range
animationFrames <- 20 #how many samples to include in animation
#Consequences of previous choices
n <- length(xvalues) #sample size
meanX <- mean(xvalues) #same for all regressions
msdX <- sum((xvalues - meanX)^2)/n #Mean Square Deviation
minX <- min(xvalues)
maxX <- max(xvalues)
animationFrames <- min(simulations, animationFrames)
#Theoretical properties of estimators
expectedMeanY <- beta0 + beta1 * meanX
sdMeanY <- su / sqrt(n) #standard deviation of mean of Y (i.e. Y hat at mean x)
sdSlope <- sqrt(su^2 / (n * msdX))
sdIntercept <- sqrt(su^2 * (1/n + meanX^2 / (n * msdX)))
data.df <- data.frame(regression = rep(1:simulations, each=n),
x = rep(xvalues, times = simulations))
data.df$y <- beta0 + beta1*data.df$x + rnorm(n*simulations, mean = 0, sd = su)
regressionOutput <- function(i){ #i is the index of the regression simulation
i.df <- data.df[data.df$regression == i,]
i.lm <- lm(y ~ x, i.df)
return(c(i, mean(i.df$y), coef(summary(i.lm))["x", "Estimate"],
coef(summary(i.lm))["(Intercept)", "Estimate"]))
}
estimates.df <- as.data.frame(t(sapply(1:simulations, regressionOutput)))
colnames(estimates.df) <- c("Regression", "MeanY", "Slope", "Intercept")
perc.rank <- function(x) ceiling(100*rank(x)/length(x))
rank.text <- function(x) ifelse(x < 50, paste("bottom", paste0(x, "%")),
paste("top", paste0(101 - x, "%")))
estimates.df$percMeanY <- perc.rank(estimates.df$MeanY)
estimates.df$percSlope <- perc.rank(estimates.df$Slope)
estimates.df$percIntercept <- perc.rank(estimates.df$Intercept)
estimates.df$percTextMeanY <- paste("Mean Y",
rank.text(estimates.df$percMeanY))
estimates.df$percTextSlope <- paste("Slope",
rank.text(estimates.df$percSlope))
estimates.df$percTextIntercept <- paste("Intercept",
rank.text(estimates.df$percIntercept))
#data frame of extreme points to size plot axes correctly
extremes.df <- data.frame(x = c(min(minX,0), max(maxX,0)),
y = c(min(beta0, min(data.df$y)), max(beta0, max(data.df$y))))
#STATIC PLOTS ONLY
par(mfrow=c(3,3))
#first draw empty plot to reasonable plot size
with(extremes.df, plot(x,y, type="n", main = "Estimated Mean Y"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, beta1,
interceptPalette[estimates.df$percIntercept]))
with(extremes.df, plot(x,y, type="n", main = "Estimated Slope"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
expectedMeanY - estimates.df$Slope * meanX, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
with(extremes.df, plot(x,y, type="n", main = "Estimated Intercept"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
with(estimates.df, hist(MeanY, freq=FALSE, main = "Histogram of Mean Y",
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdMeanY))))
curve(dnorm(x, mean=expectedMeanY, sd=sdMeanY), lwd=2, add=TRUE)
with(estimates.df, hist(Slope, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdSlope))))
curve(dnorm(x, mean=beta1, sd=sdSlope), lwd=2, add=TRUE)
with(estimates.df, hist(Intercept, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdIntercept))))
curve(dnorm(x, mean=beta0, sd=sdIntercept), lwd=2, add=TRUE)
with(estimates.df, plot(MeanY, Slope, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Slope vs Mean Y"))
with(estimates.df, plot(Slope, Intercept, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Intercept vs Slope"))
with(estimates.df, plot(Intercept, MeanY, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Mean Y vs Intercept"))
#ANIMATED PLOTS
makeplot <- function(){for (i in 1:animationFrames) {
par(mfrow=c(4,3))
iMeanY <- estimates.df$MeanY[i]
iSlope <- estimates.df$Slope[i]
iIntercept <- estimates.df$Intercept[i]
with(extremes.df, plot(x,y, type="n", main = paste("Simulated dataset", i)))
with(data.df[data.df$regression==i,], points(x,y))
abline(beta0, beta1, lwd = 2)
abline(iIntercept, iSlope, lwd = 2, col="gold")
plot.new()
title(main = "Parameter Estimates")
text(x=0.5, y=c(0.9, 0.5, 0.1), labels = c(
paste("Mean Y =", round(iMeanY, digits = 2), "True =", expectedMeanY),
paste("Slope =", round(iSlope, digits = 2), "True =", beta1),
paste("Intercept =", round(iIntercept, digits = 2), "True =", beta0)))
plot.new()
title(main = "Percentile Ranks")
with(estimates.df, text(x=0.5, y=c(0.9, 0.5, 0.1),
labels = c(percTextMeanY[i], percTextSlope[i],
percTextIntercept[i])))
#first draw empty plot to reasonable plot size
with(extremes.df, plot(x,y, type="n", main = "Estimated Mean Y"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, beta1,
interceptPalette[estimates.df$percIntercept]))
abline(iIntercept, beta1, lwd = 2, col="gold")
with(extremes.df, plot(x,y, type="n", main = "Estimated Slope"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
expectedMeanY - estimates.df$Slope * meanX, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
abline(expectedMeanY - iSlope * meanX, iSlope,
lwd = 2, col="gold")
with(extremes.df, plot(x,y, type="n", main = "Estimated Intercept"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
abline(iIntercept, iSlope, lwd = 2, col="gold")
with(estimates.df, hist(MeanY, freq=FALSE, main = "Histogram of Mean Y",
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdMeanY))))
curve(dnorm(x, mean=expectedMeanY, sd=sdMeanY), lwd=2, add=TRUE)
lines(x=c(iMeanY, iMeanY),
y=c(0, dnorm(iMeanY, mean=expectedMeanY, sd=sdMeanY)),
lwd = 2, col = "gold")
with(estimates.df, hist(Slope, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdSlope))))
curve(dnorm(x, mean=beta1, sd=sdSlope), lwd=2, add=TRUE)
lines(x=c(iSlope, iSlope), y=c(0, dnorm(iSlope, mean=beta1, sd=sdSlope)),
lwd = 2, col = "gold")
with(estimates.df, hist(Intercept, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdIntercept))))
curve(dnorm(x, mean=beta0, sd=sdIntercept), lwd=2, add=TRUE)
lines(x=c(iIntercept, iIntercept),
y=c(0, dnorm(iIntercept, mean=beta0, sd=sdIntercept)),
lwd = 2, col = "gold")
with(estimates.df, plot(MeanY, Slope, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Slope vs Mean Y"))
points(x = iMeanY, y = iSlope, pch = 16, col = "gold")
with(estimates.df, plot(Slope, Intercept, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Intercept vs Slope"))
points(x = iSlope, y = iIntercept, pch = 16, col = "gold")
with(estimates.df, plot(Intercept, MeanY, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Mean Y vs Intercept"))
points(x = iIntercept, y = iMeanY, pch = 16, col = "gold")
}}
saveGIF(makeplot(), interval = 4, ani.width = 500, ani.height = 600)
For the plot of correlation versus ratio of $\bar x$ to RMSD:
require(ggplot2)
numberOfPoints <- 200
data.df <- data.frame(
ratio = rep(seq(from=-10, to=10, length=numberOfPoints), times=2),
between = rep(c("Slope", "MeanY"), each=numberOfPoints))
data.df$correlation <- with(data.df, ifelse(between=="Slope",
-ratio/sqrt(1+ratio^2),
1/sqrt(1+ratio^2)))
ggplot(data.df, aes(x=ratio, y=correlation, group=factor(between),
colour=factor(between))) +
theme_bw() +
geom_line(size=1.5) +
scale_colour_brewer(name="Correlation between", palette="Set1",
labels=list(expression(hat(beta[0])*" and "*bar(y)),
expression(hat(beta[0])*" and "*hat(beta[1])))) +
theme(legend.key = element_blank()) +
ggtitle(expression("Correlation of intercept estimates with slope and "*bar(y))) +
xlab(expression("Ratio of "*bar(X)/"RMSD(X)")) +
ylab(expression(paste("Correlation")))
Correlation between OLS estimators for intercept and slope
You might like to follow Dougherty's Introduction to Econometrics, perhaps considering for now that $x$ is a non-stochastic variable, and defining the mean square deviation of $x$ to be $\DeclareMathOperator{\MSD}{MSD}\MSD(x) = \frac{1}{n} \sum_{i=1}^n (x_i - \bar{x})^2$. Note that the MSD is measured in the square of the units of $x$ (e.g. if $x$ is in $\text{cm}$ then the MSD is in $\text{cm}^2$), while the root mean square deviation, $\DeclareMathOperator{\RMSD}{RMSD}\RMSD(x)=\sqrt{\MSD(x)}$ is on the original scale. This yields
$$\DeclareMathOperator{\Corr}{Corr}\Corr(\hat{\beta}_0^{OLS},\hat{\beta}_1^{OLS}) = \frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$$
This should help you see how the correlation is affected by both the mean of $x$ (in particular, the correlation between your slope and intercept estimators is removed if the $x$ variable is centered) and also by its spread. (This decomposition might also have made the asymptotics more obvious!)
I will reiterate the importance of this result: if $x$ does not have mean zero, we can transform it by subtracting $\bar{x}$ so that it is now centered. If we fit a regression line of $y$ on $x - \bar{x}$ the slope and intercept estimates are uncorrelated — an under- or overestimate in one does not tend to produce an under- or overestimate in the other. But this regression line is simply a translation of the $y$ on $x$ regression line! The standard error of the intercept of the $y$ on $x - \bar{x}$ line is simply a measure of uncertainty of $\hat y$ when your translated variable $x - \bar x = 0$; when that line is translated back to its original position, this reverts to being the standard error of $\hat y$ at $x = \bar x$. More generally, the standard error of $\hat y$ at any $x$ value is just the standard error of the intercept of the regression of $y$ on an appropriately translated $x$; the standard error of $\hat y$ at $x=0$ is of course the standard error of the intercept in the original, untranslated regression.
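This can be seen directly in R: the off-diagonal entry of the estimated coefficient covariance matrix vanishes once $x$ is centred. One made-up sample suffices, since this is a property of the design rather than of the particular $y$ values drawn:

```r
# Centring x makes the estimated intercept and slope uncorrelated:
# the off-diagonal of vcov() is zero for the centred regression
set.seed(1)
x <- c(2,4,6,8,10,12,14,16,18)
y <- 5 + 2*x + rnorm(length(x), mean = 0, sd = 10)
covRaw     <- vcov(lm(y ~ x))["(Intercept)", "x"]   # negative, since xbar > 0
covCentred <- vcov(lm(y ~ I(x - mean(x))))[1, 2]    # zero (to rounding error)
c(covRaw, covCentred)
```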
Since we can translate $x$, in some sense there is nothing special about $x=0$ and therefore nothing special about $\hat \beta_0$. With a bit of thought, what I am about to say works for $\hat y$ at any value of $x$, which is useful if you are seeking insight into e.g. confidence intervals for mean responses from your regression line. However, we have seen that there is something special about $\hat y$ at $x=\bar x$, for it is here that errors in the estimated height of the regression line — which is of course estimated at $\bar y$ — and errors in the estimated slope of the regression line have nothing to do with one another. Your estimated intercept is $\hat \beta_0 = \bar y - \hat \beta_1 \bar x$ and errors in its estimation must stem either from the estimation of $\bar y$ or the estimation of $\hat \beta_1$ (since we regarded $x$ as non-stochastic); now we know these two sources of error are uncorrelated it is clear algebraically why there should be a negative correlation between estimated slope and intercept (overestimating slope will tend to underestimate intercept, so long as $\bar x > 0$) but a positive correlation between estimated intercept and estimated mean response $\hat y = \bar y$ at $x = \bar x$. But we can see such relationships without algebra too.
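A quick Monte Carlo check of these claims (repeated sampling over a fixed $x$ design, in the same set-up as the animation; the correlations are empirical, so expect small deviations from the theoretical values):

```r
# Over repeated samples: ybar is uncorrelated with the estimated slope,
# while the estimated intercept correlates negatively with the slope
# (xbar > 0 here) and positively with ybar
set.seed(42)
x <- c(2,4,6,8,10,12,14,16,18)
est <- t(replicate(5000, {
  y <- 5 + 2*x + rnorm(length(x), mean = 0, sd = 10)
  b <- coef(lm(y ~ x))
  c(meanY = mean(y), slope = unname(b[2]), intercept = unname(b[1]))
}))
round(cor(est), 2)  # meanY-slope near 0; intercept-slope near -0.89
```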
Imagine the estimated regression line as a ruler. That ruler must pass through $(\bar x, \bar y)$. We have just seen that there are two essentially unrelated uncertainties in the location of this line, which I visualise kinaesthetically as the "twanging" uncertainty and the "parallel sliding" uncertainty. Before you twang the ruler, hold it at $(\bar x, \bar y)$ as a pivot, then give it a hearty twang related to your uncertainty in the slope. The ruler will have a good wobble, more violently so if you are very uncertain about the slope (indeed, a previously positive slope will quite possibly be rendered negative if your uncertainty is large) but note that the height of the regression line at $x=\bar x$ is unchanged by this kind of uncertainty, and the effect of the twang is more noticeable the further from the mean that you look.
To "slide" the ruler, grip it firmly and shift it up and down, taking care to keep it parallel with its original position — don't change the slope! How vigorously to shift it up and down depends on how uncertain you are about the height of the regression line as it passes through the mean point; think about what the standard error of the intercept would be if $x$ had been translated so that the $y$-axis passed through the mean point. Alternatively, since the estimated height of the regression line here is simply $\bar y$, it is also the standard error of $\bar y$. Note that this kind of "sliding" uncertainty affects all points on the regression line in an equal manner, unlike the "twang".
These two uncertainties apply independently (well, uncorrelatedly, but if we assume normally distributed error terms then they should be technically independent) so the heights $\hat y$ of all points on your regression line are affected by a "twanging" uncertainty which is zero at the mean and gets worse away from it, and a "sliding" uncertainty which is the same everywhere. (Can you see the relationship with the regression confidence intervals that I promised earlier, particularly how their width is narrowest at $\bar x$?)
This includes the uncertainty in $\hat y$ at $x=0$, which is essentially what we mean by the standard error in $\hat \beta_0$. Now suppose $\bar x$ is to the right of $x=0$; then twanging the graph to a higher estimated slope tends to reduce our estimated intercept as a quick sketch will reveal. This is the negative correlation predicted by $\frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$ when $\bar x$ is positive. Conversely, if $\bar x$ is the left of $x=0$ you will see that a higher estimated slope tends to increase our estimated intercept, consistent with the positive correlation your equation predicts when $\bar x$ is negative. Note that if $\bar x$ is a long way from zero, the extrapolation of a regression line of uncertain gradient out towards the $y$-axis becomes increasingly precarious (the amplitude of the "twang" worsens away from the mean). The "twanging" error in the $ - \hat \beta_1 \bar x$ term will massively outweigh the "sliding" error in the $\bar y$ term, so the error in $\hat \beta_0$ is almost entirely determined by any error in $\hat \beta_1$. As you can easily verify algebraically, if we take $\bar x \to \pm \infty$ without changing the MSD or the standard deviation of errors $s_u$, the correlation between $\hat \beta_0$ and $\hat \beta_1$ tends to $\mp 1$.
To illustrate this (You may want to right-click on the image and save it, or view it full-size in a new tab if that option is available to you) I have chosen to consider repeated samplings of $y_i = 5 + 2x_i + u_i$, where $u_i \sim N(0, 10^2)$ are i.i.d., over a fixed set of $x$ values with $\bar x = 10$, so $\mathbb{E}(\bar y)=25$. In this set-up, there is a fairly strong negative correlation between estimated slope and intercept, and a weaker positive correlation between $\bar y$, the estimated mean response at $x=\bar x$, and estimated intercept. The animation shows several simulated samples, with sample (gold) regression line drawn over the true (black) regression line. The second row shows what the collection of estimated regression lines would have looked like if there were error only in the estimated $\bar y$ and the slopes matched the true slope ("sliding" error); then, if there were error only in the slopes and $\bar y$ matched its population value ("twanging" error); and finally, what the collection of estimated lines actually looked like, when both sources of error were combined. These have been colour-coded by the size of the actually estimated intercept (not the intercepts shown on the first two graphs where one of the sources of error has been eliminated) from blue for low intercepts to red for high intercepts. Note that from the colours alone we can see that samples with low $\bar y$ tended to produce lower estimated intercepts, as did samples with high estimated slopes. The next row shows the simulated (histogram) and theoretical (normal curve) sampling distributions of the estimates, and the final row shows scatter plots between them. Observe how there is no correlation between $\bar y$ and estimated slope, a negative correlation between estimated intercept and slope, and a positive correlation between intercept and $\bar y$.
What is the MSD doing in the denominator of $\frac{-\bar{x}}{\sqrt{\MSD(x) + \bar{x}^2}}$? Spreading out the range of $x$ values you measure over is well-known to allow you to estimate the slope more precisely, and the intuition is clear from a sketch, but it does not let you estimate $\bar y$ any better. I suggest you visualise taking the MSD to near zero (i.e. sampling points only very near the mean of $x$), so that your uncertainty in the slope becomes massive: think great big twangs, but with no change to your sliding uncertainty. If your $y$-axis is any distance from $\bar x$ (in other words, if $\bar x \neq 0$) you will find that uncertainty in your intercept becomes utterly dominated by the slope-related twanging error. In contrast, if you increase the spread of your $x$ measurements, without changing the mean, you will massively improve the precision of your slope estimate and need only take the gentlest of twangs to your line. The height of your intercept is now dominated by your sliding uncertainty, which has nothing to do with your estimated slope. This tallies with the algebraic fact that the correlation between estimated slope and intercept tends to zero as $\MSD(x) \to \pm \infty$ and, when $\bar x \neq 0$, towards $\pm 1$ (the sign is the opposite of the sign of $\bar x$) as $\MSD(x) \to 0$.
Correlation of slope and intercept estimators was a function of both $\bar x$ and the MSD (or RMSD) of $x$, so how do their relative contributions weight up? Actually, all that matters is the ratio of $\bar x$ to the RMSD of $x$. A geometric intuition is that the RMSD gives us a kind of "natural unit" for $x$; if we rescale the $x$-axis using $w_i = x_i / \RMSD(x)$ then this is a horizontal stretch that leaves the estimated intercept and $\bar y$ unchanged, gives us a new $\RMSD(w)=1$, and multiplies the estimated slope by the RMSD of $x$. The formula for the correlation between the new slope and intercept estimators is in terms only of $\RMSD(w)$, which is one, and $\bar w$, which is the ratio $\frac{\bar x}{\RMSD(x)}$. As the intercept estimate was unchanged, and the slope estimate merely multiplied by a positive constant, then the correlation between them has not changed: hence the correlation between the original slope and intercept must also only depend on $\frac{\bar x}{\RMSD(x)}$. Algebraically we can see this by dividing top and bottom of $\frac{-\bar x}{\sqrt{\MSD(x)+\bar{x}^2}}$ by $\RMSD(x)$ to obtain $\Corr\left(\hat \beta_0, \hat \beta_1 \right) = \frac{- (\bar x / \RMSD(x))}{\sqrt{1 + (\bar x / \RMSD(x))^2}}$.
To find the correlation between $\hat \beta_0$ and $\bar y$, consider $\DeclareMathOperator{\Cov}{Cov}\Cov(\hat \beta_0, \bar y)=\Cov(\bar y - \hat \beta_1 \bar x, \bar y)$. By bilinearity of $\Cov$ this is $\Cov(\bar y, \bar y) - \bar x \Cov(\hat \beta_1, \bar y)$. The first term is $\operatorname{Var}(\bar y)=\frac{\sigma_u^2}{n}$ while the second term we established earlier to be zero. From this we deduce
$$\Corr(\hat \beta_0, \bar y)=\frac{1}{\sqrt{1 + (\bar x/\RMSD(x))^2}}$$
So this correlation also depends only on the ratio $\frac{\bar x}{\RMSD(x)}$. Note that the squares of $\Corr(\hat \beta_0, \hat \beta_1)$ and $\Corr(\hat \beta_0, \bar y)$ sum to one: we expect this since all sampling variation (for fixed $x$) in $\hat \beta_0$ is due either to variation in $\hat \beta_1$ or to variation in $\bar y$, and these sources of variation are uncorrelated with each other. Here is a plot of the correlations against the ratio $\frac{\bar x}{\RMSD(x)}$.
The plot clearly shows how when $\bar x$ is high relative to the RMSD, errors in the intercept estimate are largely due to errors in the slope estimate and the two are closely correlated, whereas when $\bar x$ is low relative to the RMSD, it is error in the estimation of $\bar y$ that predominates, and the relationship between intercept and slope is weaker. Note that the correlation of intercept with slope is an odd function of the ratio $\frac{\bar x}{\RMSD(x)}$, so its sign depends on the sign of $\bar x$ and it is zero if $\bar x=0$, whereas the correlation of intercept with $\bar y$ is always positive and is an even function of the ratio, i.e. it doesn't matter what side of the $y$-axis that $\bar x$ is. The correlations are equal in magnitude if $\bar x$ is one RMSD away from the $y$-axis, when $\Corr(\hat \beta_0, \bar y)=\frac{1}{\sqrt{2}} \approx 0.707$ and $\Corr(\hat \beta_0, \hat \beta_1)=\pm \frac{1}{\sqrt{2}} \approx \pm 0.707$ where the sign is opposite that of $\bar x$. In the example in the simulation above, $\bar x=10$ and $\RMSD(x) \approx 5.16$ so the mean was about $1.93$ RMSDs from the $y$-axis; at this ratio, the correlation between intercept and slope is stronger, but the correlation between intercept and $\bar y$ is still not negligible.
As an aside, I like to think of the formula for the standard error of the intercept,
$$\operatorname{s.e.}(\hat \beta_0^{OLS}) = \sqrt{s_u^2 \left( \frac{1}{n} + \frac{{\bar x}^2 }{n \MSD(x)} \right) }$$
as $\sqrt{\text{sliding error} + \text{twanging error}}$, and ditto for the formula for the standard error of $\hat y$ at $x = x_0$ (used for confidence intervals for the mean response, and of which the intercept is just a special case as I explained earlier via a translation argument),
$$\operatorname{s.e.}(\hat y) = \sqrt{s_u^2 \left( \frac{1}{n} + \frac{(x_0 - \bar x)^2}{n \MSD(x)} \right) }$$
R code for plots
require(graphics)
require(grDevices)
require(animation
#This saves a GIF so you may want to change your working directory
#setwd("~/YOURDIRECTORY")
#animation package requires ImageMagick or GraphicsMagick on computer
#See: http://www.inside-r.org/packages/cran/animation/docs/im.convert
#You might only want to run up to the "STATIC PLOTS" section
#The static plot does not save a file, so need to change directory.
#Change as desired
simulations <- 100 #how many samples to draw and regress on
xvalues <- c(2,4,6,8,10,12,14,16,18) #used in all regressions
su <- 10 #standard deviation of error term
beta0 <- 5 #true intercept
beta1 <- 2 #true slope
plotAlpha <- 1/5 #transparency setting for charts
interceptPalette <- colorRampPalette(c(rgb(0,0,1,plotAlpha),
rgb(1,0,0,plotAlpha)), alpha = TRUE)(100) #intercept color range
animationFrames <- 20 #how many samples to include in animation
#Consequences of previous choices
n <- length(xvalues) #sample size
meanX <- mean(xvalues) #same for all regressions
msdX <- sum((xvalues - meanX)^2)/n #Mean Square Deviation
minX <- min(xvalues)
maxX <- max(xvalues)
animationFrames <- min(simulations, animationFrames)
#Theoretical properties of estimators
expectedMeanY <- beta0 + beta1 * meanX
sdMeanY <- su / sqrt(n) #standard deviation of mean of Y (i.e. Y hat at mean x)
sdSlope <- sqrt(su^2 / (n * msdX))
sdIntercept <- sqrt(su^2 * (1/n + meanX^2 / (n * msdX)))
data.df <- data.frame(regression = rep(1:simulations, each=n),
x = rep(xvalues, times = simulations))
data.df$y <- beta0 + beta1*data.df$x + rnorm(n*simulations, mean = 0, sd = su)
regressionOutput <- function(i){ #i is the index of the regression simulation
i.df <- data.df[data.df$regression == i,]
i.lm <- lm(y ~ x, i.df)
return(c(i, mean(i.df$y), coef(summary(i.lm))["x", "Estimate"],
coef(summary(i.lm))["(Intercept)", "Estimate"]))
}
estimates.df <- as.data.frame(t(sapply(1:simulations, regressionOutput)))
colnames(estimates.df) <- c("Regression", "MeanY", "Slope", "Intercept")
perc.rank <- function(x) ceiling(100*rank(x)/length(x))
rank.text <- function(x) ifelse(x < 50, paste("bottom", paste0(x, "%")),
paste("top", paste0(101 - x, "%")))
estimates.df$percMeanY <- perc.rank(estimates.df$MeanY)
estimates.df$percSlope <- perc.rank(estimates.df$Slope)
estimates.df$percIntercept <- perc.rank(estimates.df$Intercept)
estimates.df$percTextMeanY <- paste("Mean Y",
rank.text(estimates.df$percMeanY))
estimates.df$percTextSlope <- paste("Slope",
rank.text(estimates.df$percSlope))
estimates.df$percTextIntercept <- paste("Intercept",
rank.text(estimates.df$percIntercept))
#data frame of extreme points to size plot axes correctly
extremes.df <- data.frame(x = c(min(minX,0), max(maxX,0)),
y = c(min(beta0, min(data.df$y)), max(beta0, max(data.df$y))))
#STATIC PLOTS ONLY
par(mfrow=c(3,3))
#first draw empty plot to reasonable plot size
with(extremes.df, plot(x,y, type="n", main = "Estimated Mean Y"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, beta1,
interceptPalette[estimates.df$percIntercept]))
with(extremes.df, plot(x,y, type="n", main = "Estimated Slope"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
expectedMeanY - estimates.df$Slope * meanX, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
with(extremes.df, plot(x,y, type="n", main = "Estimated Intercept"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
with(estimates.df, hist(MeanY, freq=FALSE, main = "Histogram of Mean Y",
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdMeanY))))
curve(dnorm(x, mean=expectedMeanY, sd=sdMeanY), lwd=2, add=TRUE)
with(estimates.df, hist(Slope, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdSlope))))
curve(dnorm(x, mean=beta1, sd=sdSlope), lwd=2, add=TRUE)
with(estimates.df, hist(Intercept, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdIntercept))))
curve(dnorm(x, mean=beta0, sd=sdIntercept), lwd=2, add=TRUE)
with(estimates.df, plot(MeanY, Slope, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Slope vs Mean Y"))
with(estimates.df, plot(Slope, Intercept, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Intercept vs Slope"))
with(estimates.df, plot(Intercept, MeanY, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Mean Y vs Intercept"))
#ANIMATED PLOTS
makeplot <- function(){for (i in 1:animationFrames) {
par(mfrow=c(4,3))
iMeanY <- estimates.df$MeanY[i]
iSlope <- estimates.df$Slope[i]
iIntercept <- estimates.df$Intercept[i]
with(extremes.df, plot(x,y, type="n", main = paste("Simulated dataset", i)))
with(data.df[data.df$regression==i,], points(x,y))
abline(beta0, beta1, lwd = 2)
abline(iIntercept, iSlope, lwd = 2, col="gold")
plot.new()
title(main = "Parameter Estimates")
text(x=0.5, y=c(0.9, 0.5, 0.1), labels = c(
paste("Mean Y =", round(iMeanY, digits = 2), "True =", expectedMeanY),
paste("Slope =", round(iSlope, digits = 2), "True =", beta1),
paste("Intercept =", round(iIntercept, digits = 2), "True =", beta0)))
plot.new()
title(main = "Percentile Ranks")
with(estimates.df, text(x=0.5, y=c(0.9, 0.5, 0.1),
labels = c(percTextMeanY[i], percTextSlope[i],
percTextIntercept[i])))
#first draw empty plot to reasonable plot size
with(extremes.df, plot(x,y, type="n", main = "Estimated Mean Y"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, beta1,
interceptPalette[estimates.df$percIntercept]))
abline(iIntercept, beta1, lwd = 2, col="gold")
with(extremes.df, plot(x,y, type="n", main = "Estimated Slope"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
expectedMeanY - estimates.df$Slope * meanX, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
abline(expectedMeanY - iSlope * meanX, iSlope,
lwd = 2, col="gold")
with(extremes.df, plot(x,y, type="n", main = "Estimated Intercept"))
invisible(mapply(function(a,b,c) { abline(a, b, col=c) },
estimates.df$Intercept, estimates.df$Slope,
interceptPalette[estimates.df$percIntercept]))
abline(iIntercept, iSlope, lwd = 2, col="gold")
with(estimates.df, hist(MeanY, freq=FALSE, main = "Histogram of Mean Y",
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdMeanY))))
curve(dnorm(x, mean=expectedMeanY, sd=sdMeanY), lwd=2, add=TRUE)
lines(x=c(iMeanY, iMeanY),
y=c(0, dnorm(iMeanY, mean=expectedMeanY, sd=sdMeanY)),
lwd = 2, col = "gold")
with(estimates.df, hist(Slope, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdSlope))))
curve(dnorm(x, mean=beta1, sd=sdSlope), lwd=2, add=TRUE)
lines(x=c(iSlope, iSlope), y=c(0, dnorm(iSlope, mean=beta1, sd=sdSlope)),
lwd = 2, col = "gold")
with(estimates.df, hist(Intercept, freq=FALSE,
ylim=c(0, 1.3*dnorm(0, mean=0, sd=sdIntercept))))
curve(dnorm(x, mean=beta0, sd=sdIntercept), lwd=2, add=TRUE)
lines(x=c(iIntercept, iIntercept),
y=c(0, dnorm(iIntercept, mean=beta0, sd=sdIntercept)),
lwd = 2, col = "gold")
with(estimates.df, plot(MeanY, Slope, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Slope vs Mean Y"))
points(x = iMeanY, y = iSlope, pch = 16, col = "gold")
with(estimates.df, plot(Slope, Intercept, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Intercept vs Slope"))
points(x = iSlope, y = iIntercept, pch = 16, col = "gold")
with(estimates.df, plot(Intercept, MeanY, pch = 16, col = rgb(0,0,0,plotAlpha),
main = "Scatter of Mean Y vs Intercept"))
points(x = iIntercept, y = iMeanY, pch = 16, col = "gold")
}}
saveGIF(makeplot(), interval = 4, ani.width = 500, ani.height = 600)
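As a side check (a minimal sketch in plain Python, independent of the R script above), the theoretical standard deviations used for the histogram overlays can be recomputed directly from the same constants:

```python
# Recompute the theoretical sds from the R script's settings:
# xvalues = 2..18 by 2, su = 10, n = 9.
import math

xvalues = [2, 4, 6, 8, 10, 12, 14, 16, 18]
su = 10.0                                  # standard deviation of error term
n = len(xvalues)
mean_x = sum(xvalues) / n
msd_x = sum((x - mean_x) ** 2 for x in xvalues) / n  # Mean Square Deviation

sd_mean_y = su / math.sqrt(n)                                   # sd of mean Y
sd_slope = math.sqrt(su ** 2 / (n * msd_x))                     # sd of slope
sd_intercept = math.sqrt(su ** 2 * (1 / n + mean_x ** 2 / (n * msd_x)))
```

These match the `sdMeanY`, `sdSlope`, and `sdIntercept` values the R script uses to draw the normal curves over the histograms.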
For the plot of correlation versus ratio of $\bar x$ to RMSD:
require(ggplot2)
numberOfPoints <- 200
data.df <- data.frame(
ratio = rep(seq(from=-10, to=10, length=numberOfPoints), times=2),
between = rep(c("Slope", "MeanY"), each=numberOfPoints))
data.df$correlation <- with(data.df, ifelse(between=="Slope",
-ratio/sqrt(1+ratio^2),
1/sqrt(1+ratio^2)))
ggplot(data.df, aes(x=ratio, y=correlation, group=factor(between),
colour=factor(between))) +
theme_bw() +
geom_line(size=1.5) +
scale_colour_brewer(name="Correlation between", palette="Set1",
labels=list(expression(hat(beta[0])*" and "*bar(y)),
expression(hat(beta[0])*" and "*hat(beta[1])))) +
theme(legend.key = element_blank()) +
ggtitle(expression("Correlation of intercept estimates with slope and "*bar(y))) +
xlab(expression("Ratio of "*bar(X)/"RMSD(X)")) +
ylab(expression(paste("Correlation")))
|
Correlation between OLS estimators for intercept and slope
You might like to follow Dougherty's Introduction to Econometrics, perhaps considering for now that $x$ is a non-stochastic variable, and defining the mean square deviation of $x$ to be $\DeclareMathO
|
9,570
|
Interpretation of betas when there are multiple categorical variables
|
You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the intercept ($\hat\beta_0$) is the mean of the group that constitutes the reference level for both (all) categorical variables. Using your example scenario and considering the case where there is no interaction, the betas are:
$\hat\beta_0$: the mean of white males
$\hat\beta_{\rm Female}$: the difference between the mean of females and the mean of males
$\hat\beta_{\rm Black}$: the difference between the mean of blacks and the mean of whites
We can also think of this in terms of how to calculate the various group means:
\begin{align}
&\bar x_{\rm White\ Males}& &= \hat\beta_0 \\
&\bar x_{\rm White\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} \\
&\bar x_{\rm Black\ Males}& &= \hat\beta_0 + \hat\beta_{\rm Black} \\
&\bar x_{\rm Black\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} + \hat\beta_{\rm Black}
\end{align}
If you had an interaction term, it would be added at the end of the equation for black females. (The interpretation of such an interaction term is quite convoluted, but I walk through it here: Interpretation of interaction term.)
Update: To clarify my points, let's consider a canned example, coded in R.
d = data.frame(Sex =factor(rep(c("Male","Female"),times=2), levels=c("Male","Female")),
Race =factor(rep(c("White","Black"),each=2), levels=c("White","Black")),
y =c(1, 3, 5, 7))
d
# Sex Race y
# 1 Male White 1
# 2 Female White 3
# 3 Male Black 5
# 4 Female Black 7
The means of y for these categorical variables are:
aggregate(y~Sex, d, mean)
# Sex y
# 1 Male 3
# 2 Female 5
## i.e., the difference is 2
aggregate(y~Race, d, mean)
# Race y
# 1 White 2
# 2 Black 6
## i.e., the difference is 4
We can compare the differences between these means to the coefficients from a fitted model:
summary(lm(y~Sex+Race, d))
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 1 3.85e-16 2.60e+15 2.4e-16 ***
# SexFemale 2 4.44e-16 4.50e+15 < 2e-16 ***
# RaceBlack 4 4.44e-16 9.01e+15 < 2e-16 ***
# ...
# Warning message:
# In summary.lm(lm(y ~ Sex + Race, d)) :
# essentially perfect fit: summary may be unreliable
The thing to recognize about this situation is that, without an interaction term, we are assuming parallel lines. Thus, the Estimate for the (Intercept) is the mean of white males. The Estimate for SexFemale is the difference between the mean of females and the mean of males. The Estimate for RaceBlack is the difference between the mean of blacks and the mean of whites. Again, because a model without an interaction term assumes that the effects are strictly additive (the lines are strictly parallel), the mean of black females is then the mean of white males plus the difference between the mean of females and the mean of males plus the difference between the mean of blacks and the mean of whites.
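To make the additive reconstruction concrete, here is a minimal sketch in plain Python that rebuilds the four cell means from the three coefficients reported in the summary above (intercept 1, SexFemale 2, RaceBlack 4):

```python
# Coefficients from the lm() summary above (perfectly additive toy data).
beta0, b_female, b_black = 1.0, 2.0, 4.0

# Each cell mean is the intercept plus the applicable offsets.
cell_mean = {
    ("White", "Male"):   beta0,
    ("White", "Female"): beta0 + b_female,
    ("Black", "Male"):   beta0 + b_black,
    ("Black", "Female"): beta0 + b_female + b_black,
}
```

Because the toy data are perfectly additive, these reproduce y = 1, 3, 5, 7 exactly; with an interaction term the Black/Female cell would pick up an extra coefficient.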
|
Interpretation of betas when there are multiple categorical variables
|
You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the
|
Interpretation of betas when there are multiple categorical variables
You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the intercept ($\hat\beta_0$) is the mean of the group that constitutes the reference level for both (all) categorical variables. Using your example scenario, consider the case where there is no interaction, then the betas are:
$\hat\beta_0$: the mean of white males
$\hat\beta_{\rm Female}$: the difference between the mean of females and the mean of males
$\hat\beta_{\rm Black}$: the difference between the mean of blacks and the mean of whites
We can also think of this in terms of how to calculate the various group means:
\begin{align}
&\bar x_{\rm White\ Males}& &= \hat\beta_0 \\
&\bar x_{\rm White\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} \\
&\bar x_{\rm Black\ Males}& &= \hat\beta_0 + \hat\beta_{\rm Black} \\
&\bar x_{\rm Black\ Females}& &= \hat\beta_0 + \hat\beta_{\rm Female} + \hat\beta_{\rm Black}
\end{align}
If you had an interaction term, it would be added at the end of the equation for black females. (The interpretation of such an interaction term is quite convoluted, but I walk through it here: Interpretation of interaction term.)
Update: To clarify my points, let's consider a canned example, coded in R.
d = data.frame(Sex =factor(rep(c("Male","Female"),times=2), levels=c("Male","Female")),
Race =factor(rep(c("White","Black"),each=2), levels=c("White","Black")),
y =c(1, 3, 5, 7))
d
# Sex Race y
# 1 Male White 1
# 2 Female White 3
# 3 Male Black 5
# 4 Female Black 7
The means of y for these categorical variables are:
aggregate(y~Sex, d, mean)
# Sex y
# 1 Male 3
# 2 Female 5
## i.e., the difference is 2
aggregate(y~Race, d, mean)
# Race y
# 1 White 2
# 2 Black 6
## i.e., the difference is 4
We can compare the differences between these means to the coefficients from a fitted model:
summary(lm(y~Sex+Race, d))
# ...
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 1 3.85e-16 2.60e+15 2.4e-16 ***
# SexFemale 2 4.44e-16 4.50e+15 < 2e-16 ***
# RaceBlack 4 4.44e-16 9.01e+15 < 2e-16 ***
# ...
# Warning message:
# In summary.lm(lm(y ~ Sex + Race, d)) :
# essentially perfect fit: summary may be unreliable
The thing to recognize about this situation is that, without an interaction term, we are assuming parallel lines. Thus, the Estimate for the (Intercept) is the mean of white males. The Estimate for SexFemale is the difference between the mean of females and the mean of males. The Estimate for RaceBlack is the difference between the mean of blacks and the mean of whites. Again, because a model without an interaction term assumes that the effects are strictly additive (the lines are strictly parallel), the mean of black females is then the mean of white males plus the difference between the mean of females and the mean of males plus the difference between the mean of blacks and the mean of whites.
|
Interpretation of betas when there are multiple categorical variables
You are right about the interpretation of the betas when there is a single categorical variable with $k$ levels. If there were multiple categorical variables (and there were no interaction term), the
|
9,571
|
Interpretation of betas when there are multiple categorical variables
|
Actually as you correctly pointed out, in the case of a single categorical variable (with potentially more than 2 levels), $\hat{\beta}_0$ is indeed the mean of the reference and the other $\hat\beta$ are the difference between the mean of that level of the category and the mean of the reference.
If we extend your example a bit to include a third level of the race category (say Asian) and choose White as the reference, then you would have:
$\hat{\beta}_0 = \bar{x}_{White}$
$\hat{\beta}_{Black} = \bar{x}_{Black} - \bar{x}_{White}$
$\hat{\beta}_{Asian} = \bar{x}_{Asian} - \bar{x}_{White}$
In this case, the interpretation of all the $\hat{\beta}$ is easy and finding the mean of any level of the category is straightforward. For example:
$\bar{x}_{Asian} = \hat{\beta}_{Asian} + \hat{\beta}_0$
Unfortunately, in the case of multiple categorical variables, the correct interpretation of the intercept is no longer as clear (see note at the end). When there are $n$ categorical variables, each with multiple levels and one reference level (e.g. White and Male in your example), the general form of the intercept is:
$$\hat{\beta}_0 = \sum_{i=1}^{n}\bar{x}_{reference,i} - (n-1)\,\bar{x},$$
where
$$\bar{x}_{reference,i}\small{\text{ is the mean of the reference level of the i-th categorical variable,}}$$
$$\bar{x}\small{\text{ is the mean of the whole data set}}$$
The other $\hat\beta$ are the same as with a single category: they are the difference between the mean of that level of the category and the mean of the reference level of the same category.
If we go back to your example, we would get:
$\hat{\beta}_0 = \bar{x}_{White} + \bar{x}_{Male} - \bar{x}$
$\hat{\beta}_{Black} = \bar{x}_{Black} - \bar{x}_{White}$
$\hat{\beta}_{Asian} = \bar{x}_{Asian} - \bar{x}_{White}$
$\hat{\beta}_{Female} = \bar{x}_{Female} - \bar{x}_{Male}$
You will notice that the mean of the cross categories (e.g. White males) are not present in any of the $\hat\beta$. As a matter of fact, you cannot calculate these means precisely from the results of this type of regression.
The reason for this is that the number of predictor variables (i.e. the $\hat\beta$) is smaller than the number of cross categories (as long as you have more than one categorical variable), so a perfect fit is not always possible. If we go back to your example, the number of predictors is 4 (i.e. $\hat{\beta}_0, ~\hat{\beta}_{Black}, ~\hat{\beta}_{Asian}$ and $\hat{\beta}_{Female}$) while the number of cross categories is 6.
Numerical Example
Let me borrow from @Gung for a canned numerical example:
d = data.frame(Sex=factor(rep(c("Male","Female"),times=3), levels=c("Male","Female")),
Race =factor(rep(c("White","Black","Asian"),each=2),levels=c("White","Black","Asian")),
y =c(0, 3, 7, 8, 9, 10))
d
# Sex Race y
# 1 Male White 0
# 2 Female White 3
# 3 Male Black 7
# 4 Female Black 8
# 5 Male Asian 9
# 6 Female Asian 10
In this case, the various averages that will go in the calculation of the $\hat\beta$ are:
aggregate(y~1, d, mean)
# y
# 1 6.166667
aggregate(y~Sex, d, mean)
# Sex y
# 1 Male 5.333333
# 2 Female 7.000000
aggregate(y~Race, d, mean)
# Race y
# 1 White 1.5
# 2 Black 7.5
# 3 Asian 9.5
We can compare these numbers with the results of the regression:
summary(lm(y~Sex+Race, d))
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.6667 0.6667 1.000 0.4226
# SexFemale 1.6667 0.6667 2.500 0.1296
# RaceBlack 6.0000 0.8165 7.348 0.0180
# RaceAsian 8.0000 0.8165 9.798 0.0103
As you can see, the various $\hat\beta$ estimated from the regression all line up with the formulas given above. For example, $\hat\beta_0$ is given by:
$$\hat{\beta}_0 = \bar{x}_{White} + \bar{x}_{Male} - \bar{x}$$
Which gives:
1.5 + 5.333333 - 6.166667
# 0.66666
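The same check can be done directly from the raw data; a small sketch in plain Python, recomputing the group means rather than reading them off the `aggregate` output above:

```python
# The six observations (Sex, Race, y) from the numerical example above.
data = [
    ("Male", "White", 0), ("Female", "White", 3),
    ("Male", "Black", 7), ("Female", "Black", 8),
    ("Male", "Asian", 9), ("Female", "Asian", 10),
]

def mean(values):
    return sum(values) / len(values)

grand_mean = mean([y for _, _, y in data])
mean_white = mean([y for _, race, y in data if race == "White"])
mean_male = mean([y for sex, _, y in data if sex == "Male"])

# Intercept formula: mean of each reference level minus (n-1) grand means.
beta0 = mean_white + mean_male - grand_mean
```

This recovers the regression intercept of about 0.6667 from the group means alone.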
Note on the choice of contrast
A final note on this topic: all the results discussed above relate to categorical regressions using contrast treatment (the default type of contrast in R). There are different types of contrast that could be used (notably Helmert and sum), and each would change the interpretation of the various $\hat\beta$. However, it would not change the final predictions from the regressions (e.g. the prediction for White males is always the same no matter which type of contrast you use).
My personal favourite is contrast sum as I feel that the interpretation of the $\hat\beta^{contr.sum}$ generalises better when there are multiple categories. For this type of contrast, there is no reference level, or rather the reference is the mean of the whole sample, and you have the following $\hat\beta^{contr.sum}$:
$\hat\beta_0^{contr.sum}=\bar{x}$
$\hat\beta_i^{contr.sum}=\bar{x}_i-\bar{x}$
If we go back to the previous example, you would have:
$\hat{\beta}_0^{contr.sum} = \bar{x}$
$\hat{\beta}_{White}^{contr.sum} = \bar{x}_{White} - \bar{x}$
$\hat{\beta}_{Black}^{contr.sum} = \bar{x}_{Black} - \bar{x}$
$\hat{\beta}_{Asian}^{contr.sum} = \bar{x}_{Asian} - \bar{x}$
$\hat{\beta}_{Male}^{contr.sum} = \bar{x}_{Male} - \bar{x}$
$\hat{\beta}_{Female}^{contr.sum} = \bar{x}_{Female} - \bar{x}$
You will notice that because White and Male are no longer reference levels, their $\hat\beta^{contr.sum}$ are no longer 0. The fact that these are 0 is specific to contrast treatment.
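These deviations from the grand mean are easy to verify numerically; a minimal sketch in plain Python, reusing the six observations from the example above:

```python
# Sum-contrast coefficients: each level's mean minus the grand mean.
data = [
    ("Male", "White", 0), ("Female", "White", 3),
    ("Male", "Black", 7), ("Female", "Black", 8),
    ("Male", "Asian", 9), ("Female", "Asian", 10),
]

def level_mean(index, level):
    # index 0 selects on Sex, index 1 on Race.
    ys = [row[2] for row in data if row[index] == level]
    return sum(ys) / len(ys)

grand_mean = sum(row[2] for row in data) / len(data)  # beta0 under contr.sum
b_white = level_mean(1, "White") - grand_mean
b_male = level_mean(0, "Male") - grand_mean
```

Note that `b_white` and `b_male` are nonzero here, unlike under contrast treatment where the reference-level coefficients are fixed at 0.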
|
Interpretation of betas when there are multiple categorical variables
|
Actually as you correctly pointed out, in the case of a single categorical variable (with potentially more than 2 levels), $\hat{\beta}_0$ is indeed the mean of the reference and the other $\hat\beta
|
Interpretation of betas when there are multiple categorical variables
Actually as you correctly pointed out, in the case of a single categorical variable (with potentially more than 2 levels), $\hat{\beta}_0$ is indeed the mean of the reference and the other $\hat\beta$ are the difference between the mean of that level of the category and the mean of the reference.
If we extend your example a bit to include a third level of the race category (say Asian) and choose White as the reference, then you would have:
$\hat{\beta}_0 = \bar{x}_{White}$
$\hat{\beta}_{Black} = \bar{x}_{Black} - \bar{x}_{White}$
$\hat{\beta}_{Asian} = \bar{x}_{Asian} - \bar{x}_{White}$
In this case, the interpretation of all the $\hat{\beta}$ is easy and finding the mean of any level of the category is straightforward. For example:
$\bar{x}_{Asian} = \hat{\beta}_{Asian} + \hat{\beta}_0$
Unfortunately, in the case of multiple categorical variables, the correct interpretation of the intercept is no longer as clear (see note at the end). When there are $n$ categorical variables, each with multiple levels and one reference level (e.g. White and Male in your example), the general form of the intercept is:
$$\hat{\beta}_0 = \sum_{i=1}^{n}\bar{x}_{reference,i} - (n-1)\,\bar{x},$$
where
$$\bar{x}_{reference,i}\small{\text{ is the mean of the reference level of the i-th categorical variable,}}$$
$$\bar{x}\small{\text{ is the mean of the whole data set}}$$
The other $\hat\beta$ are the same as with a single category: they are the difference between the mean of that level of the category and the mean of the reference level of the same category.
If we go back to your example, we would get:
$\hat{\beta}_0 = \bar{x}_{White} + \bar{x}_{Male} - \bar{x}$
$\hat{\beta}_{Black} = \bar{x}_{Black} - \bar{x}_{White}$
$\hat{\beta}_{Asian} = \bar{x}_{Asian} - \bar{x}_{White}$
$\hat{\beta}_{Female} = \bar{x}_{Female} - \bar{x}_{Male}$
You will notice that the mean of the cross categories (e.g. White males) are not present in any of the $\hat\beta$. As a matter of fact, you cannot calculate these means precisely from the results of this type of regression.
The reason for this is that the number of predictor variables (i.e. the $\hat\beta$) is smaller than the number of cross categories (as long as you have more than one categorical variable), so a perfect fit is not always possible. If we go back to your example, the number of predictors is 4 (i.e. $\hat{\beta}_0, ~\hat{\beta}_{Black}, ~\hat{\beta}_{Asian}$ and $\hat{\beta}_{Female}$) while the number of cross categories is 6.
Numerical Example
Let me borrow from @Gung for a canned numerical example:
d = data.frame(Sex=factor(rep(c("Male","Female"),times=3), levels=c("Male","Female")),
Race =factor(rep(c("White","Black","Asian"),each=2),levels=c("White","Black","Asian")),
y =c(0, 3, 7, 8, 9, 10))
d
# Sex Race y
# 1 Male White 0
# 2 Female White 3
# 3 Male Black 7
# 4 Female Black 8
# 5 Male Asian 9
# 6 Female Asian 10
In this case, the various averages that will go in the calculation of the $\hat\beta$ are:
aggregate(y~1, d, mean)
# y
# 1 6.166667
aggregate(y~Sex, d, mean)
# Sex y
# 1 Male 5.333333
# 2 Female 7.000000
aggregate(y~Race, d, mean)
# Race y
# 1 White 1.5
# 2 Black 7.5
# 3 Asian 9.5
We can compare these numbers with the results of the regression:
summary(lm(y~Sex+Race, d))
# Coefficients:
# Estimate Std. Error t value Pr(>|t|)
# (Intercept) 0.6667 0.6667 1.000 0.4226
# SexFemale 1.6667 0.6667 2.500 0.1296
# RaceBlack 6.0000 0.8165 7.348 0.0180
# RaceAsian 8.0000 0.8165 9.798 0.0103
As you can see, the various $\hat\beta$ estimated from the regression all line up with the formulas given above. For example, $\hat\beta_0$ is given by:
$$\hat{\beta}_0 = \bar{x}_{White} + \bar{x}_{Male} - \bar{x}$$
Which gives:
1.5 + 5.333333 - 6.166667
# 0.66666
Note on the choice of contrast
A final note on this topic: all the results discussed above relate to categorical regressions using contrast treatment (the default type of contrast in R). There are different types of contrast that could be used (notably Helmert and sum), and each would change the interpretation of the various $\hat\beta$. However, it would not change the final predictions from the regressions (e.g. the prediction for White males is always the same no matter which type of contrast you use).
My personal favourite is contrast sum as I feel that the interpretation of the $\hat\beta^{contr.sum}$ generalises better when there are multiple categories. For this type of contrast, there is no reference level, or rather the reference is the mean of the whole sample, and you have the following $\hat\beta^{contr.sum}$:
$\hat\beta_0^{contr.sum}=\bar{x}$
$\hat\beta_i^{contr.sum}=\bar{x}_i-\bar{x}$
If we go back to the previous example, you would have:
$\hat{\beta}_0^{contr.sum} = \bar{x}$
$\hat{\beta}_{White}^{contr.sum} = \bar{x}_{White} - \bar{x}$
$\hat{\beta}_{Black}^{contr.sum} = \bar{x}_{Black} - \bar{x}$
$\hat{\beta}_{Asian}^{contr.sum} = \bar{x}_{Asian} - \bar{x}$
$\hat{\beta}_{Male}^{contr.sum} = \bar{x}_{Male} - \bar{x}$
$\hat{\beta}_{Female}^{contr.sum} = \bar{x}_{Female} - \bar{x}$
You will notice that because White and Male are no longer reference levels, their $\hat\beta^{contr.sum}$ are no longer 0. The fact that these are 0 is specific to contrast treatment.
|
Interpretation of betas when there are multiple categorical variables
Actually as you correctly pointed out, in the case of a single categorical variable (with potentially more than 2 levels), $\hat{\beta}_0$ is indeed the mean of the reference and the other $\hat\beta
|
9,572
|
What is the difference between logistic and logit regression?
|
The logit is a link function / a transformation of a parameter. It is the logarithm of the odds. If we call the parameter $\pi$, it is defined as follows:
$$
{\rm logit}(\pi) = \log\bigg(\frac{\pi}{1-\pi}\bigg)
$$
The logistic function is the inverse of the logit. If we have a value, $x$, the logistic is:
$$
{\rm logistic}(x) = \frac{e^x}{1+e^x}
$$
Thus (using matrix notation where $\boldsymbol X$ is an $N\times p$ matrix and $\boldsymbol\beta$ is a $p\times 1$ vector), logit regression is:
$$
\log\bigg(\frac{\pi}{1-\pi}\bigg) = \boldsymbol{X\beta}
$$
and logistic regression is:
$$
\pi = \frac{e^\boldsymbol{X\beta}}{1+e^\boldsymbol{X\beta}}
$$
For more information about these topics, it may help you to read my answer here: Difference between logit and probit models.
The odds of an event are the probability of the event divided by the probability of the event not occurring. Exponentiating the logit will give the odds. Likewise, you can get the odds by taking the output of the logistic and dividing it by 1 minus the logistic. That is:
$$
{\rm odds} = \exp({\rm logit}(\pi)) = \frac{{\rm logistic}(x)}{1-{\rm logistic}(x)}
$$
For more on probabilities and odds, and how logistic regression is related to them, it may help you to read my answer here: Interpretation of simple predictions to odds ratios in logistic regression.
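These relationships are straightforward to sanity-check numerically; a minimal sketch in plain Python (standard library only, no modeling package), showing that the logistic inverts the logit and that both routes give the same odds:

```python
import math

def logit(p):
    """Log-odds of a probability p in (0, 1)."""
    return math.log(p / (1 - p))

def logistic(x):
    """Inverse of the logit: maps a real number back into (0, 1)."""
    return math.exp(x) / (1 + math.exp(x))

p = 0.8
x = logit(p)                               # the log-odds of p
odds_a = math.exp(x)                       # exp(logit(p))
odds_b = logistic(x) / (1 - logistic(x))   # logistic route to the same odds
# logistic(logit(p)) recovers p, and both odds expressions agree (4 here,
# since 0.8 / 0.2 = 4).
```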
|
What is the difference between logistic and logit regression?
|
The logit is a link function / a transformation of a parameter. It is the logarithm of the odds. If we call the parameter $\pi$, it is defined as follows:
$$
{\rm logit}(\pi) = \log\bigg(\frac{\pi}{
|
What is the difference between logistic and logit regression?
The logit is a link function / a transformation of a parameter. It is the logarithm of the odds. If we call the parameter $\pi$, it is defined as follows:
$$
{\rm logit}(\pi) = \log\bigg(\frac{\pi}{1-\pi}\bigg)
$$
The logistic function is the inverse of the logit. If we have a value, $x$, the logistic is:
$$
{\rm logistic}(x) = \frac{e^x}{1+e^x}
$$
Thus (using matrix notation where $\boldsymbol X$ is an $N\times p$ matrix and $\boldsymbol\beta$ is a $p\times 1$ vector), logit regression is:
$$
\log\bigg(\frac{\pi}{1-\pi}\bigg) = \boldsymbol{X\beta}
$$
and logistic regression is:
$$
\pi = \frac{e^\boldsymbol{X\beta}}{1+e^\boldsymbol{X\beta}}
$$
For more information about these topics, it may help you to read my answer here: Difference between logit and probit models.
The odds of an event are the probability of the event divided by the probability of the event not occurring. Exponentiating the logit will give the odds. Likewise, you can get the odds by taking the output of the logistic and dividing it by 1 minus the logistic. That is:
$$
{\rm odds} = \exp({\rm logit}(\pi)) = \frac{{\rm logistic}(x)}{1-{\rm logistic}(x)}
$$
For more on probabilities and odds, and how logistic regression is related to them, it may help you to read my answer here: Interpretation of simple predictions to odds ratios in logistic regression.
|
What is the difference between logistic and logit regression?
The logit is a link function / a transformation of a parameter. It is the logarithm of the odds. If we call the parameter $\pi$, it is defined as follows:
$$
{\rm logit}(\pi) = \log\bigg(\frac{\pi}{
|
9,573
|
What is the difference between logistic and logit regression?
|
This answer applies for scikit-learn in python.
Both logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences between the two methods.
Logit from statsmodels provides more detailed statistical output, including p-values, confidence intervals, and goodness-of-fit measures such as the deviance and the likelihood ratio test. It also allows for more advanced modeling options, such as specifying offset terms, incorporating robust standard errors, and modeling hierarchical data structures.
LogisticRegression from scikit-learn, on the other hand, provides a more user-friendly interface and is better suited for large-scale machine learning applications. It allows for easy cross-validation, regularization, and feature selection, and is generally faster and more scalable than logit from statsmodels.
In this case, either logit or LogisticRegression could be used to fit the logistic regression model with the two indicator variables. The choice between the two methods may depend on the specific needs of the analysis, such as the desired level of statistical inference or the computational resources available.
|
What is the difference between logistic and logit regression?
|
This answer applies for scikit-learn in python.
Both logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences
|
What is the difference between logistic and logit regression?
This answer applies for scikit-learn in python.
Both logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences between the two methods.
Logit from statsmodels provides more detailed statistical output, including p-values, confidence intervals, and goodness-of-fit measures such as the deviance and the likelihood ratio test. It also allows for more advanced modeling options, such as specifying offset terms, incorporating robust standard errors, and modeling hierarchical data structures.
LogisticRegression from scikit-learn, on the other hand, provides a more user-friendly interface and is better suited for large-scale machine learning applications. It allows for easy cross-validation, regularization, and feature selection, and is generally faster and more scalable than logit from statsmodels.
In this case, either logit or LogisticRegression could be used to fit the logistic regression model with the two indicator variables. The choice between the two methods may depend on the specific needs of the analysis, such as the desired level of statistical inference or the computational resources available.
|
What is the difference between logistic and logit regression?
This answer applies for scikit-learn in python.
Both logit from statsmodels and LogisticRegression from scikit-learn can be used to fit logistic regression models. However, there are some differences
|
9,574
|
Is a vague prior the same as a non-informative prior?
|
Gelman et al. (2003) say:
there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'reference prior distributions' and the prior density is described as vague, flat, or noninformative.[emphasis from original text]
Based on my reading of the discussion of Jeffreys' prior in Gelman et al. (2003, p. 62ff), there is no consensus about the existence of a truly non-informative prior, and sufficiently vague/flat/diffuse priors are generally regarded as adequate.
Some of the points that they make:
Any prior includes information, including priors that state that no information is known.
For example, if we know that we know nothing about the parameter in question, then we know something about it.
In most applied contexts, there is no clear advantage to a truly non-informative prior when sufficiently vague priors suffice, and in many cases there are advantages - like finding a proper prior - to using a vague parameterization of a conjugate prior.
Jeffreys' principle can be useful to construct priors that minimize Fisher's information content in univariate models, but there is no analogue for the multivariate case
When comparing models, the Jeffreys' prior will vary with the distribution of the likelihood, so priors would also have to change
there has generally been a lot of debate about whether a non-informative prior even exists (because of 1, but also see discussion and references on p.66 in Gelman et al. for the history of this debate).
note this is community wiki - The underlying theory is at the limits of my understanding, and I would appreciate contributions to this answer.
Gelman et al. 2003 Bayesian Data Analysis, Chapman and Hall/CRC
|
Is a vague prior the same as a non-informative prior?
|
Gelman et al. (2003) say:
there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'r
|
Is a vague prior the same as a non-informative prior?
Gelman et al. (2003) say:
there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'reference prior distributions' and the prior density is described as vague, flat, or noninformative.[emphasis from original text]
Based on my reading of the discussion of Jeffreys' prior in Gelman et al. (2003, p.62ff), there is no consensus about the existence of a truly non-informative prior, and sufficiently vague/flat/diffuse priors are considered sufficient in practice.
Some of the points that they make:
Any prior includes information, including priors that state that no information is known.
For example, if we know that we know nothing about the parameter in question, then we know something about it.
In most applied contexts, there is no clear advantage to a truly non-informative prior when sufficiently vague priors suffice, and in many cases there are advantages - like finding a proper prior - to using a vague parameterization of a conjugate prior.
Jeffreys' principle can be useful to construct priors that minimize Fisher's information content in univariate models, but there is no analogue for the multivariate case
When comparing models, the Jeffreys' prior will vary with the distribution of the likelihood, so priors would also have to change
there has generally been a lot of debate about whether a non-informative prior even exists (because of 1, but also see discussion and references on p.66 in Gelman et al. for the history of this debate).
note this is community wiki - The underlying theory is at the limits of my understanding, and I would appreciate contributions to this answer.
Gelman et al. 2003 Bayesian Data Analysis, Chapman and Hall/CRC
|
Is a vague prior the same as a non-informative prior?
Gelman et al. (2003) say:
there has long been a desire for prior distributions that can be guaranteed to play a minimal role in the posterior distribution. Such distributions are sometimes called 'r
|
9,575
|
Is a vague prior the same as a non-informative prior?
|
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very informative prior on some other transformation $f(\theta)$. This is at least part of the motivation for Jeffreys' prior, which was initially constructed to be as non-informative as possible.
Vague priors can also do some pretty miserable things to your model. The now-classic example is using $\mathrm{InverseGamma}(\epsilon, \epsilon)$ as $\epsilon\rightarrow 0$ priors on variance components in a hierarchical model.
The improper limiting prior gives an improper posterior in this case. A popular alternative was to take $\epsilon$ to be really small, which results in a prior that looks almost uniform on $\mathbb{R}^+$. But it also results in a posterior that is almost improper, and model fitting and inferences suffered. See Gelman's Prior distributions for variance parameters in hierarchical models for a complete exposition.
Edit: @csgillespie (rightly!) points out that I haven't completely answered your question. To my mind a non-informative prior is one that is vague in the sense that it doesn't particularly favor one area of the parameter space over another, but in doing so it shouldn't induce informative priors on other parameters. So a non-informative prior is vague but a vague prior isn't necessarily noninformative. One example where this comes into play is Bayesian variable selection; a "vague" prior on variable inclusion probabilities can actually induce a pretty informative prior on the total number of variables included in the model!
It seems to me that the search for truly noninformative priors is quixotic (though many would disagree); better to use so-called "weakly" informative priors (which, I suppose, are generally vague in some sense). Really, how often do we know nothing about the parameter in question?
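The point that a vague prior on $\theta$ can induce a very informative prior on some transformation $f(\theta)$ is easy to see numerically. A minimal stdlib-Python sketch (the transformation $f(\theta) = 1/\theta$ is just an illustrative choice, not anything from the cited papers):

```python
import random

random.seed(0)

# A flat ("vague") prior on theta over (0, 1]: no value favoured over another.
thetas = [random.uniform(1e-9, 1.0) for _ in range(100_000)]

# The induced prior on f(theta) = 1/theta is anything but flat: its density
# is proportional to 1/g^2 on [1, inf), so half the mass lands in [1, 2).
g = [1.0 / t for t in thetas]
frac_below_2 = sum(1 for x in g if x < 2.0) / len(g)
print(frac_below_2)  # close to 0.5, since P(1/theta < 2) = P(theta > 0.5)
```

Half of the induced mass on $1/\theta$ piles up in $[1, 2)$, even though the prior on $\theta$ favoured no region at all.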
|
Is a vague prior the same as a non-informative prior?
|
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very
|
Is a vague prior the same as a non-informative prior?
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very informative prior on some other transformation $f(\theta)$. This is at least part of the motivation for Jeffreys' prior, which was initially constructed to be as non-informative as possible.
Vague priors can also do some pretty miserable things to your model. The now-classic example is using $\mathrm{InverseGamma}(\epsilon, \epsilon)$ as $\epsilon\rightarrow 0$ priors on variance components in a hierarchical model.
The improper limiting prior gives an improper posterior in this case. A popular alternative was to take $\epsilon$ to be really small, which results in a prior that looks almost uniform on $\mathbb{R}^+$. But it also results in a posterior that is almost improper, and model fitting and inferences suffered. See Gelman's Prior distributions for variance parameters in hierarchical models for a complete exposition.
Edit: @csgillespie (rightly!) points out that I haven't completely answered your question. To my mind a non-informative prior is one that is vague in the sense that it doesn't particularly favor one area of the parameter space over another, but in doing so it shouldn't induce informative priors on other parameters. So a non-informative prior is vague but a vague prior isn't necessarily noninformative. One example where this comes into play is Bayesian variable selection; a "vague" prior on variable inclusion probabilities can actually induce a pretty informative prior on the total number of variables included in the model!
It seems to me that the search for truly noninformative priors is quixotic (though many would disagree); better to use so-called "weakly" informative priors (which, I suppose, are generally vague in some sense). Really, how often do we know nothing about the parameter in question?
|
Is a vague prior the same as a non-informative prior?
Definitely not, although they are frequently used interchangeably. A vague prior (relatively uninformed, not really favoring some values over others) on a parameter $\theta$ can actually induce a very
|
9,576
|
Is a vague prior the same as a non-informative prior?
|
Lambert et al. (2005) raise the question "How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS". They write: "We do not advocate the use of the term non-informative prior distribution as we consider all priors to contribute some information". I tend to agree, but I am definitely no expert in Bayesian statistics.
|
Is a vague prior the same as a non-informative prior?
|
Lambert et al. (2005) raise the question "How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS". They write: "We do not advocate the use of
|
Is a vague prior the same as a non-informative prior?
Lambert et al. (2005) raise the question "How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS". They write: "We do not advocate the use of the term non-informative prior distribution as we consider all priors to contribute some information". I tend to agree, but I am definitely no expert in Bayesian statistics.
|
Is a vague prior the same as a non-informative prior?
Lambert et al. (2005) raise the question "How Vague is Vague? A simulation study of the impact of the use of vague prior distributions in MCMC using WinBUGS". They write: "We do not advocate the use of
|
9,577
|
Is a vague prior the same as a non-informative prior?
|
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be used to mean complete ignorance regarding the value of that parameter. It would perhaps be used to show that the analysis was not completely objective.
For example a very broad Gaussian might be a vague prior for a parameter where a non-informative prior would be uniform. The Gaussian would be very nearly flat on the scale of interest, but would nevertheless favour one particular value a bit more than any other (but it might make the problem more mathematically tractable).
|
Is a vague prior the same as a non-informative prior?
|
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be
|
Is a vague prior the same as a non-informative prior?
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be used to mean complete ignorance regarding the value of that parameter. It would perhaps be used to show that the analysis was not completely objective.
For example a very broad Gaussian might be a vague prior for a parameter where a non-informative prior would be uniform. The Gaussian would be very nearly flat on the scale of interest, but would nevertheless favour one particular value a bit more than any other (but it might make the problem more mathematically tractable).
|
Is a vague prior the same as a non-informative prior?
I suspect "vague prior" is used to mean a prior that is known to encode some small, but non-zero amount of knowledge regarding the true value of a parameter, whereas a "non-informative prior" would be
|
9,578
|
Is a vague prior the same as a non-informative prior?
|
Non-informative priors take different forms, including vague priors and improper priors. So a vague prior is one kind of non-informative prior.
|
Is a vague prior the same as a non-informative prior?
|
Non-informative priors take different forms, including vague priors and improper priors. So a vague prior is one kind of non-informative prior.
|
Is a vague prior the same as a non-informative prior?
Non-informative priors take different forms, including vague priors and improper priors. So a vague prior is one kind of non-informative prior.
|
Is a vague prior the same as a non-informative prior?
Non-informative priors take different forms, including vague priors and improper priors. So a vague prior is one kind of non-informative prior.
|
9,579
|
What are correct values for precision and recall in edge cases?
|
Given a confusion matrix:
predicted
(+) (-)
---------
(+) | TP | FN |
actual ---------
(-) | FP | TN |
---------
we know that:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Let's consider the cases where the denominator is zero:
TP+FN=0 : means that there were no positive cases in the input data
TP+FP=0 : means that all instances were predicted as negative
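For concreteness, here is a small Python sketch of those formulas, using one common convention for the zero-denominator cases (returning 1.0 when no errors of the relevant kind were made; this convention is an assumption, not the only possible choice):

```python
def precision_recall(tp, fp, fn):
    """Precision and recall from confusion-matrix counts, with one common
    convention for the zero-denominator edge cases: return 1.0 when no
    errors of the relevant kind were made."""
    precision = tp / (tp + fp) if (tp + fp) > 0 else 1.0
    recall = tp / (tp + fn) if (tp + fn) > 0 else 1.0
    return precision, recall

print(precision_recall(8, 2, 4))  # (0.8, 0.666...): the ordinary case
print(precision_recall(0, 0, 0))  # (1.0, 1.0): no positives existed or were predicted
print(precision_recall(0, 3, 0))  # (0.0, 1.0): only spurious positives
```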
|
What are correct values for precision and recall in edge cases?
|
Given a confusion matrix:
predicted
(+) (-)
---------
(+) | TP | FN |
actual ---------
(-) | FP | TN |
---------
we know that:
Pre
|
What are correct values for precision and recall in edge cases?
Given a confusion matrix:
predicted
(+) (-)
---------
(+) | TP | FN |
actual ---------
(-) | FP | TN |
---------
we know that:
Precision = TP / (TP + FP)
Recall = TP / (TP + FN)
Let's consider the cases where the denominator is zero:
TP+FN=0 : means that there were no positive cases in the input data
TP+FP=0 : means that all instances were predicted as negative
|
What are correct values for precision and recall in edge cases?
Given a confusion matrix:
predicted
(+) (-)
---------
(+) | TP | FN |
actual ---------
(-) | FP | TN |
---------
we know that:
Pre
|
9,580
|
What are correct values for precision and recall in edge cases?
|
The answer is yes. The undefined edge cases occur when true positives (TP) are 0, since TP appears in the denominator of both P and R. In this case,
Recall = 1 when FN=0, since 100% of the TP were discovered
Precision = 1 when FP=0, since there were no spurious results
This is a reformulation of @mbq's comment.
|
What are correct values for precision and recall in edge cases?
|
The answer is yes. The undefined edge cases occur when true positives (TP) are 0, since TP appears in the denominator of both P and R. In this case,
Recall = 1 when FN=0, since 100% of the TP were discovered
|
What are correct values for precision and recall in edge cases?
The answer is yes. The undefined edge cases occur when true positives (TP) are 0, since TP appears in the denominator of both P and R. In this case,
Recall = 1 when FN=0, since 100% of the TP were discovered
Precision = 1 when FP=0, since there were no spurious results
This is a reformulation of @mbq's comment.
|
What are correct values for precision and recall in edge cases?
The answer is yes. The undefined edge cases occur when true positives (TP) are 0, since TP appears in the denominator of both P and R. In this case,
Recall = 1 when FN=0, since 100% of the TP were discovered
|
9,581
|
What are correct values for precision and recall in edge cases?
|
I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens):
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
In the case of sensitivity (recall), if the denominator is zero (as Amro points out), there are NO positive cases, so the classification is meaningless. (That does not stop either TP or FN being zero, which would result in a limiting sensitivity of 1 or 0. These points are respectively at the top right and bottom left hand corners of the ROC curve - TPR = 1 and TPR = 0.)
The limit of PPV is meaningful though. It is possible for the test cut-off to be set so high (or low) so that all cases are predicted as negative. This is at the origin of the ROC curve. The limiting value of the PPV just before the cutoff reaches the origin can be estimated by considering the final segment of the ROC curve just before the origin. (This may be better to model as ROC curves are notoriously noisy.)
For example, if there are 100 actual positives and 100 actual negatives and the final segment of the ROC curve approaches the origin from TPR = 0.08, FPR = 0.02, then the limiting PPV would be PPV ~ 0.08*100/(0.08*100 + 0.02*100) = 8/10 = 0.8, i.e. an 80% probability of being a true positive.
In practice each sample is represented by a segment on the ROC curve - horizontal for an actual negative and vertical for an actual positive. One could estimate the limiting PPV by the very last segment before the origin, but that would give an estimated limiting PPV of 1, 0 or 0.5, depending on whether the last sample was a true positive, a false positive (actual negative), or a segment made up of equal numbers of true and false positives. A modelling approach would be better, perhaps assuming the data are binormal - a common assumption, e.g.:
http://mdm.sagepub.com/content/8/3/197.short
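The arithmetic of the worked example above can be packaged into a small, hypothetical helper (the function name and interface are mine, not from any library):

```python
def limiting_ppv(tpr, fpr, n_pos, n_neg):
    """Hypothetical helper: limiting PPV estimated from the final ROC segment.

    On that segment TP ~ tpr * n_pos and FP ~ fpr * n_neg, so
    PPV = TP / (TP + FP).
    """
    tp = tpr * n_pos
    fp = fpr * n_neg
    return tp / (tp + fp)

# The worked example above: 100 actual positives, 100 actual negatives,
# final segment approaching the origin from (FPR, TPR) = (0.02, 0.08).
print(limiting_ppv(tpr=0.08, fpr=0.02, n_pos=100, n_neg=100))  # ~0.8
```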
|
What are correct values for precision and recall in edge cases?
|
I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens):
http://en.wikipedia.org/wiki/Rece
|
What are correct values for precision and recall in edge cases?
I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens):
http://en.wikipedia.org/wiki/Receiver_operating_characteristic
In the case of sensitivity (recall), if the denominator is zero (as Amro points out), there are NO positive cases, so the classification is meaningless. (That does not stop either TP or FN being zero, which would result in a limiting sensitivity of 1 or 0. These points are respectively at the top right and bottom left hand corners of the ROC curve - TPR = 1 and TPR = 0.)
The limit of PPV is meaningful though. It is possible for the test cut-off to be set so high (or low) so that all cases are predicted as negative. This is at the origin of the ROC curve. The limiting value of the PPV just before the cutoff reaches the origin can be estimated by considering the final segment of the ROC curve just before the origin. (This may be better to model as ROC curves are notoriously noisy.)
For example, if there are 100 actual positives and 100 actual negatives and the final segment of the ROC curve approaches the origin from TPR = 0.08, FPR = 0.02, then the limiting PPV would be PPV ~ 0.08*100/(0.08*100 + 0.02*100) = 8/10 = 0.8, i.e. an 80% probability of being a true positive.
In practice each sample is represented by a segment on the ROC curve - horizontal for an actual negative and vertical for an actual positive. One could estimate the limiting PPV by the very last segment before the origin, but that would give an estimated limiting PPV of 1, 0 or 0.5, depending on whether the last sample was a true positive, a false positive (actual negative), or a segment made up of equal numbers of true and false positives. A modelling approach would be better, perhaps assuming the data are binormal - a common assumption, e.g.:
http://mdm.sagepub.com/content/8/3/197.short
|
What are correct values for precision and recall in edge cases?
I am familiar with different terminology. What you call precision I would call positive predictive value (PPV), and what you call recall I would call sensitivity (Sens):
http://en.wikipedia.org/wiki/Rece
|
9,582
|
What are correct values for precision and recall in edge cases?
|
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not necessarily.
|
What are correct values for precision and recall in edge cases?
|
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not neces
|
What are correct values for precision and recall in edge cases?
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not necessarily.
|
What are correct values for precision and recall in edge cases?
That would depend on what you mean by "approach 0". If false positives and false negatives both approach zero at a faster rate than true positives, then yes to both questions. But otherwise, not neces
|
9,583
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
Short answer is NO.
The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multiplications, element-wise additions and mapping functions). A neural network sees a tensor as its input (i.e. a multi-dimensional array). Its shape is usually 4-D (number of images per batch, image height, image width, number of channels).
Different image formats (especially lossy ones) may produce different input arrays but strictly speaking neural nets see arrays in their input, and NOT images.
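A minimal NumPy sketch of what the network actually receives; the `jpg_like` array below is hand-made noise used as a stand-in for lossy-compression error, not a real JPEG decode:

```python
import numpy as np

# Whatever container the image came from (PNG, JPG, GIF, ...), by the time it
# reaches the network it has been decoded into a plain array of pixel values.
# A typical input batch is a 4-D tensor:
batch = np.zeros((32, 224, 224, 3), dtype=np.float32)  # (N, H, W, C)

# Fake the effect of a lossy decode (e.g. JPEG) by perturbing one "image"
# slightly -- this is hand-made noise, NOT a real JPEG round-trip.
rng = np.random.default_rng(0)
png_like = rng.uniform(0.0, 1.0, size=(224, 224, 3)).astype(np.float32)
jpg_like = np.clip(png_like + rng.normal(0.0, 0.01, size=png_like.shape), 0.0, 1.0)

# Same shape, slightly different values: the network only ever sees arrays,
# never "formats".
print(batch.shape, png_like.shape, jpg_like.shape)
```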
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
Short answer is NO.
The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multipli
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
Short answer is NO.
The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multiplications, element-wise additions and mapping functions). A neural network sees a tensor as its input (i.e. a multi-dimensional array). Its shape is usually 4-D (number of images per batch, image height, image width, number of channels).
Different image formats (especially lossy ones) may produce different input arrays but strictly speaking neural nets see arrays in their input, and NOT images.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
Short answer is NO.
The format in which the image is encoded has to do with its quality. Neural networks are essentially mathematical models that perform lots and lots of operations (matrix multipli
|
9,584
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
While Djib2011's answer is correct, I understand your question as being more focused on how image quality/properties affect neural network learning in general.
There is only a little research on this topic (AFAIK), but there may be more in the future. I only found this article on it.
The problem at the moment is that this is more a problem appearing in practical applications than in academic research. I remember a recent podcast where researchers observed that even the camera used to take a picture could have a big effect.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
While Djib2011 answer is correct, I understand your question as more focused on how the image quality/properties affect neural network learning in general.
There is only little research in this topic
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While Djib2011's answer is correct, I understand your question as being more focused on how image quality/properties affect neural network learning in general.
There is only a little research on this topic (AFAIK), but there may be more in the future. I only found this article on it.
The problem at the moment is that this is more a problem appearing in practical applications than in academic research. I remember a recent podcast where researchers observed that even the camera used to take a picture could have a big effect.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While Djib2011 answer is correct, I understand your question as more focused on how the image quality/properties affect neural network learning in general.
There is only little research in this topic
|
9,585
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer: first, photos are always encoded as a tensor, as follows. An image is a grid of pixels. If the photo has m rows and n columns, each pixel is specified by its row and column location, that is, by a pair (i, j) with 1 <= i <= m and 1 <= j <= n. In particular there are m*n pixels, which is very large even for 'small' photos. Each pixel of the photo is encoded by a number between zero and one (blackness intensity) if the photo is black and white. It is encoded by three numbers (RGB intensities) if the photo is color. So one winds up with a tensor that is either 1xmxn or 3xmxn. Image recognition is done through CNNs which, taking advantage of the fact that photos don't change much from pixel to pixel, compress the data via filters and pooling. So the point is that CNNs work by compressing the incredibly large number of data points (or features) of a photo into a smaller number of values. So whatever format you start with, CNNs start off by further compressing the data of the photo. Hence the independence, per se, from the size of the representation of the photo.
However, a CNN will demand that all images being run through it are all of the same size. So there is that dependency that will change depending on how the image is saved. In addition, to the extent that different file formats of the same size produce different values for their tensors, one cannot use the same CNN model to identify photos stored by different methods.
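The compression step described above can be sketched with a toy 2x2 max-pooling in NumPy (a deliberately simplified stand-in for what a real CNN layer does):

```python
import numpy as np

def max_pool_2x2(img):
    """2x2 max pooling: keep only the largest value in each 2x2 block.

    A toy version of the compression step described above -- neighbouring
    pixels change little, so one value can stand in for four.
    """
    m, n = img.shape
    blocks = img[:m - m % 2, :n - n % 2].reshape(m // 2, 2, n // 2, 2)
    return blocks.max(axis=(1, 3))

img = np.arange(16, dtype=float).reshape(4, 4)
pooled = max_pool_2x2(img)
print(img.shape, "->", pooled.shape)  # (4, 4) -> (2, 2)
```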
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer: first, photos are always encoded as a tensor, as follows. An image is a grid of pixels. If the photo has m rows and n columns, each pixel is specified by its row and column location, that is, by a pair (i, j) with 1 <= i <= m and 1 <= j <= n. In particular there are m*n pixels, which is very large even for 'small' photos. Each pixel of the photo is encoded by a number between zero and one (blackness intensity) if the photo is black and white. It is encoded by three numbers (RGB intensities) if the photo is color. So one winds up with a tensor that is either 1xmxn or 3xmxn. Image recognition is done through CNNs which, taking advantage of the fact that photos don't change much from pixel to pixel, compress the data via filters and pooling. So the point is that CNNs work by compressing the incredibly large number of data points (or features) of a photo into a smaller number of values. So whatever format you start with, CNNs start off by further compressing the data of the photo. Hence the independence, per se, from the size of the representation of the photo.
However, a CNN will demand that all images being run through it are all of the same size. So there is that dependency that will change depending on how the image is saved. In addition, to the extent that different file formats of the same size produce different values for their tensors, one cannot use the same CNN model to identify photos stored by different methods.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
This is a riff on the first answer from Djib2011. The short answer has to be no. Longer - Firstly photos are always encoded as a tensor as follows. An image is a number of pixels. If the photo is
|
9,586
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
While changes of camera or image compression after training can be severe, if they are the same at training and test time the problem is much smaller. Of course performance drops with noisier images, but I have never heard that standard JPEG compression makes a big difference. It will depend on the application, though.
If you change things after training, it very much depends. E.g. for some networks changing the resolution doesn't work at all; for others it's possible. It's very network-specific. In general any change (even lens, lighting, background, etc.) needs to be evaluated and, from a theoretical perspective, needs to be included in the training.
In general it's not a good idea to have training data that is qualitatively different. If you want to classify both PNG and JPG, then it would be best to also train on both. The same goes for other image properties.
A CNN cannot extrapolate; it usually just works within the training-set space. Other models, such as rule-based models, can do that.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
|
While changes in camera or image compression after training can be severe, if it is the same, the problem is much less. Of course with more noisy images the performance is less, but I never heard that
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While changes of camera or image compression after training can be severe, if they are the same at training and test time the problem is much smaller. Of course performance drops with noisier images, but I have never heard that standard JPEG compression makes a big difference. It will depend on the application, though.
If you change things after training, it very much depends. E.g. for some networks changing the resolution doesn't work at all; for others it's possible. It's very network-specific. In general any change (even lens, lighting, background, etc.) needs to be evaluated and, from a theoretical perspective, needs to be included in the training.
In general it's not a good idea to have training data that is qualitatively different. If you want to classify both PNG and JPG, then it would be best to also train on both. The same goes for other image properties.
A CNN cannot extrapolate; it usually just works within the training-set space. Other models, such as rule-based models, can do that.
|
Does the image format (png, jpg, gif) affect how an image recognition neural net is trained?
While changes in camera or image compression after training can be severe, if it is the same, the problem is much less. Of course with more noisy images the performance is less, but I never heard that
|
9,587
|
In what order should you do linear regression diagnostics?
|
The process is iterative, but there is a natural order:
You have to worry first about conditions that cause outright numerical errors. Multicollinearity is one of those, because it can produce unstable systems of equations potentially resulting in outright incorrect answers (to 16 decimal places...) Any problem here usually means you cannot proceed until it is fixed. Multicollinearity is usually diagnosed using Variance Inflation Factors and similar examination of the "hat matrix." Additional checks at this stage can include assessing the influence of any missing values in the dataset and verifying the identifiability of important parameters. (Missing combinations of discrete independent variables can sometimes cause trouble here.)
Next you need to be concerned whether the output reflects most of the data or is sensitive to a small subset. In the latter case, everything else you subsequently do may be misleading, so it is to be avoided. Procedures include examination of outliers and of leverage. (A high-leverage datum might not be an outlier but even so it may unduly influence all the results.) If a robust alternative to the regression procedure exists, this is a good time to apply it: check that it is producing similar results and use it to detect outlying values.
Finally, having achieved a situation that is numerically stable (so you can trust the computations) and which reflects the full dataset, you turn to an examination of the statistical assumptions needed for correct interpretation of the output. Primarily these concerns focus--in rough order of importance--on distributions of the residuals (including heteroscedasticity, but also extending to symmetry, distributional shape, possible correlation with predicted values or other variables, and autocorrelation), goodness of fit (including the possible need for interaction terms), whether to re-express the dependent variable, and whether to re-express the independent variables.
At any stage, if something needs to be corrected then it's wise to return to the beginning. Repeat as many times as necessary.
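As a concrete illustration of the first step, here is a minimal multicollinearity check sketched in plain Python (hypothetical data; with exactly two predictors, the $R^2$ from regressing one predictor on the other is just their squared correlation, so $\mathrm{VIF} = 1/(1-r^2)$):

```python
import math

def pearson_r(x, y):
    # Sample Pearson correlation between two equal-length sequences.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

def vif_two_predictors(x1, x2):
    # With two predictors, the R^2 from regressing x1 on x2 is r^2,
    # so the variance inflation factor reduces to 1 / (1 - r^2).
    r = pearson_r(x1, x2)
    return 1.0 / (1.0 - r ** 2)

x1 = [1.0, 2.0, 3.0, 4.0, 5.0]
x2 = [2.1, 3.9, 6.2, 8.1, 9.8]     # nearly collinear with x1
print(vif_two_predictors(x1, x2))  # far above the common warning threshold of 10
```

A VIF well above 10 (some authors use 5) is the usual signal that a predictor is nearly a linear combination of the others; with more than two predictors you would regress each predictor on all the rest to get its $R^2$.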
In what order should you do linear regression diagnostics?
I think it depends on the situation. If you don't expect any particular problems, you can probably check these in any order. If you expect outliers and might have a reason to remove them after detecting them, then check for outliers first. The other issues with the model could change after observations are removed. After that, the order between multicollinearity and heteroscedasticity doesn't matter. I agree with Chris that outliers should not be removed arbitrarily; you need to have a reason to think the observations are wrong.
Of course, if you observe multicollinearity or heteroscedasticity, you may need to change your approach. Signs of multicollinearity show up in the covariance matrix, but there are specific diagnostic tests for detecting multicollinearity and other problems such as leverage points; see the Regression Diagnostics book by Belsley, Kuh and Welsch or one of Dennis Cook's regression books.
How to determine quantiles (isolines?) of a multivariate normal distribution
The contour line is an ellipsoid. The reason is that the isolines are determined by the argument of the exponential in the pdf of the multivariate normal distribution: the isolines are the curves along which that argument is constant. Then you get
$$
({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = c
$$
where $\Sigma$ is the covariance matrix. That is exactly the equation of an ellipse; in the simplest case, $\mu=(0,0)$ and $\Sigma$ is diagonal, so you get
$$
\left(\frac{x}{\sigma_x}\right)^2+\left(\frac{y}{\sigma_y}\right)^2=c
$$
If $\Sigma$ is not diagonal, diagonalizing you get the same result.
Now, you would have to integrate the pdf of the multivariate normal inside (or outside) the ellipse and require that this equals the quantile you want. Let's say that your quantiles are not the usual ones, but elliptical in principle (i.e. you are looking for the Highest Density Region, HDR, as Tim's answer points out). I would change variables in the pdf to $z^2=(x/\sigma_x)^2+(y/\sigma_y)^2$, integrate over the angle, and then over $z$ from $0$ to $\sqrt{c}$
$$
1-\alpha=\int_0^{\sqrt{c}}\frac{z\,e^{-z^2/2}}{2\pi}\,dz\int_0^{2\pi}d\theta=\int_0^{\sqrt{c}}z\,e^{-z^2/2}\,dz
$$
Then you substitute $s=-z^2/2$:
$$
\int_0^{\sqrt{c}}z\,e^{-z^2/2}\,dz=\int_{-c/2}^{0}e^s\,ds=1-e^{-c/2}
$$
Setting $1-e^{-c/2}=1-\alpha$ gives $c=-2\ln\alpha$. So, in principle, you have to look for the ellipse centered at $\mu$, with axes along the eigenvectors of $\Sigma$ and effective squared radius $-2\ln\alpha$:
$$
({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu) = -2\ln{\alpha}
$$
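A quick Monte Carlo sanity check of this result (plain Python; for the standard bivariate normal with $\mu=0$ and $\Sigma=I$, the ellipse reduces to the circle $x^2+y^2 \le -2\ln\alpha$):

```python
import math
import random

def coverage(alpha, n=200_000, seed=0):
    # Fraction of standard bivariate normal draws falling inside the
    # circle x^2 + y^2 <= c with c = -2*ln(alpha); should be ~ 1 - alpha.
    rng = random.Random(seed)
    c = -2.0 * math.log(alpha)
    inside = sum(rng.gauss(0, 1) ** 2 + rng.gauss(0, 1) ** 2 <= c
                 for _ in range(n))
    return inside / n

print(coverage(0.05))  # close to 0.95
```

The same check works for a general $\mu$ and $\Sigma$ after whitening the samples, i.e. comparing the quadratic form $({\bf x}-\mu)^T\Sigma^{-1}({\bf x}-\mu)$ to $-2\ln\alpha$.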
How to determine quantiles (isolines?) of a multivariate normal distribution
You asked about the multivariate normal, but started your question by asking about the "quantile of a multivariate distribution" in general. From the wording of your question and the example provided, it seems that you are interested in highest density regions. They are defined by Hyndman (1996) as follows:
Let $f(x)$ be the density function of a random variable $X$. Then the $100(1-\alpha)\%$ HDR is the subset $R(f_\alpha)$
of the sample space of $X$ such that
$$ R(f_\alpha) = \{ x : f(x) \geq f_\alpha\}$$
where $f_\alpha$ is the largest constant such that $\Pr(X \in
R(f_\alpha)) \geq 1 - \alpha$.
HDRs can be obtained by integration but, as described by Hyndman, you can do it using a simpler numerical method. If $Y = f(X)$, then you can obtain $f_\alpha$ such that $\Pr(f(X) \geq f_\alpha) \geq 1 - \alpha$ simply by taking the $\alpha$ quantile of $Y$. It can be estimated using sample quantiles from a set of observations $y_1,\dots,y_m$. The method applies even if we do not know $f(x)$ but only have a set of i.i.d. observations, and it also works for multimodal distributions.
Hyndman, R.J. (1996). Computing and graphing highest density regions. The American Statistician, 50(2), 120-126.
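Hyndman's numerical shortcut is easy to sketch in plain Python. Taking a standard normal as an illustrative known density (its 95% HDR is approximately $[-1.96, 1.96]$, so the estimated threshold $f_\alpha$ should be close to $f(1.96)$):

```python
import math
import random

def normal_pdf(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def hdr_threshold(samples, alpha):
    # Hyndman's method: f_alpha is the alpha sample quantile of Y = f(X).
    ys = sorted(normal_pdf(x) for x in samples)
    return ys[int(alpha * len(ys))]

rng = random.Random(1)
samples = [rng.gauss(0, 1) for _ in range(100_000)]
f_alpha = hdr_threshold(samples, 0.05)
print(f_alpha, normal_pdf(1.96))  # the two values should nearly agree
```

With real data you would replace `normal_pdf` by a density estimate (e.g. a kernel density) evaluated at the observations; the quantile step stays the same.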
How to determine quantiles (isolines?) of a multivariate normal distribution
The correct answer should be $-2\ln(\alpha)$. There was a mistake in the calculation above; the corrected version is:
$$
\int_0^\sqrt{c} z e^{-z^2/2} =\int_{-c/2}^0e^sds=(1-e^{-c/2})
$$
How to determine quantiles (isolines?) of a multivariate normal distribution
You could draw ellipses corresponding to Mahalanobis distances.
library(chemometrics)   # provides drawMahal() and the glass data
library(robustbase)     # provides the robust MCD estimator covMcd()
data(glass)
data(glass.grp)
x <- glass[, c(2, 7)]   # two of the glass composition variables
x.mcd <- covMcd(x)      # robust estimates of center and covariance
drawMahal(x, center = x.mcd$center, covariance = x.mcd$cov, quantile = 0.90)
Or with ellipses enclosing roughly 95%, 75%, and 50% of the data:
drawMahal(x, center = x.mcd$center, covariance = x.mcd$cov, quantile = c(0.95, 0.75, 0.5))
How do I study the "correlation" between a continuous variable and a categorical variable?
For a moment, let's ignore the continuous/discrete issue. Basically correlation measures the strength of the linear relationship between variables, and you seem to be asking for an alternative way to measure the strength of the relationship. You might be interested in looking at some ideas from information theory. Specifically I think you might want to look at mutual information. Mutual information essentially gives you a way to quantify how much knowing the state of one variable tells you about the other variable. I actually think this definition is closer to what most people mean when they think about correlation.
For two discrete variables X and Y, the calculation is as follows: $$I(X;Y) = \sum_{y \in Y} \sum_{x \in X}
p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)}
\right) }$$
For two continuous variables we integrate rather than taking the sum: $$I(X;Y) = \int_Y \int_X
p(x,y) \log{ \left(\frac{p(x,y)}{p(x)\,p(y)}
\right) } \; dx \,dy$$
Your particular use case is one discrete and one continuous variable. Rather than mixing a sum over one variable with an integral over the other, it is usually easier to convert one of the variables into the other type. A typical way to do that is to discretize your continuous variable into discrete bins.
There are a number of ways to discretize data (e.g. equal intervals), and I believe the entropy package should be helpful for the MI calculations if you want to use R.
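A minimal sketch of that recipe in plain Python (equal-width binning plus a plug-in mutual-information estimate in nats; the data below are hypothetical):

```python
import math
from collections import Counter

def mutual_information(xs, ys):
    # Plug-in estimate of I(X;Y), in nats, from paired discrete observations.
    n = len(xs)
    pxy = Counter(zip(xs, ys))
    px, py = Counter(xs), Counter(ys)
    return sum((c / n) * math.log(c * n / (px[x] * py[y]))
               for (x, y), c in pxy.items())

def discretize(values, n_bins=4):
    # Equal-width binning of a continuous variable.
    lo, hi = min(values), max(values)
    width = (hi - lo) / n_bins or 1.0
    return [min(int((v - lo) / width), n_bins - 1) for v in values]

labels = [0] * 50 + [1] * 50                    # the categorical variable
values = [float(i) for i in range(50)] + \
         [float(100 + i) for i in range(50)]    # the continuous variable
mi = mutual_information(discretize(values), labels)
print(mi)  # equals ln(2) here, since the label is fully determined by the bin
```

The plug-in estimate is biased upward for small samples, which is one reason a dedicated package (such as R's entropy) with bias-corrected estimators is preferable in practice.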
How do I study the "correlation" between a continuous variable and a categorical variable?
If the categorical variable is ordinal and you bin the continuous variable into a few frequency intervals, you can use Gamma. Also available for paired data put into ordinal form are Kendall's tau, Stuart's tau, and Somers' D. These are all available in SAS using Proc Freq. I don't know how they are computed using R functions. Here is a link to a presentation that gives detailed information:
http://faculty.unlv.edu/cstream/ppts/QM722/measuresofassociation.ppt#260,5,Measures of Association for Nominal and Ordinal Variables
How do I study the "correlation" between a continuous variable and a categorical variable?
A categorical variable is effectively just a set of indicator variables. It is a basic idea of measurement theory that such a variable is invariant to relabelling of the categories, so it does not make sense to use the numerical labelling of the categories in any measure of its relationship with another variable (e.g., 'correlation'). For this reason, any measure of the relationship between a continuous variable and a categorical variable should be based entirely on the indicator variables derived from the latter.
Given that you want a measure of 'correlation' between the two variables, it makes sense to look at the correlation between a continuous random variable $X$ and an indicator random variable $I$ derived from the categorical variable. Letting $\phi \equiv \mathbb{P}(I=1)$ we have:
$$\mathbb{Cov}(I,X) = \mathbb{E}(IX) - \mathbb{E}(I) \mathbb{E}(X) = \phi \left[ \mathbb{E}(X|I=1) - \mathbb{E}(X) \right] ,$$
which gives:
$$\mathbb{Corr}(I,X) = \sqrt{\frac{\phi}{1-\phi}} \cdot \frac{\mathbb{E}(X|I=1) - \mathbb{E}(X)}{\mathbb{S}(X)} .$$
So the correlation between a continuous random variable $X$ and an indicator random variable $I$ is a fairly simple function of the indicator probability $\phi$ and the standardised gain in expected value of $X$ from conditioning on $I=1$. Note that this correlation does not require any discretization of the continuous random variable.
For a general categorical variable $C$ with range $1, ..., m$ you would then just extend this idea to have a vector of correlation values for each outcome of the categorical variable. For any outcome $C=k$ we can define the corresponding indicator $I_k \equiv \mathbb{I}(C=k)$ and we have:
$$\mathbb{Corr}(I_k,X) = \sqrt{\frac{\phi_k}{1-\phi_k}} \cdot \frac{\mathbb{E}(X|C=k) - \mathbb{E}(X)}{\mathbb{S}(X)} .$$
We can then define $\mathbb{Corr}(C,X) \equiv (\mathbb{Corr}(I_1,X), ..., \mathbb{Corr}(I_m,X))$ as the vector of correlation values for each category of the categorical random variable. This is really the only sense in which it makes sense to talk about 'correlation' for a categorical random variable.
(Note: It is trivial to show that $\sum_k \mathbb{Cov}(I_k,X) = 0$ and so the correlation vector for a categorical random variable is subject to this constraint. This means that given knowledge of the probability vector for the categorical random variable, and the standard deviation of $X$, you can derive the vector from any $m-1$ of its elements.)
The above exposition is for the true correlation values, but obviously these must be estimated in a given analysis. Estimating the indicator correlations from sample data is simple, and can be done by substitution of appropriate estimates for each of the parts. (You could use fancier estimation methods if you prefer.) Given sample data $(x_1, c_1), ..., (x_n, c_n)$ we can estimate the parts of the correlation equation as:
$$\hat{\phi}_k \equiv \frac{1}{n} \sum_{i=1}^n \mathbb{I}(c_i=k).$$
$$\hat{\mathbb{E}}(X) \equiv \bar{x} \equiv \frac{1}{n} \sum_{i=1}^n x_i.$$
$$\hat{\mathbb{E}}(X|C=k) \equiv \bar{x}_k \equiv \frac{1}{n} \sum_{i=1}^n x_i \mathbb{I}(c_i=k) \Bigg/ \hat{\phi}_k .$$
$$\hat{\mathbb{S}}(X) \equiv s_X \equiv \sqrt{\frac{1}{n-1} \sum_{i=1}^n (x_i - \bar{x})^2}.$$
Substitution of these estimates would yield a basic estimate of the correlation vector. If you have parametric information on $X$ then you could estimate the correlation vector directly by maximum likelihood or some other technique.
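As a sanity check, the plug-in formula can be compared against an ordinary Pearson correlation between $X$ and the indicator for one category (plain Python, hypothetical data; the population standard deviation, dividing by $n$, is used inside the formula so that the two computations agree exactly):

```python
import math

def mean(v):
    return sum(v) / len(v)

def pearson(a, b):
    # Plain Pearson correlation between two equal-length sequences.
    ma, mb = mean(a), mean(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) *
                    sum((y - mb) ** 2 for y in b))
    return num / den

def indicator_corr(xs, cs, k):
    # sqrt(phi/(1-phi)) * (E[X|C=k] - E[X]) / S(X), with plug-in estimates.
    n = len(xs)
    phi = sum(c == k for c in cs) / n
    ex = mean(xs)
    exk = mean([x for x, c in zip(xs, cs) if c == k])
    sx = math.sqrt(sum((x - ex) ** 2 for x in xs) / n)  # population S(X)
    return math.sqrt(phi / (1 - phi)) * (exk - ex) / sx

xs = [1.2, 3.4, 2.2, 5.1, 4.0, 0.7, 3.3, 2.9]
cs = ['a', 'b', 'a', 'b', 'b', 'a', 'b', 'a']
r_formula = indicator_corr(xs, cs, 'b')
r_direct = pearson(xs, [1.0 if c == 'b' else 0.0 for c in cs])
print(r_formula, r_direct)  # the two values coincide
```

Running `indicator_corr` once per category $k$ then assembles the correlation vector $\mathbb{Corr}(C,X)$ described above.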
How do I study the "correlation" between a continuous variable and a categorical variable?
If $X$ is a continuous random variable and $Y$ is a categorical r.v., the observed correlation between $X$ and $Y$ can be measured by
the point-biserial correlation coefficient, if $Y$ is dichotomous;
the point-polyserial correlation coefficient, if $Y$ is polychotomous with ordinal categories.
It should be noted, though, that the point-polyserial correlation is just a generalization of the point-biserial.
For a broader view, there is a table of suitable correlation coefficients for the various combinations of variable types in Olsson, Drasgow & Dorans (1982)[1].
[1]: Source: Olsson, U., Drasgow, F., & Dorans, N. J. (1982). The polyserial correlation coefficient. Psychometrika, 47(3), 337–347
How do I study the "correlation" between a continuous variable and a categorical variable?
The R package mpmi can calculate mutual information for the mixed-variable case, namely continuous and discrete.
Although other statistical options, such as the (point-)biserial correlation coefficient, may be useful here, it is worth calculating mutual information as well, since it can detect associations other than linear and monotonic ones.
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and apply it more generally to any statistical graphic.
People may initially think that color is a good cue as to whether a specific object is in the foreground or background, but this is not the case. Take, for instance, the example below from an ESRI blog post, Make Maps People Want to Look At:
Five primary design principles for cartography by Aileen Buckley.
So if I asked you to say which is the figure (e.g. the land mass) and which is the ground (e.g. the water body), which one would you pick? A similar phenomenon also happens with the Rubin vase optical illusion.
Some experimental research I remember reading in Alan MacEachren's How Maps Work suggests that in the above pictures people choose the light and dark areas at an equal frequency for the figure (apparently color hue and saturation is used to determine figure from ground). So color can't intrinsically demarcate whether the background competes with the foreground in any statistical graphic, but other cues can help.
People often associate figures with enclosed objects (this is part of the reason the above map is confusing, in that neither mass is enclosed). This suggests that, in general (regardless of background color), elements in the plot should have clearly delineated boundaries and should be darker than the background. This probably biases the de facto plot background to white, but having a grey background is not damning. Other aspects can be used to delineate between foreground and background (the ESRI blog post mentions a few of these).
One is the hated Excel drop shadow for graphics (example given here in this newsletter by Dan Carr in figure 2). Although that should come with the caveat that people may interpret the numerical attributes at the location of the shadow instead of the intended element.
Another is using different colors/saturation for the outline of an element in the plot versus the interior fill. Examples are given below, with the leftmost circle being an example of a boundary that is not clearly delineated.
These don't appear to be exhaustive either. For line plots it frequently appears that thicker lines come to the foreground, while thinner lines recede to the background.
This is mainly just intended to be food for thought though: your self-study seems to be pretty exhaustive (and I thank you for some of the resources you provided!) I don't think I disagree with any of the resources you provided, but I'm not sure I grok what Hadley is talking about with his motivation for a default grey background. But personal aesthetic preference for grey backgrounds can be accommodated by making sure elements in the plot come to the foreground (that is what really matters). These lessons can be applied to gridlines as well, and if gridlines help and are unobtrusive (i.e. in the background) they certainly are not chartjunk.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
|
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally to any statistical graphic.
People may initially think that color is a good cue as to whether a specific object is in the foreground or background, but this is not the case. Take for instance this example below, taken from an ESRI blogpost, Make Maps People Want to Look At:
Five primary design principles for cartography by Aileen Buckley.
So if I asked you to say which is the figure (e.g. the land mass) and which is the ground (e.g. the water body), which one would you pick? A similar phenomenon also happens with the Rubin vase optical illusion.
Some experimental research I remember reading in Alan MacEachren's How Maps Work suggests that in the above pictures people choose the light and dark areas at an equal frequency for the figure (apparently color hue and saturation is used to determine figure from ground). So color can't intrinsically demarcate whether the background competes with the foreground in any statistical graphic, but other cues can help.
People often associate figures with enclosed objects (this is part of the reason the above map is confusing, in that neither mass is enclosed). This suggests that, in general (regardless of background color), elements in the plot should have clearly delineated boundaries and should be darker than the background. This probably biases the de facto plot background to white, but having a grey background is not damning. Other aspects can be used to delineate between foreground and background (the ESRI blog post mentions a few of these).
One is the hated Excel drop shadow for graphics (example given here in this newsletter by Dan Carr in figure 2). Although that should come with the caveat that people may interpret the numerical attributes at the location of the shadow instead of the intended element.
Another is using different colors/saturation for the outline of an element in the plot versus the interior fill. Examples are given below, with the leftmost circle being an example of a boundary that is not clearly delineated.
These don't appear to be exhaustive either. For line plots it frequently appears that thicker lines come to the foreground, while thinner lines recede to the background.
This is mainly just intended to be food for thought though: your self-study seems to be pretty exhaustive (and I thank you for some of the resources you provided!) I don't think I disagree with any of the resources you provided, but I'm not sure I grok what Hadley is talking about with his motivation for a default grey background. But personal aesthetic preference for grey backgrounds can be accommodated by making sure elements in the plot come to the foreground (that is what really matters). These lessons can be applied to gridlines as well, and if gridlines help and are unobtrusive (i.e. in the background) they certainly are not chartjunk.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
One thing that may help move the debate forward is to acknowledge what makes people visually distinguish between background and foreground, taking lessons from cartography and applying them more generally
|
9,599
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
|
Professor Wickham wrote in the ggplot2 book:
"We can still see the gridlines to aid in the judgement of position
(Cleveland, 1993b), but they have little visual impact and we can
easily "tune" them out. The grey background gives the plot a similar
colour (in a typographical sense) to the remainder of the text,
ensuring that the graphics fit in with the flow of a text without
jumping out with a bright white background. Finally, the grey
background creates a continuous field of colour which ensures that the
plot is perceived as a single visual entity."
And @Wayne wrote:
" Personally, I think charts without any reference lines are just as
bad as bold, distracting grids. More stylish and cool, but still
interfering with understanding and our ability to drill down into
information. The idea is not to be minimalist, as if we were all
Scandinavian furniture designers, but to clearly communicate, which
should include subtle (but useful) reference lines"
and @Peter Flom wrote:
I think faint gridlines should be the default in a scatterplot; they
help the reader; similarly, blank spaces between words and lines in
text help the reader. I don't like the grey background, though. I find
it distracting. Text, after all, is usually black lettering on white
background.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
|
Professor Wickham wrote in the ggplot2 book:
"We can still see the gridlines to aid in the judgement of position
(Cleveland, 1993b), but they have little visual impact and we can
easily "tune" th
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
Professor Wickham wrote in the ggplot2 book:
"We can still see the gridlines to aid in the judgement of position
(Cleveland, 1993b), but they have little visual impact and we can
easily "tune" them out. The grey background gives the plot a similar
colour (in a typographical sense) to the remainder of the text,
ensuring that the graphics fit in with the flow of a text without
jumping out with a bright white background. Finally, the grey
background creates a continuous field of colour which ensures that the
plot is perceived as a single visual entity."
And @Wayne wrote:
" Personally, I think charts without any reference lines are just as
bad as bold, distracting grids. More stylish and cool, but still
interfering with understanding and our ability to drill down into
information. The idea is not to be minimalist, as if we were all
Scandanavian furniture designers, but to clearly communicate, which
should include subtle (but useful) reference lines"
and @Peter Flom wrote:
I think faint gridlines should be the default in a scatterplot; they
help the reader; similarly, blank spaces between words and lines in
text help the reader. I don't like the grey background, though. I find
it distracting. Text, after all, is usually black lettering on white
background.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
Professor Wickham wrote in the ggplot2 book:
"We can still see the gridlines to aid in the judgement of position
(Cleveland, 1993b), but they have little visual impact and we can
easily "tune" th
|
9,600
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
|
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use lighter, saturated colors to display data, which may not appear as effective against a white background.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
|
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use lighter, saturated colors to display data, which may not appear
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use lighter, saturated colors to display data, which may not appear as effective against a white background.
|
Are gridlines and grey backgrounds chartjunk and should they be used only on an exception basis?
While I tend to avoid the default grey background, perhaps one reason Hadley may have gone with the grey is to allow the user to use lighter, saturated colors to display data, which may not appear
|