Reading multilevel model syntax in intuitive ways in R (lme4)
Intuitively, the understanding can start with grouping variables or grouping terms. These are the terms that appear on the right side of the | or || in the random parts of the formula. They are often factor variables for which there are repeated measurements, or combinations (interactions) of a factor with another random factor (or indeed a fixed factor). These should be a single term - a variable or a combination/interaction, and not multiple terms (i.e., you can have (1|A) or (1|A:B) but not (1|A+B); that would be invalid). In all cases, they can be interpreted as: there is some random variation in the data at the "level" of the variable or combination of variables. We usually want to fit random intercepts for these, which will account for any non-independence due to this variation.
We can think about the number of "levels" in the model by considering the number of unique grouping terms. With a single grouping term we have a 2-level model; with two grouping terms, a 3-level model. Some care is needed here because these "levels" might not correspond to levels in a multilevel model. In the analysis of experiments with a factorial design, "levels" don't really apply, so you can have things like y ~ A + B + (1|id) + (1|id:A) + (1|id:B) where there are 3 "levels" of variation, but not in a multilevel modelling sense. See this answer for further details on this. Even in multilevel modelling this can be tricky because of nested and crossed factors. See here for further details on this. Nesting is a property of the study design, not the model.
The terms on the left side of the | or || specify which variable(s) are allowed to vary at the different levels of the grouping term. In the simplest case it is just "1", which means we want only the intercept to vary (so, just random intercepts). If we have other variables in place of the "1" then it means we want those variables (which are typically fixed effects) to vary at each level of the grouping term in addition to the intercept (that is, (time|group) is the same as (1 + time|group)). These are random slopes.
If "0" appears on the left hand side of the | or || then it means that random intercepts for the grouping term are not to be fitted (usually because they are fitted by a different term in the model).
Lastly, by default lmer will attempt to estimate a correlation between the random effects. However, if || is specified, then the correlations are not estimated (i.e., they are fixed at zero). This is actually just shorthand. For example (time||group) is the same as (1|group) + (0+time|group), which means we fit random intercepts for group, and we fit random slopes for time but no random intercept for time; so taken together it means random intercepts for group and slopes for time, but with no correlation between them. This also means that random slopes and random intercepts will be correlated only when they are on opposite sides of a single |.
So, for your specific examples:
lmer(y ~ time * group +
(time | therapist:subjects) +
(time * group || therapist),
data = data)
First note we have 2 different grouping terms: the therapist:subjects combination and therapist, and this is the case for all 3 models. For the former we also fit random slopes for time (correlated with the random intercepts). For the latter we fit random slopes for time * group, but these are not correlated with the random intercept for therapist.
lmer(y ~ time * group +
(time | therapist:subjects) +
(time | therapist) +
(0 + group + time:group | therapist),
data = data)
As with the first model, we have 2 different grouping terms: the therapist:subjects combination and therapist, and again we have time as a random slope for the former. For the latter, this time we have random slopes for time (correlated with the random intercepts for therapist), and uncorrelated random slopes for group and time:group.
lmer(y ~ time * group +
(1 | therapist:subjects) +
(0 + time | therapist:subjects) +
(0 + time:group | therapist) +
(0 + group | therapist),
data = data)
Again we have 2 different grouping terms: the therapist:subjects combination and therapist. For the former, we have random intercepts for therapist:subjects, and random slopes for time (uncorrelated with the random intercepts). As noted above, these could also be written in shorthand as (time || therapist:subjects). For the latter, there are no random intercepts at all (because 0 appears on the left hand side of both formulae), but we fit random slopes for time:group and group.
A few final points to highlight things that are invalid:
lmer(y ~ time * group + (0 | therapist)
is invalid because you can't have a single zero on the left of |. That would mean therapist is a grouping variable (implying we want random intercepts) but then the "0" means don't fit random intercepts. It's a conflict and should generate an error.
lmer(y ~ time * group + (1 | therapist*subject)
is an error because therapist*subject is not a single term - it is shorthand for therapist + subject + therapist:subject, so it is equivalent to
lmer(y ~ time * group + (1 | therapist + subject + therapist:subject)
which is invalid. If you actually wanted each of those terms to be grouping terms then you would use:
lmer(y ~ time * group + (1 | therapist) + (1 | subject) + (1 | therapist:subject)
Is there a way to calculate the riskiest places to be infected by COVID-19?
First of all,
The computation is very theoretical and not a good representation or guideline for adapting your behavior (just in case, if that is what you are after). In the comments I had already mentioned several points of critique for this approach:
The problem is that these computations will be based on highly subjective estimates about the underlying model/assumptions. Yes, you can compute it. But do not expect that the answer is rigorous just because it uses mathematics.
Another problem is that the descriptions of contacts are very complex. How exactly are you going to define the 'time of contact'? Are you differentiating just the time of contact, or also the type of contact? It is not a deterministic model, and you need to deal with distributions and stochastic behaviour that will make computations more difficult.
See for example the report about transmission of SARS in airplanes: there were three cases described with an infected person on board. In one case tens of other passengers got infected. In the other two cases it was only one other person (a crew member) that got infected.
In addition, are you going to describe the probability for a single person, or the probability for public health? In high-traffic situations the probability for a single individual might be low, but due to the large number of individuals in those situations there might be a substantial probability that at least one or more persons get infected.
For public health, the problem is not to compare cases based on the probabilities for individuals to become sick. Instead, the point is to reduce the probability of individuals spreading the virus and making others sick. In general those probabilities (to make others sick) are much higher in high-traffic cases. Sick people should not be around many other people.
There are many of these strange probability effects around. For instance, in Europe there is a lot of focus on people who had contact with the high risk areas; and it seems to be ignored that one may acquire the virus locally as well.
Indeed, when considering only a contact with a single person, it is more likely to obtain the virus if this person is from (or had contact with) a high-risk area. However, due to the much larger number of contacts with people outside the risk areas, it may be more likely to acquire the virus from one of those people, even though the risk per contact is lower. Still, it is not illogical to focus on the high-risk areas. But that is more a consideration from the point of view of focusing limited time, money and materials. Yes, it is more likely to get the coronavirus from somebody that is not from the risk areas. But there are many other viruses from which one may catch a common cold, and we cannot deal with all of those cases. When we wish to focus efforts on the most important cases, the consideration is for which people the common cold is most likely to be due to the coronavirus. In that case it is linked to the high-risk areas.
A possible solution (based on a simple model),
Let's consider the (unrealistic) probability of obtaining an infection, conditional on the other person being sick (this is a bit complex, there are different levels of being sick but let's consider this for single cases).
Say, the probability 'to get sick from a single contact of time $t$' is a function of contact time according to some (sort of) homogeneous Poisson process (ie. the waiting time to get hit/sick depends on an exponentially distributed variable and the longer the contact the more likely to get sick)
$$P(\text{sick from contact time $t$}) = 1 - e^{-\lambda t}$$
If you encounter $n$ people, each for a time $t$, sampled from a population of which $p\%$ are sick...
then the number of sick people, $S$, that you encounter is binomial distributed $$P(S=s) = {{n}\choose{s}} p^s(1-p)^{n-s}$$
the probability of getting sick from those $S$ people is: $$P(\text{sick} \vert t,s) = 1- e^{-\lambda ts}$$
the marginal probability of getting sick is $$\begin{array}{}
P(\text{sick} \vert t, n) & = & \sum_{s=0}^n
\overset{{\substack{\llap{\text{probability}}\rlap{\text{ to encounter}} \\
\llap{\text{$s$ sick }}\rlap{\text{people}} }}}{\overbrace{P(S=s)}^{}}
\times \underset{{\substack{\llap{\text{probability to }}\rlap{\text{get sick}} \\ \llap{\text{conditional}} \rlap{\text{ on}} \\ \llap{\text{encountering $s$}} \rlap{\text{ sick people}}
}}}{\underbrace{P(\text{sick} \vert t,s)}_{}} \\ \\
&=& 1- \sum_{s= 0}^n {{n}\choose{s}} p^s(1-p)^{n-s}e^{-\lambda ts} \\ &=& 1- \left(1- p + pe^{-\lambda t}\right)^n
\end{array}$$ where I solved this last term with wolframalpha.
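As a sanity check (a quick numerical sketch; the values for $n$, $p$, $\lambda$ and $t$ are arbitrary illustrations, not data), the sum over $s$ agrees with the closed form, which is just the binomial theorem applied to $\left(pe^{-\lambda t} + (1-p)\right)^n$:

```python
from math import comb, exp

def p_sick_sum(n, p, lam, t):
    """Marginal probability of getting sick: sum over the binomial
    number of sick contacts s of P(S=s) * (1 - exp(-lam*t*s))."""
    return sum(comb(n, s) * p**s * (1 - p)**(n - s) * (1 - exp(-lam * t * s))
               for s in range(n + 1))

def p_sick_closed(n, p, lam, t):
    """Closed form: 1 - (1 - p + p*exp(-lam*t))**n."""
    return 1 - (1 - p + p * exp(-lam * t))**n

# the two expressions match to floating-point precision
n, p, lam, t = 20, 0.05, 0.3, 2.0
print(abs(p_sick_sum(n, p, lam, t) - p_sick_closed(n, p, lam, t)))
```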
Note that
$$\lim_{n\to \infty} 1- \left(1- p + pe^{-\lambda t/n}\right)^n = 1 - e^{-\lambda t p} $$
For a given fixed total contact $C = n\times t \times \lambda$ you get an increase as a function of $n$, approaching the limit $1 - e^{-Cp}$.
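To see this behaviour numerically (a minimal sketch; the values of $p$ and $C$ are arbitrary illustrations), hold $C = n \times t \times \lambda$ fixed so that each of the $n$ contacts contributes $\lambda t = C/n$:

```python
from math import exp

def p_sick_fixed_total(n, p, C):
    """Probability of getting sick when the total contact C = n*t*lambda
    is held fixed and split over n contacts (so lambda*t = C/n each)."""
    return 1 - (1 - p + p * exp(-C / n))**n

C, p = 10, 0.02
for n in (1, 2, 5, 10, 100):
    print(n, p_sick_fixed_total(n, p, C))
# the probability increases with n, approaching 1 - exp(-C*p) ~ 0.181
```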
Intuitive overview
Below are two graphs that show the value of this term $1- \left(1- p + pe^{-\lambda t}\right)^n $ as function of contact time $t$ and number of contacts $n$. The plots are made for different values of $p$.
Note the following regions:
On the right side the contact time is very large and you are almost certainly going to get infected if the person that you have contact with is sick. More specifically, for the lower right corner (if $n=1$ and $t$ is very large) the probability of getting sick will be equal to $p$ (i.e., the probability that the other person is sick).
More generally for the right side, the region where $\lambda t>1$, a change in the contact time is not going to change the probability of getting sick from a single person very much (the curve $1-e^{-\lambda t}$ doesn't change much in value for large $\lambda t$).
So if $\lambda t>1$ (and you are almost certainly getting sick if the other person is sick), then halving the contact time and doubling the number of contacts will increase the probability of getting sick (because the probability of encountering a sick person increases).
On the left side, for $\lambda t < 1$, you will find that at some point an increase of $n$ with an equal decrease of $t$ will counter each other. On the left side it doesn't matter much whether you have high-traffic short-time or low-traffic long-time contacts.
Conclusion
So, say you consider the total contact time $n\times t$ to be constant; then this should lead to a higher probability of getting sick for higher $n$ (shorter contacts but with more people).
Limitations
However the assumptions will not hold in practice. The time of contact is an abstract concept and also the exponential distribution for the probability of getting sick from a single person is not accurate.
Possibly there might be something like contact being more/less intense in the beginning (compare: in the simple model the probability of getting sick from a single contact of time $t$ is approximately linear in time, $1-e^{-\lambda t} \approx \lambda t$).
Also, when considering infections of groups instead of individuals, there might be correlations, like when a sick person sneezes it will hit multiple people at the same time. Think about the cases of superspreaders, e.g. the case of the SARS outbreak in the Amoy Gardens apartment complex, where likely a single person infected hundreds of others.
So based on the simple model there is the effect that, for a given total time of contact, $n \times t$, it is better to spread it out among fewer people, $n$.
However, there is an opposite effect. At some point, for short $t$, transmission will be relatively unlikely. For instance, a walk on a busy street means high $n$, but the contacts will not be meaningful enough to create a high risk. (Potentially you could adapt the first equation $1 - e^{-\lambda t}$, but it is very subjective/broad.) You could think of something like the '5 second rule' (which is actually not correct, but gets close to the idea).
Use of the simple model
Although the model used here is very simplistic, it does still help to get a general idea about what sort of measures should be taken and how the principle would work out for a more complex model (it will be more or less analogous to the simple model):
On the right side (of the image), it doesn't help much to change (reduce) the contact time, and it is more important to focus on reducing the number of contacts (e.g. some of the rigorous advice for non-sick family members who are in quarantine together with sick family members is not very useful, since restricting $\lambda t$ for large $\lambda t$ has little effect, and it would be better to focus on making fewer contacts; go cook yourself instead of ordering that pizza).
On the left side, reductions should be weighed against each other. When restrictions that reduce high traffic lead to low traffic but a longer contact time, the measures are not going to help a lot.
A very clear example: I am currently waiting in line to enter the supermarket. They have decided to reduce the total number of people inside the supermarket. But this is entirely useless and possibly detrimental. The total time that we are in contact with other people does not decrease because of this. (And there are secondary effects: partners alone at home with children who have to wait longer; potential shopping at multiple markets because time is limited at a single market; etc. It is just silly.)
I am letting the older people in the line pass before me, since the health effects may be worse for them. And in the meantime I make myself annoyed about this symbolic, useless measure (if not even detrimental) and have sufficient time to type this edit into this post, and in the meantime either make other people sick or become sick myself.
What is the relationship between Metropolis Hastings and Simulated Annealing?
Simulated annealing is a meta-heuristic algorithm used for optimization, that is, finding the minimum/maximum of a function. Metropolis-Hastings is an algorithm used for exploring a function (finding possible values/samples).
Both algorithms are stochastic, generating new points to move to at random. Where they differ is in their acceptance/rejection criterion. Both algorithms move to a new random point with a certain probability, which is based on the difference (or the ratio) of the current and new proposed point in the search space.
Metropolis-Hastings moves to a new point based on the ratio of the current point and a new proposed random point, $\min(\frac{new}{old},1)$ (with some additional terms for asymmetric proposal distributions). If this ratio is bigger than 1 (the new point has higher likelihood than the current point) then it will move to this new point immediately. Otherwise, if the new point has lower likelihood, the algorithm will move to this point with some probability based on the ratio. In this case the algorithm will generate a random value between 0 and 1; if the ratio is smaller than this value then it will reject the new point, otherwise it will accept it.
Simulated annealing has an additional parameter (temperature) which scales the difference by a certain amount, $\exp(\frac{new-old}{T})$. When the temperature is very high the difference won't have any meaningful impact on the decision (the criterion will evaluate to something close to 1), so the algorithm will accept almost any new point, which means it will move at random. When the temperature is very low the criterion will evaluate to ~0 for worse points, so the algorithm will move deterministically, only accepting better solutions.
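The two acceptance rules can be put side by side in a short sketch (plain Python; the function names and the maximisation framing are illustrative, not from any particular library):

```python
import math
import random

def metropolis_accept(old, new, rng=random.random):
    # Accept with probability min(new/old, 1): a better point is always
    # accepted; a worse one is accepted with probability equal to the ratio.
    return rng() < min(new / old, 1.0)

def sa_accept(old, new, T, rng=random.random):
    # Accept with probability min(exp((new - old)/T), 1).
    # High T: the exponent is ~0, so almost every move is accepted (random walk).
    # Low T: worse moves get probability ~0, so the search becomes greedy.
    return rng() < min(math.exp((new - old) / T), 1.0)
```

Here old and new are the objective values at the current and proposed points; passing a fixed rng makes the decisions reproducible.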
31,604 | What is the relationship between Metropolis Hastings and Simulated Annealing? | The MCMC algorithm is closely related to Simulated Annealing (SA), in that both explore a surface stochastically, using the Metropolis acceptance criterion for proposed points. We provide parameters that permit use of the MCMC routine for SA, though this is not the primary focus of the feature.
source: vensim
31,605 | Regression with heavy-tailed response variable | The first thing to note is that the estimators in the linear regression model are not particularly sensitive to heavy tails in the error distribution (so long as the error variance is finite). Fitting a standard linear regression to data with excessively heavy tail will mean that data points in the tails are penalised excessively, but the coefficient estimators in the model are still usually quite reasonable. The main drawback in this situation is that prediction intervals for values will be too short, since they do not account for the heavy tails.
If you would like to adapt your model to deal with the heavier tails, you can use the heavyLm function in the heavy package in R. This function fits a linear model using the T-distribution as the error distribution, which allows you to use an error distribution with heavier tails than the normal. The only drawback of the package is that it requires you to specify the degrees-of-freedom parameter for the error distribution, rather than just estimating this from the data. However, with some creative looping, you could even estimate this parameter if you wanted to. In any case, this model should allow you to get estimates for a linear regression, where the error distribution has heavier tails than the normal distribution, and so your corresponding residual density plot and residual QQ plot should be close to the stipulated error distribution.
Update: The heavy package has been removed from CRAN due to some check problems that were not resolved in the required time. Previous versions of the package are available in the archive here.
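Since the package is gone from CRAN, the same model is easy to fit by hand via maximum likelihood. A minimal Python sketch using scipy (the function name, the fixed degrees of freedom, and the simulated data are my own illustrative choices, not the heavy package's interface):

```python
import numpy as np
from scipy import optimize, stats

def fit_t_regression(X, y, df):
    """ML fit of y = b0 + X @ b + e, with e ~ scale * t(df)."""
    Z = np.column_stack([np.ones(len(y)), X])      # design matrix with intercept
    def nll(params):                               # negative log-likelihood
        beta, log_scale = params[:-1], params[-1]
        resid = y - Z @ beta
        return -np.sum(stats.t.logpdf(resid, df, scale=np.exp(log_scale)))
    start = np.append(np.linalg.lstsq(Z, y, rcond=None)[0], 0.0)  # OLS start
    res = optimize.minimize(nll, start, method="Nelder-Mead")
    return res.x[:-1], np.exp(res.x[-1])           # (coefficients, scale)

# Simulated heavy-tailed data: true intercept 1, slope 2
rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 1000)
y = 1.0 + 2.0 * x + 0.5 * rng.standard_t(df=3, size=1000)
beta, scale = fit_t_regression(x, y, df=3)
```

With some extra looping one could also profile out df, just as described above for heavyLm.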
31,606 | Regression with heavy-tailed response variable | It depends on how heavy the tails are. For example, for OLS regression of Student's t residuals, as the degrees of freedom decrease, first the SD, then the mean itself become incalculable. The following linked answer shows simulations demonstrating this effect. For lower degrees of freedom other methods become increasingly relevant.
For example, because the tails look Cauchy or Cauchy-like, I would consider whether one can use a non-parametric regression like Theil regression, even though it is slightly biased, or Passing-Bablok, which is unbiased, though it is generally not realized that the latter can only be applied if the slope is positive. Also, please note that in common with Deming regression these methods do not yield least error in $y$, but rather represent best functional agreement, that is, how the variables 'best' covary.
Also see the robust regression and other related "robust regression" questions (with quotes, about 360 of them) scattered on CV. Such methods can be extended to multi-linear cases and probably non-linear models with somewhat greater difficulty.
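If you want to try a Theil-type fit without leaving Python, scipy ships one as scipy.stats.theilslopes (Passing-Bablok is not in scipy). A minimal sketch on simulated Cauchy-noise data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 200)
y = 2.0 * x + 1.0 + stats.cauchy.rvs(size=200, random_state=rng)  # heavy-tailed errors

# Theil-Sen: the median of all pairwise slopes, robust to the Cauchy tails
slope, intercept, lo, hi = stats.theilslopes(y, x)
```

slope should land near the true value of 2 even though the error distribution has no finite mean; lo and hi bracket it with a confidence interval.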
31,607 | Can zero covariance and zero expectation imply zero conditional expectation? | This figure is a complete answer.
For those who would like a gloss on the figure though, notice that in this sample of 1,000 values of $(X,\varepsilon),$
"$\operatorname{Cov}(X,\varepsilon)=0$." When $X$ and $\varepsilon$ are centered around zero, as they are here, the covariance is their average product. The figure uses color to indicate the individual products: greens and blues for very negative values and oranges for slightly positive values. On balance the many oranges cancel the few greens and blues, giving zero covariance.
"$E[\varepsilon]=0$." On average the value of $\varepsilon$ is zero. This is clear from the symmetry: rotating the plot 180 degrees preserves its features (down to fairly fine detail) but negates the values of $\varepsilon.$ Thus the average must be close to zero (the only finite number equal to its own negative).
The conditional expectation $E[\varepsilon\mid X]$ is traced out by the thick black curve: at each value of $X$ it estimates the average height of the points lying above that value. Clearly this is not always zero. (All that matters in this example is that the curve is not constantly zero: the details of its shape are irrelevant.)
Even when $X$ and $\varepsilon$ have zero covariance and $\varepsilon$ has zero expectation, locally the values of $\varepsilon$ may fluctuate with $X,$ provided they average out to zero globally (on the whole).
Appendix
Below is the R code that generated the figure. The (long) third line is the heart of it: to a curvilinear function $y = \sin(\pi x/\sqrt{3})$ it adds uniformly distributed errors runif(n, -1/2, 1/2) and then -- to assure the resulting response will have no correlation with $x$ -- removes the effect of $x$ on these values (using the sequence of calls scale(residuals(lm(...)))).
(If you are uncomfortable with this pre-processing, then add the errors after removing the effect of $x:$
eps <- runif(n, -1/2, 1/2) + residuals(lm(sin(pi*x/sqrt(3)) ~ x))
The result, because the errors are truly independent of $x,$ will not have a perfectly zero correlation, but only because of chance variation in the simulation. This gives a better simulation but doesn't guarantee a good plot!)
#
# Create data.
#
n <- 1e3 # Specify the size of the dataset
x <- scale(runif(n)) # Create explanatory variable values
eps <- scale(residuals(lm(runif(n, -1/2, 1/2) + sin(pi*x/sqrt(3)) ~ x)))
zapsmall(cor(cbind(x, eps))) # Confirm lack of correlation
#
# Create a data frame for plotting and plot it.
#
X <- data.frame(x=x, eps=eps, Product=x*eps)
library(ggplot2)
ggplot(X, aes(x, eps)) +
geom_hline(yintercept=0) + geom_vline(xintercept=0) + # Draw axes
geom_point(aes(fill=Product), size=2, shape=21, alpha=1/4) + # Plot the points
geom_smooth(color="Black", se=FALSE, size=1.1) + # Plot the regression
scale_fill_gradientn(colors=topo.colors(13)[1:12]) + # Specify colors
ylab(expression(epsilon)) # Label an axis
31,608 | Can zero covariance and zero expectation imply zero conditional expectation? | Let $x = \epsilon = 0$. Now check the relevant moments.
31,609 | Can zero covariance and zero expectation imply zero conditional expectation? | Let $x$ and $\epsilon$ be two random variables.
If $$Cov(x,\epsilon)=0$$ and $$E[\epsilon]=0,$$ can that lead to
$E[\epsilon|x]=0?$
No, those conditions are not enough.
Indeed, it is possible that both hold but $Cov(x^2,\epsilon)\neq0$, and therefore $E[\epsilon|x]\neq 0$.
However, if the stronger condition $Cov(f(x),\epsilon)=0$ holds for any $f(\cdot)$, then
$E[\epsilon|x] = 0$ holds.
Note: sometimes $E[\epsilon|x] = 0$ is assumed but only $E[x \epsilon] = 0$ is checked/considered. This practice can work if we assume (often implicitly) that only linear relations are permitted.
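A quick numerical illustration of such a counterexample (my own construction, not from the post): take $x$ standard normal and $\epsilon = x^2 - 1$, so that $Cov(x,\epsilon)=E[x^3]=0$ and $E[\epsilon]=0$, while $E[\epsilon|x]=x^2-1\neq0$:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
eps = x**2 - 1                       # E[x^2] = 1 for a standard normal

print(np.cov(x, eps)[0, 1])          # ~0, since E[x^3] = 0 by symmetry
print(eps.mean())                    # ~0
print(eps[np.abs(x) > 2].mean())     # clearly positive: E[eps | x] varies with x
```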
31,610 | Can zero covariance and zero expectation imply zero conditional expectation? | The following is wrong. Here is a counterexample:
Here are my thoughts. Could some expert check them for me? Thanks.
$Cov(x, \epsilon)=0\space\implies E[x\epsilon]-E[x]E[\epsilon]=0 \space \stackrel{E[\epsilon]=0}\implies E[x\epsilon]=0$
so we can say $x$ and $\epsilon$ are orthogonal. From the geometric perspective, $E[\epsilon|x]$ is the projection of $\epsilon$ onto the plane spanned by $x$. Since $\epsilon$ is orthogonal to that plane, the projection is zero. That is, $E[\epsilon|x]=0$.
Here are my thoughts. Could some expert help check for me? thanks.
$Cov(x, \epsilon)=0\space\implies E[x\epsilon]-E[x]E[\epsilon]=0 \space \stackrel | Can zero covariance and zero expectation imply zero conditional expectation?
The following is wrong. Here is an counter example:
Here are my thoughts. Could some expert help check for me? thanks.
$Cov(x, \epsilon)=0\space\implies E[x\epsilon]-E[x]E[\epsilon]=0 \space \stackrel{E[\epsilon]=0}\implies E[x\epsilon]=0$
so we can say $x$ and $\epsilon$ are orthogonal. From the geometric perspective, $E[\epsilon|x]$ is the projection of $\epsilon$ onto the x's plane. Since $\epsilon$ is orthogonal to that plane, so the projection is zero. That is $E[\epsilon|x]=0$ | Can zero covariance and zero expectation imply zero conditional expectation?
The following is wrong. Here is an counter example:
Here are my thoughts. Could some expert help check for me? thanks.
$Cov(x, \epsilon)=0\space\implies E[x\epsilon]-E[x]E[\epsilon]=0 \space \stackrel |
31,611 | What would make Graph Neural Networks better than 'normal' Neural Networks? | what are the main differences between GNN and NN? Apart from GNN has
its input as a graph data?
Well, that is the main difference. Of course, some corollaries of this fact are that GNNs can deal with variable-sized graph inputs and typical NNs cannot, GNNs are not fully connected while typical (non-convolutional) NNs are, and GNNs are usually invariant to permutation of the vertices while NNs are not.
In a bit more detail: Let $A$ be the adjacency matrix of some graph $G$, let $X$ be an $n \times d$ matrix of features for each vertex, and let $W$ be a $d \times d'$ weight matrix. Then a graph neural network layer might compute something like $Y = \sigma(AXW)$. Inspired by spectral analysis, more sophisticated versions compute $Y = \sigma(\sum_i L^iXW_i)$, where $L^i$ is the $i$th power of the Laplacian.
Just as in a convolutional network, the size of the weight matrix is independent of the graph -- you might interpret it as some sort of convolutional filter which can be slid over each vertex of the graph. Graphs with different sizes and connectivity simply alter $A$ and $L$, but not $W$.
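A minimal numpy sketch of that point (ReLU standing in for $\sigma$; the identity adjacency matrices are placeholders -- any graphs would do): the same $W$ serves graphs of different sizes.

```python
import numpy as np

def gnn_layer(A, X, W):
    # Aggregate neighbour features (A @ X), mix channels with a shared
    # weight matrix W, apply a nonlinearity.  W depends only on the
    # feature dimensions, never on the number of vertices.
    return np.maximum(A @ X @ W, 0.0)

rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))                    # d = 4 -> d' = 8
A3, X3 = np.eye(3), rng.normal(size=(3, 4))    # 3-vertex graph
A5, X5 = np.eye(5), rng.normal(size=(5, 4))    # 5-vertex graph
print(gnn_layer(A3, X3, W).shape)              # (3, 8)
print(gnn_layer(A5, X5, W).shape)              # (5, 8)
```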
Consequently, what is the potential in using GNN instead of NN, saying
for eg in knowledge graphs and bases?
Applications are very wide, and include...
Language models which operate on parse-trees rather than just a linear sequence of words. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Solving reading comprehension by maintaining a graph of the relationships between all the different entities in a story. Learning Graphical State Transitions
Learning a model of physics by inferring the graph of physical interactions between objects. Neural Relational Inference for Interacting Systems and Interaction Networks for Learning about Objects, Relations and Physics
Improving segmentation boundaries Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++
by modeling object boundaries as a graph.
A richer form of object detection which not only detects objects in an image but reveals their relationships to each other. Graph R-CNN for Scene Graph Generation
Generating and predicting molecules with particular chemical properties. Junction Tree Variational Autoencoder for Molecular Graph Generation
Generating and learning patterns in real world road patterns. Neural Turtle Graphics for Modeling City Road Layouts
Since meshes are also graphs, you can generate / segment / reconstruct, etc. 3D shapes as well. Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images Dynamic Graph CNN for Learning on Point Clouds
Neural networks are computation graphs, so you could use GNNs to learn to generate better network architectures. Graph HyperNetworks for Neural Architecture Search
its input as a graph data?
Well, that is the main difference. Of course some corollaries of this fact is that GNNs can deal with | What would make Graph Neural Networks better than 'normal' Neural Networks?
what are the main differences between GNN and NN? Apart from GNN has
its input as a graph data?
Well, that is the main difference. Of course some corollaries of this fact is that GNNs can deal with variable sized graph inputs and typical NNs cannot, GNNs are not fully connected and typical (non-convolutional) NNs are, GNNs are usually invariant to permutation of the vertices and NNs are not.
In a bit more detail: Let $A$ be the adjacency matrix of some graph $G$, and let $X$ be an $n \times d$ matrix of features for each vertex, and let $W$ be a $d \times d'$ weight matrix. Then a graph neural network layer might compute something like $Y = \sigma(AXW)$. Inspired by spectral analysis, more sophisticated versions compute $Y = \sigma(\sum_i L^iXW_i)$ where $L^i$ is the $i$th power of the laplacian.
Just as in a convolutional network, the size of the weight matrix is independent of the graph -- you might interpret it as some sort of convolutional filter which can be slid over each vertex of the graph. Graphs with different sizes and connectivity simply alters $A$ and $L$, but not $W$.
Consequently, what is the potential in using GNN instead of NN, saying
for eg in knowledge graphs and bases?
Applications are very wide, and include...
Language models which operate on parse-trees rather than just a linear sequence of words. Improved Semantic Representations From Tree-Structured Long Short-Term Memory Networks
Solving reading comprehension by maintaining a graph of the relationships between all the different entities in a story. Learning Graphical State Transitions
Learning a model of physics by inferring the graph of physical interactions between objects. Neural Relational Inference for Interacting Systems and Interaction Networks for Learning about Objects, Relations and Physics
Improving segmentation boundaries Efficient Interactive Annotation of Segmentation Datasets with Polygon-RNN++
by modeling object boundaries as a graph.
A richer form of object detecion which not only detects objects in an image but reveals their relationship to each other. Graph R-CNN for Scene Graph Generation
Generating and predicting molecules with particular chemical properties. Junction Tree Variational Autoencoder for Molecular Graph Generation
Generating and learning patterns in real world road patterns. Neural Turtle Graphics for Modeling City Road Layouts
Since meshes are also graphs, you can generate / segment / reconstruct, etc. 3D shapes as well. Pixel2Mesh: Generating 3D Mesh Models from Single RGB Images Dynamic Graph CNN for Learning on Point Clouds
Neural networks are computation graphs, so you could use GNNs to learn to generate better network architectures. Graph HyperNetworks for Neural Architecture Search | What would make Graph Neural Networks better than 'normal' Neural Networks?
what are the main differences between GNN and NN? Apart from GNN has
its input as a graph data?
Well, that is the main difference. Of course some corollaries of this fact is that GNNs can deal with |
31,612 | What is a factorized Gaussian distribution? | In this context, factorised means that the joint distribution factors into a product of independent marginal distributions. Here, a factorised Gaussian distribution just means that the covariance matrix is diagonal.
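A quick numerical check of that statement (scipy; the particular numbers are arbitrary): with a diagonal covariance matrix, the joint density is the product of the univariate marginal densities.

```python
import numpy as np
from scipy.stats import multivariate_normal, norm

mu = np.array([0.5, -1.0, 2.0])
sigma = np.array([1.0, 0.5, 2.0])     # marginal standard deviations
x = np.array([0.1, -0.7, 1.3])

joint = multivariate_normal(mu, np.diag(sigma**2)).pdf(x)  # diagonal covariance
product = np.prod(norm(mu, sigma).pdf(x))                  # product of marginals
print(np.isclose(joint, product))  # True
```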
31,613 | Relation Between Wasserstein Distance and KL-Divergence (Relative Entropy) | This post gives inequalities for a bunch of distances, including total variation
$$\frac{1}{2}d_{TV}(\nu,\mu)<\sqrt{KL(\nu,\mu)}$$
and this says the Wasserstein distance is bounded by the total variation distance
$$2W_1(\nu,\mu)\leq Cd_{TV}(\nu,\mu)$$
if the metric is bounded by $C$.
There isn't a simple bound in the other direction, since you can make the KL divergence infinite by moving the probability off an arbitrarily small spot onto the neighbouring area, and this can be done with arbitrarily small $W_1$ distance. For example, take two standard Normals. For one of them, set the density to zero on $[0,\epsilon]$ and to twice the existing value on $[-\epsilon,0]$. Do the opposite for the other one. The Wasserstein distance is proportional to $\epsilon$, but the KL-divergence is infinite.
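A discrete analogue of this asymmetry is easy to check numerically (a toy example of mine using scipy's wasserstein_distance): move mass $\epsilon$ onto a point where the other distribution has none.

```python
import numpy as np
from scipy.stats import wasserstein_distance

points = [0.0, 1.0]
eps = 1e-3
p = np.array([1.0, 0.0])          # all mass at 0
q = np.array([1.0 - eps, eps])    # mass eps moved to 1

w1 = wasserstein_distance(points, points, p, q)
print(w1)                          # = eps: mass eps moved over distance 1

# KL(q || p) is infinite: q puts mass where p has none
with np.errstate(divide="ignore"):
    kl = np.sum(np.where(q > 0, q * np.log(q / p), 0.0))
print(kl)                          # inf
```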
31,614 | Relation Between Wasserstein Distance and KL-Divergence (Relative Entropy) | As was pointed out in the previous post this inequality is not true in general. However, there has been a lot of research on those measures $\mu$ for which it is true for all measures $\nu$. Most prominently this holds for the standard normal distribution.
Those kinds of estimates go under the name 'Talagrand inequality'.
Take a look at the paper of Otto and Villani and references therein: http://cedricvillani.org/sites/dev/files/old_images//2012/08/014.OV-Talagrand.pdf
31,615 | Can we really assign random slopes to between-subject effects in a mixed effects model? | I will work with an example using the Verbal Aggression dataset, largely borrowing from this paper: https://www.jstatsoft.org/article/view/v039i12
library(lme4)
VA.dat <- VerbAgg[, c("Gender", "r2", "id")]
VA.dat <- VA.dat[order(VA.dat$id), ] # sort data by person id
I prepare the data to regress the binary response on gender and permit the gender effect to be different by person:
VA.dat$M <- (VA.dat$Gender == "M") + 0
VA.dat$F <- (VA.dat$Gender == "F") + 0
VA.dat$rbin <- (VA.dat$r2 == "Y") + 0
Look at data frame to confirm gender is person level:
table(aggregate(M ~ id, VA.dat, mean)$M)
#
# 0 1
# 243 73
The mean of the male indicator by person is either 1 or 0, so gender values are constant within id.
Regression:
summary(fit <- lmer(rbin ~ F + (0 + M + F || id), VA.dat))
# Random effects:
# Groups Name Variance Std.Dev.
# id M 0.04164 0.2041
# id.1 F 0.04903 0.2214
# Residual 0.20202 0.4495
# Number of obs: 7584, groups: id, 316
#
# Fixed effects:
# Estimate Std. Error t value
# (Intercept) 0.51370 0.02619 19.617
# F -0.04885 0.03037 -1.609
I have an intercept representing Male and the Fem difference from it. Note the use of the || before id in the random effect specification. It stops the gender slopes from being correlated and forces a simpler interpretation of the random slopes. With a |, each id would have a value on both male and female slopes, which can be confusing to deal with.
On average, male responses are higher than female responses. However, the random intercept variance of female respondents is slightly larger than the random intercept variance of male respondents.
To look at the random slopes:
head(ranef(fit)$id)
# M F
# 1 -0.1153768 0.000000000
# 2 -0.3926609 0.000000000
# 3 0.0000000 -0.041122748
# 4 0.0000000 0.101123911
# 5 0.0000000 -0.041122748
# 6 0.0000000 -0.005561083
The first two persons are male and the gender effects for them are more negative than the overall gender effect, $0.514 - 0.115; 0.514 - 0.393$. The next four persons are female, and the gender effect for the first two are: $0.514 - 0.049 - 0.041; 0.514 - 0.049 + 0.101$.
So clearly, one does not need variability within the grouping factor to permit the effect of a variable to vary across persons. The reason for the confusion is older model formulations which separate the data into level-1 and level-2 datasets. With lme4, and more recent model formulations, there is no need for such a formulation. Douglas Bates covers the details here: https://cran.r-project.org/web/packages/lme4/vignettes/Theory.pdf
31,616 | Calculate Earth Mover's Distance for two grayscale images | Having looked into it a little more since my initial answer: it seems indeed that the original usage in computer vision, e.g. Peleg et al. (1989), simply matched between pixel values and totally ignored location. Later work, e.g. Rubner et al. (2000), did the same but on e.g. local texture features rather than the raw pixel values. This then leaves the question of how to incorporate location.
Doing it row-by-row as you've proposed is kind of weird: you're only allowing mass to match row-by-row, so if you e.g. slid an image up by one pixel you might have an extremely large distance (which wouldn't be the case if you slid it to the right by one pixel).
A more natural way to use EMD with locations, I think, is just to do it directly between the image grayscale values, including the locations, so that it measures how much pixel "light" you need to move between the two. This is then a 2-dimensional EMD, which scipy.stats.wasserstein_distance can't compute, but e.g. the POT package can with ot.lp.emd2.
Doing this with POT, though, seems to require creating a matrix of the cost of moving any one pixel from image 1 to any pixel of image 2. Since your images each have $299 \cdot 299 = 89,401$ pixels, this would require making an $89,401 \times 89,401$ matrix, which will not be reasonable.
Update: probably a better way than I describe below is to use the sliced Wasserstein distance, rather than the plain Wasserstein. This takes advantage of the fact that 1-dimensional Wassersteins are extremely efficient to compute, and defines a distance on $d$-dimensional distributions by taking the average of the Wasserstein distance between random one-dimensional projections of the data.
This is similar to your idea of doing row and column transports: that corresponds to two particular projections. But by doing the mean over projections, you get out a real distance, which also has better sample complexity than the full Wasserstein.
In (untested, inefficient) Python code, that might look like:
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein(X, Y, num_proj):
    dim = X.shape[1]
    ests = []
    for _ in range(num_proj):
        # sample uniformly from the unit sphere: normalize a Gaussian draw
        direction = np.random.randn(dim)
        direction /= np.linalg.norm(direction)
        # project the data
        X_proj = X @ direction
        Y_proj = Y @ direction
        # compute 1d wasserstein
        ests.append(wasserstein_distance(X_proj, Y_proj))
    return np.mean(ests)
(The loop here, at least up to getting X_proj and Y_proj, could be vectorized, which would probably be faster.)
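Following that note, here is one way the vectorization might look (a sketch; `sliced_wasserstein_vec` is my name, not from the original): all projection directions are drawn at once and applied with a single matrix multiply, leaving only the cheap 1-d distances in a loop.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def sliced_wasserstein_vec(X, Y, num_proj, seed=None):
    rng = np.random.default_rng(seed)
    # Gaussian rows normalized to unit length: uniform directions on the sphere
    dirs = rng.normal(size=(num_proj, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    X_proj = X @ dirs.T  # shape (n, num_proj): all projections at once
    Y_proj = Y @ dirs.T
    return np.mean([wasserstein_distance(X_proj[:, k], Y_proj[:, k])
                    for k in range(num_proj)])
```

Projecting a point set against itself gives a distance of zero, while a shifted copy gives a positive distance.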
Another option would be to simply compute the distance on images which have been resized smaller (by simply adding grayscales together). If you downscaled by a factor of 10 to make your images $30 \times 30$, you'd have a pretty reasonably sized optimization problem, and in this case the images would still look pretty different. However, this is naturally only going to compare images at a "broad" scale and ignore smaller-scale differences.
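A sketch of that downscaling (block-summing so the total "light" is preserved; `downscale` is an illustrative helper, assuming the side length divides evenly by the factor):

```python
import numpy as np

def downscale(img, f):
    # sum disjoint f-by-f blocks; total grayscale mass is unchanged
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).sum(axis=(1, 3))

img = np.arange(36.0).reshape(6, 6)
small = downscale(img, 3)          # a 2 x 2 image
assert small.sum() == img.sum()    # grayscale mass preserved
```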
There are also, of course, computationally cheaper methods to compare the original images. You can think of the method I've listed here as treating the two images as distributions of "light" over $\{1, \dots, 299\} \times \{1, \dots, 299\}$ and then computing the Wasserstein distance between those distributions; one could instead compute the total variation distance by simply
$$\operatorname{TV}(P, Q) = \frac12 \sum_{i=1}^{299} \sum_{j=1}^{299} \lvert P_{ij} - Q_{ij} \rvert,$$
or similarly a KL divergence or other $f$-divergences.
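As a sketch, the total variation distance above is a one-liner once each image is normalized to sum to 1 (toy 4×4 "images" here):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.random((4, 4)); P /= P.sum()   # image 1 as a distribution of light
Q = rng.random((4, 4)); Q /= Q.sum()   # image 2

tv = 0.5 * np.abs(P - Q).sum()
# tv lies in [0, 1]: 0 for identical images, 1 for disjoint support
```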
These are trivial to compute in this setting but treat each pixel totally separately. There are also "in-between" distances; for example, you could apply a Gaussian blur to the two images before computing similarities, which would correspond to estimating
$$
L_2(p, q) = \int (p(x) - q(x))^2 \mathrm{d}x
$$
between the two densities with a kernel density estimate. What distance is best is going to depend on your data and what you're using it for.
31,617 | Calculate Earth Mover's Distance for two grayscale images | I think for your image size requirement, maybe sliced Wasserstein as @Dougal suggests is probably the best suited, since 299^4 * 4 bytes would mean a memory requirement of ~32 GB for the transport matrix, which is quite huge.
For the sake of completeness in answering the general question of comparing two grayscale images using EMD, and if speed of estimation is a criterion, one could also consider the regularized OT distance, which is available in the POT toolbox through the ot.sinkhorn(a, b, M1, reg) command: the regularized version is supposed to optimize to a solution faster than the ot.emd(a, b, M1) command.
31,618 | How to test whether $\mu_1 < \mu_2 <\mu_3$? | In statistics you cannot test whether "X is true or not". You can only try to find evidence that a null hypothesis is false.
Let's say your null hypothesis is
$$
H_0^1: \mu_1 < \mu_2 < \mu_3.
$$
Let's also assume that you have a way of estimating the vector $\mu = (\mu_1, \mu_2, \mu_3)'$. To keep things simple, assume that you have an estimator
$$
x \sim N(\mu, \Sigma),
$$
where $\Sigma$ is a $3 \times 3$ covariance matrix.
We can rewrite the null hypothesis as
$$
A \mu < 0,
$$
where
$$
A = \begin{bmatrix}
1 & - 1 & 0 \\
0 & 1 & - 1
\end{bmatrix}.
$$
This shows that your null hypothesis can be expressed as an inequality restriction on the vector $A \mu$. A natural estimator of $A\mu$ is given by
$$
A x \sim N(A\mu, A\Sigma A').
$$
You can now use the framework for testing inequality constraint on normal vectors given in:
Kudo, Akio (1963). “A multivariate analogue of the one-sided test”. In: Biometrika 50.3/4, pp. 403–418.
This test will also work if the normality assumption holds only approximately ("asymptotically"). For example, it will work if you can compute sample means from the groups. If you draw samples of size $n_1, n_2, n_3$ and if you can draw independently from the groups then $\Sigma$ is a diagonal matrix with diagonal
$$
(\sigma_1^2/n_1, \sigma_2^2/n_2, \sigma_3^2/n_3)',
$$
where $\sigma_k^2$ is the variance in group $k = 1, 2, 3$. In an application, you can use sample variance instead of the unknown population variance without changing the properties of the test.
If on the other hand your alternative hypothesis is
$$
H_1^2: \mu_1 < \mu_2 < \mu_3
$$
then your null hypothesis becomes
$$
H_0^2: \text{NOT } H_1^2.
$$
This isn't very operational. Remember that our new alternative hypothesis can be written as $H_1^2: A\mu < 0$, so that
$$
H_0^2: \text{there exists a $k=1,2$ such that $(A\mu)_k \geq 0$}.
$$
I don't know if there exists any specialized test for this, but you can definitely try some strategy based on successive testing. Remember that you try to find evidence against the null. So you may first test
$$
H_{0,1}^2: (A\mu)_1 \geq 0.
$$
and then
$$
H_{0,2}^2: (A\mu)_2 \geq 0.
$$
If you reject both times then you have found evidence that $H_0$ is false and you reject $H_0$. If you don't, then you don't reject $H_0$.
Since you are testing multiple times you have to adjust the nominal level of the subtest. You can use a Bonferroni correction or figure out an exact correction (since you know $\Sigma$).
Another way of constructing a test for $H_0^2$ is to note that
$$
H_0^2: \max_{k=1,2} (A\mu)_k \geq 0.
$$
This implies using $\max Ax$ as a test statistic. The test will have a non-standard distribution under the null, but the appropriate critical value should still be fairly easy to compute.
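A small numerical sketch of the pieces above (simulated data; the critical-value computation is omitted): form $A$, estimate $x$ and the diagonal $\Sigma$ from group samples, and compute the statistic $\max_k (Ax)_k$.

```python
import numpy as np

A = np.array([[1.0, -1.0, 0.0],
              [0.0, 1.0, -1.0]])

rng = np.random.default_rng(0)
# simulated groups whose true means satisfy mu1 < mu2 < mu3
samples = [rng.normal(m, 1.0, size=n)
           for m, n in zip((0.0, 0.5, 1.0), (50, 60, 70))]

x = np.array([s.mean() for s in samples])                  # sample means
Sigma = np.diag([s.var(ddof=1) / len(s) for s in samples]) # est. cov of x
cov_Ax = A @ Sigma @ A.T                                   # covariance of A x
stat = (A @ x).max()   # small (very negative) values are evidence against H0^2
```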
Let's say your null hypothesis is
$$
H_0^1: \mu_1 < \mu_2 < \mu_3.
$$
Let | How to test whether $\mu_1 < \mu_2 <\mu_3$?
In statistics you cannot test whether "X is true or not". You can only try to find evidence that a null hypothesis is false.
Let's say your null hypothesis is
$$
H_0^1: \mu_1 < \mu_2 < \mu_3.
$$
Let's also assume that you have a way of estimating the vector $\mu = (\mu_1, \mu_2, \mu_3)'$. To keep things simply assume that you have an estimator
$$
x \sim N(\mu, \Sigma),
$$
where $\Sigma$ is $3 \times 3$ covariate matrix.
We can rewrite the null hypothesis as
$$
A \mu < 0,
$$
where
$$
A = \begin{bmatrix}
1 & - 1 & 0 \\
0 & 1 & - 1
\end{bmatrix}.
$$
This shows that your null hypothesis can be expressed as an inequality restriction on the vector $A \mu$. A natural estimator of $A\mu$ is given by
$$
A x \sim N(A\mu, A\Sigma A').
$$
You can now use the framework for testing inequality constraint on normal vectors given in:
Kudo, Akio (1963). “A multivariate analogue of the one-sided test”. In: Biometrika 50.3/4, pp. 403–418.
This test will also work if the normality assumption holds only approximately ("asymptotically"). For example, it will work if you can draw sample means from the groups. If you draw samples of size $n_1, n_2, n_3$ and if you can draw independently from the groups then $\Sigma$ is a diagonal matrix with diagonal
$$
(\sigma_1^2/n_1, \sigma_2^2/n_2, \sigma_3^2/n_3)',
$$
where $\sigma_k^2$ is the variance in group $k = 1, 2, 3$. In an application, you can use sample variance instead of the unknown population variance without changing the properties of the test.
If on the other hand your alternative hypothesis is
$$
H_1^2: \mu_1 < \mu_2 < \mu_3
$$
then your null hypothesis becomes
$$
H_0^2: \text{NOT $H_1$}.
$$
This isn't very operational. Remember that our new alternative hypothesis can be written as $H_1: A\mu <0$ so that
$$
H_0^2: \text{there exists a $k=1,2$ such that $(A\mu)_k \geq 0$}.
$$
I don't know if there exists any specialized test for this, but you can definitely try some strategy based on successive testing. Remember that you try to find evidence against the null. So you may first test
$$
H_{0,1}^2: (A\mu)_1 \geq 0.
$$
and then
$$
H_{0,2}^2: (A\mu)_2 \geq 0.
$$
If you reject both times then you have found evidence that $H_0$ is false and you reject $H_0$. If you don't, then you don't reject $H_0$.
Since you are testing multiple times you have to adjust the nominal level of the subtest. You can use a Bonferroni correction or figure out an exact correction (since you know $\Sigma$).
Another way of constructing a test for $H_0^2$ is to note that
$$
H_0^2: \max_{k=1,2} (A\mu)_k \geq 0.
$$
This implies using $\max Ax$ as a test statistic. The test will have a non-standard distribution under the null, but the appropriate critical value should still be fairly easy to compute. | How to test whether $\mu_1 < \mu_2 <\mu_3$?
In statistics you cannot test whether "X is true or not". You can only try to find evidence that a null hypothesis is false.
Let's say your null hypothesis is
$$
H_0^1: \mu_1 < \mu_2 < \mu_3.
$$
Let |
31,619 | How to test whether $\mu_1 < \mu_2 <\mu_3$? | The answer provided by @andreas-dzemski is correct only if we know that the data is normally distributed.
If we do not know the distribution, I believe it would be better to run a nonparametric test. In this case, the simplest seems to be a resampling test (the code below resamples each group with replacement, i.e. a bootstrap). This is a book about the topic and this is a nice online explanation. Below I include R code to compute this test.
# some test data
D <- data.frame(group1=c(3,6,2,2,3,9,3,4,2,5), group2=c(5,3,10,1,10,2,4,4,2,2), group3=c(8,0,1,5,10,7,3,4,8,1))
# sample with replacement
resample <- function(X) sample(X, replace=TRUE)
# return true if mu1 < mu2 < mu3
test <- function(mu1, mu2, mu3) (mu1 < mu2) & (mu2 < mu3)
# resampling test that returns the probability of observing the relationship
mean(replicate(1000, test(mean(resample(D$group1)), mean(resample(D$group2)), mean(resample(D$group3)))))
31,620 | What are the best books to study Neural Networks from a purely mathematical perspective? | A very good reason why there are few very rigorous books on neural networks is that, apart from the Universal Approximation theorem (whose relevance to the learning problem is vastly overrated), there are very few mathematically rigorous results about NNs, and most of them are of a negative nature. It's thus understandably unlikely that someone would decide to write a math book which contains few proofs, most of which tell you what you can't do with your fancy model. As a matter of fact, Foundations of Machine Learning by Mehryar Mohri, Afshin Rostamizadeh and Ameet Talwalkar, a book which is second to none in terms of rigour, explicitly chooses not to cover Neural networks because of the lack of rigorous results:
https://www.amazon.com/Foundations-Machine-Learning-Adaptive-Computation/dp/0262039400/
Anyway, a few mathematical proofs (including the proof that the backpropagation algorithm computes the gradient of the loss function with respect to the weights) can be found in Understanding Machine Learning: From Theory to Algorithms, by Shai Shalev-Shwartz and Shai Ben-David:
https://www.amazon.com/Understanding-Machine-Learning-Theory-Algorithms-ebook/dp/B00J8LQU8I
Neural Network Methods in Natural Language Processing by Yoav Goldberg and Graeme Hirst is also quite rigorous, but probably not enough for you:
https://www.amazon.com/Language-Processing-Synthesis-Lectures-Technologies/dp/1627052984
Finally, Linear Algebra and Learning from Data by Gilbert Strang covers a part of the math of deep learning, which while not being the whole story, is definitely a cornerstone, i.e., linear algebra:
https://www.amazon.com/-Algebra-Learning-Gilbert-Strang/dp/0692196382
EDIT: this has recently changed with the latest advancements of Deep Learning Theory, e.g., NTK theory, new concentration of measure results, new results on Rademacher complexity and covering numbers, etc. Matus Telgarsky wrote an excellent online book on the topic:
https://mjt.cs.illinois.edu/dlt/
31,621 | What are the best books to study Neural Networks from a purely mathematical perspective? | I really liked Deep Learning by Goodfellow, Bengio and Courville. Papers listed in its bibliography develop and get deep in most mathematical aspects.
In this post you can find a pair of suggestions from a more rigorous perspective.
But if you really want to be rigorous about the fundamentals, I encourage you to read Information Theory, Inference and Learning Algorithms by MacKay; there is a free official electronic copy on the author's website.
In this post you can find a pair of suggestions f | What are the best books to study Neural Networks from a purely mathematical perspective?
I really liked Deep Learning by Goodfellow, Bengio and Courville. Papers listed in its bibliography develop and get deep in most mathematical aspects.
In this post you can find a pair of suggestions from a more rigorous perspective.
But if you really want to be rigorous to the fundamentals, I encourage you to read Information Theory, Inference and Learning Algorithms by MacKay, there is a free official electronic copy in author's website. | What are the best books to study Neural Networks from a purely mathematical perspective?
I really liked Deep Learning by Goodfellow, Bengio and Courville. Papers listed in its bibliography develop and get deep in most mathematical aspects.
In this post you can find a pair of suggestions f |
31,622 | What happens to the initial hidden state in an RNN layer? | There are two common RNN strategies.
You have a long sequence that's always contiguous (for example, a language model trained on the text of War and Peace); because the novel's words all have a very specific order, you have to train it on consecutive sequences, so the last hidden state of the previous sequence is used as the initial hidden state of the next sequence.
The way most people do this is to traverse the sequences in order, without shuffling. Suppose you use a mini-batch size of 2. You can cut the book in half: the first sample will always have text from the first half of War and Peace and the second sample will always have text from the second half. Instead of drawing samples at random, the text is always read in order, so the first sample in the first mini-batch has the first words of the text, and the second sample in the first mini-batch has the first words after the mid-point of the text.
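The chunking scheme just described can be sketched generically. This is a toy illustration only (made-up token ids, a hypothetical `contiguous_batches` helper; real data loaders differ in the details):

```python
def contiguous_batches(tokens, batch_size, seq_len):
    # Split the stream into `batch_size` parallel contiguous streams,
    # then cut each stream into chunks of length `seq_len`.
    per_stream = len(tokens) // batch_size
    streams = [tokens[i * per_stream:(i + 1) * per_stream]
               for i in range(batch_size)]
    for start in range(0, per_stream, seq_len):
        yield [s[start:start + seq_len] for s in streams]

tokens = list(range(12))  # stand-in for the words of the book, in order
batches = list(contiguous_batches(tokens, batch_size=2, seq_len=3))

# First mini-batch: the first words of the text, and the first words
# after the mid-point of the text.
assert batches[0] == [[0, 1, 2], [6, 7, 8]]
# The next mini-batch continues each stream exactly where it left off,
# so hidden states can be carried forward within each stream.
assert batches[1] == [[3, 4, 5], [9, 10, 11]]
```

Each position in the batch thus always continues the same stream, which is what makes carrying the hidden state across mini-batches meaningful.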
Purely abstractly, I suppose you could do something more complicated where you shuffle the data but compute the initial hidden state for each position in the sequence (e.g. by running the network over all the text up to that point, or by saving & restoring states), but this sounds expensive.
You have lots of distinct sequences (such as discrete tweets); it can make sense to start each sequence with hidden states of all 0s. Some people prefer to train a "baseline" initial state (user0's suggestion). I read an article advocating doing this if your data has lots of short sequences but I can't find the article now.
Which strategy is appropriate depends on the problem, and specific choices about how to represent that problem.
From the perspective of developing software, an ideal implementation would somehow expose functionality for both options to users. This can be tricky, and different software (pytorch, tensorflow, keras) achieves this in different ways.
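The two strategies can be made concrete with a toy scalar RNN cell. This is purely illustrative (made-up weights, inputs, and chunking; not how any particular framework implements it):

```python
import math

def rnn_step(h, x, w_h=0.5, w_x=1.0):
    # One tanh RNN cell with a scalar hidden state and fixed toy weights.
    return math.tanh(w_h * h + w_x * x)

def run(seq, h0=0.0):
    # Unroll the cell over a sequence and return the final hidden state.
    h = h0
    for x in seq:
        h = rnn_step(h, x)
    return h

# Strategy 1: one long contiguous stream, processed in chunks; the last
# hidden state of each chunk seeds the next chunk.
book = [0.1, 0.4, -0.2, 0.3, 0.5, -0.1]
h = 0.0
for chunk in [book[:3], book[3:]]:
    h = run(chunk, h0=h)
# Carrying the state across chunks is equivalent to one long pass.
assert abs(h - run(book)) < 1e-12

# Strategy 2: independent sequences (e.g. tweets), each reset to zeros.
tweets = [[0.2, -0.3], [0.7]]
final_states = [run(t, h0=0.0) for t in tweets]
```

A trainable "baseline" initial state would simply replace the `0.0` used for each new sequence in strategy 2 with a learned parameter.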
31,623 | In multiple regression, why are interactions modelled as products, and not something else, of the predictors? | We can conceive of an "interaction" between regressor variables $x_1$ and $x_2$ as a departure from a perfectly linear relationship in which the relationship between one regressor and the response is different for different values of the other regressors. The usual "interaction term" is, in a sense to be explained below, a "simplest" such departure.
Definitions and Concepts
"Linear relationship" simply means the usual model in which we suppose a response $Y$ differs from a linear combination of the $x_i$ (and a constant) by independent, zero-mean errors $\varepsilon:$
$$Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \varepsilon.\tag{*}$$
"Interaction," in the most general sense, means the parameters $\beta_i$ may depend on other variables.
Specifically, in this example of just two regressors, we might generically write
$$\beta_1 = \beta_1(x_2)\text{ and }\beta_2 = \beta_2(x_1).$$
Analysis
Now, in practice, nobody except a theoretical physicist really believes model $(*)$ is fully accurate: it's an approximation to the truth and, we hope, a close one. Pursuing this idea further, we might ask whether we could similarly approximate the functions $\beta_i$ with linear ones in case we need to model some kind of interaction. Specifically, we could try to write
$$\beta_1(x_2) = \gamma_0 + \gamma_1 x_2 + \text{ tiny error}_1;$$
$$\beta_2(x_1) = \delta_0 + \delta_1 x_1 + \text{ tiny error}_2.$$
Let's see where that leads. Plugging these linear approximations into $(*)$ gives
$$\eqalign{
Y &= \beta_0 + \beta_1(x_2) x_1 + \beta_2(x_1) x_2 + \varepsilon \\
&= \beta_0 + (\gamma_0 + \gamma_1 x_2 + \text{ tiny error}_1)x_1 + (\delta_0 + \delta_1 x_1 + \text{ tiny error}_2)x_2 + \varepsilon \\
&= \beta_0 + \gamma_0 x_1 + \delta_0 x_2 + (\gamma_1 + \delta_1)x_1 x_2 + \ldots
}$$
where "$\ldots$" represents the total error,
$$\ldots = (\text{ tiny error}_1)x_1 + (\text{ tiny error}_2)x_2 + \varepsilon.$$
With any luck, multiplying those two "tiny errors" by typical values of the $x_i$ will either (a) be inconsequential compared to $\varepsilon$ or (b) can be treated as random terms which, when added to $\varepsilon$ (and maybe adjusting the constant term $\beta_0$ to accommodate any systematic bias) can be treated as a random error term.
In either case, with a change of notation we see that this linear-approximation-to-an-interaction model takes the form
$$Y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12}x_1 x_2 + \varepsilon,\tag{**}$$
which is precisely the usual "interaction" regression model. (Note that none of the new parameters, nor $\varepsilon$ itself, is the same quantity originally represented by those terms in $(*).$)
Observe how $\beta_{12}$ arises through variation in both the original parameters. It captures the combination of (i) how the coefficient of $x_1$ depends on $x_2$ (namely, through $\gamma_1$) and (ii) how the coefficient of $x_2$ depends on $x_1$ (through $\delta_1$).
Some Consequences
It is a consequence of this analysis that if we fix all but one of the regressors, then (conditionally) the response $Y$ is still a linear function of the remaining regressor. For instance, if we fix the value of $x_2,$ then we may rewrite the interaction model $(**)$ as
$$Y = (\beta_0 + \beta_2 x_2) + (\beta_1 + \beta_{12} x_2) x_1 + \varepsilon,$$
where the intercept is $\beta_0 + \beta_2 x_2$ and the slope (that is, the $x_1$ coefficient) is $\beta_1 + \beta_{12} x_2.$ This allows for easy description and insight. Geometrically, the surface given by the function
$$f(x_1,x_2) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \beta_{12}x_1x_2$$
is ruled: when we slice it parallel to either of the coordinate axes, the result is always a line. (However, the surface itself is not planar except when $\beta_{12}=0.$ Indeed, it everywhere has negative Gaussian curvature.)
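This ruled-surface observation is easy to check numerically. The sketch below uses arbitrary illustrative coefficient values (not estimated from any data) and confirms that each slice at fixed $x_2$ is exactly the line with intercept $\beta_0+\beta_2x_2$ and slope $\beta_1+\beta_{12}x_2$:

```python
# Illustrative coefficient values (not estimated from any data).
b0, b1, b2, b12 = 1.0, 2.0, -0.5, 0.75

def f(x1, x2):
    # The interaction-model mean surface.
    return b0 + b1 * x1 + b2 * x2 + b12 * x1 * x2

x2_fixed = 3.0
intercept = b0 + b2 * x2_fixed   # conditional intercept at this slice
slope = b1 + b12 * x2_fixed      # conditional slope in x1 at this slice

# Every slice of the surface at fixed x2 is exactly a line.
for x1 in [-2.0, 0.0, 1.5, 4.0]:
    assert abs(f(x1, x2_fixed) - (intercept + slope * x1)) < 1e-9
```

The same check with the roles of $x_1$ and $x_2$ swapped verifies the slices parallel to the other axis.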
If our hope for (a) or (b) does not pan out, we might further expand the functional behavior of the original $\beta_i$ to include terms of second order or higher. Carrying out the same analysis shows this will introduce terms of the form $x_1^2,$ $x_2^2,$ $x_1x_2^2,$ $x_1^2x_2,$ and so forth into the model. In this sense, including a (product) interaction term is merely the first--and simplest--step towards modeling nonlinear relationships between the response and the regressors by means of polynomial functions.
Finally, in his textbook EDA (Addison-Wesley 1977), John Tukey showed how this approach can be carried out far more generally. After first "re-expressing" (that is, applying suitable non-linear transformations to) the regressors and the response, it often is the case that either model $(*)$ applies to the transformed variables or, if not, model $(**)$ can easily be fit (using a robust analysis of residuals). This allows for a huge variety of nonlinear relationships to be expressed and interpreted as conditionally linear responses.
31,624 | When is deviation coding useful? | @llewmills: This week, I encountered a project where the deviation coding you inquired about came in handy, so I thought I would share here what I learned on this topic.
First, I think it will be easier if we start with a simpler model:
m0.dev <- lm(outcome ~ group, df,
contrasts = list(group = c(-1,1)))
summary(m0.dev)
whose output is given by:
Call:
lm(formula = outcome ~ group, data = df,
contrasts = list(group = c(-1, 1)))
Residuals:
Min 1Q Median 3Q Max
-3.3119 -0.6728 0.1027 0.6748 2.8539
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.07010 0.07019 29.49 <2e-16 ***
group1 1.98435 0.07019 28.27 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9926 on 198 degrees of freedom
Multiple R-squared: 0.8015, Adjusted R-squared: 0.8005
F-statistic: 799.3 on 1 and 198 DF, p-value: < 2.2e-16
In this model, which uses deviation coding for group, the intercept represents the overall mean of the outcome value y across the groups a and b, whereas the coefficient of group1 represents the difference between this overall mean and the mean of the outcome value y for group a. The overall mean is none other than the mean of these two means: (i) the mean of y for group a and (ii) the mean of y for group b.
In other words:
2.07010 = overall mean of y across groups a and b
1.98435 = (overall mean of y across groups a and b) - mean of y for group a
Recall that, for your simulated data, the mean of y for group a is 0.08574619 and the mean of y for group b is 4.05445164. Indeed:
means <- tapply(df$outcome, df$group, mean)
means
> means
a b
0.08574619 4.05445164
Here is how we recover these means from the model summary reported above:
# group a:
mean of group a = overall mean - (overall mean - mean of group a)
= intercept in deviation coded model m0.dev - coef of group a (aka, group 1) in model m0.dev
= 2.07010 - 1.98435 = 0.08575
# group b:
mean of group b = overall mean - (overall mean - mean of group b)
= overall mean + (overall mean - mean of group a)
= intercept in deviation coded model m0.dev + coef of group a (aka, group 1) in model m0.dev
= 2.07010 + 1.98435 = 4.05445
In the above, we used the fact that the sum of the deviations of each group's mean from the overall mean is zero:
(overall mean - mean of group a) + (overall mean - mean of group b) = 0.
This implies that:
(overall mean - mean of group b) = -(overall mean - mean of group a).
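These identities can also be checked outside R. Below is a minimal Python sketch (hypothetical balanced toy data, not the df used above) that fits the single-contrast regression in closed form and verifies that the intercept equals the mean of the two group means and that the coefficient equals the overall mean minus the mean of group a:

```python
# Hypothetical balanced two-group data; group a coded -1, group b coded +1.
ya = [0.0, 0.2, 0.1]                      # outcomes for group a
yb = [4.0, 4.1, 3.9]                      # outcomes for group b
y = ya + yb
code = [-1.0] * len(ya) + [1.0] * len(yb)

n = len(y)
ybar = sum(y) / n
cbar = sum(code) / n                      # 0 because the groups are balanced
b1 = sum((c - cbar) * (v - ybar) for c, v in zip(code, y)) / \
     sum((c - cbar) ** 2 for c in code)   # slope of the simple regression
b0 = ybar - b1 * cbar                     # intercept

ma = sum(ya) / len(ya)                    # mean of group a
mb = sum(yb) / len(yb)                    # mean of group b
assert abs(b0 - (ma + mb) / 2) < 1e-9     # intercept = mean of the group means
assert abs(b1 - (b0 - ma)) < 1e-9         # coefficient = overall mean - mean(a)
```

Note that the balance assumption is what makes the mean of the codes exactly zero; with unbalanced groups the intercept is still the unweighted mean of the group means under deviation coding, but the closed-form shortcut above no longer applies as directly.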
Now, we can plot the group-specific means and the overall means derived from the summary of model m0.dev using this code:
plot(1:2, as.numeric(means), type="n", xlim=c(0.5,2.5), ylim=c(0,5), xlab="group", ylab="y", xaxt="n", las=1)
segments(x0=1-0.2, y0=means[1], x1 = 1+0.2, y1=means[1], lwd=3, col="dodgerblue") # group a mean
segments(x0=2-0.2, y0=means[2], x1 = 2+0.2, y1=means[2], lwd=3, col="orange")# group b mean
segments(x0=1.5-0.2, y0=coef(m0.dev)[1], x1 = 1.5+0.2, y1=coef(m0.dev)[1], lwd=3, col="red3", lty=2) # overall mean
text(1, means[1]+0.3, "Mean value of y \nfor group a\n(0.0857)", cex=0.8)
text(2, means[2]+0.3, "Mean value of y \nfor group b\n(4.0545)", cex=0.8)
text(1.5, coef(m0.dev)[1]+0.3, "Overall mean value of y \n across groups a and b\n(2.0701)", cex=0.8)
axis(1, at = c(1,2), labels=c("a","b"))
We are now ready to move to the more complicated model below:
m1.dev <- lm(outcome ~ pred*group, df, contrasts = list(group = c(-1,1)))
summary(m1.dev)
whose output is given by:
Call:
lm(formula = outcome ~ pred * group, data = df, contrasts = list(group = c(-1,
1)))
Residuals:
Min 1Q Median 3Q Max
-2.78611 -0.60587 -0.05853 0.58578 2.42940
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.03308 0.06125 33.191 < 2e-16 ***
pred 0.52114 0.06557 7.947 1.47e-13 ***
group1 2.02386 0.06125 33.041 < 2e-16 ***
pred:group1 -0.14406 0.06557 -2.197 0.0292 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8628 on 196 degrees of freedom
Multiple R-squared: 0.8515, Adjusted R-squared: 0.8493
F-statistic: 374.7 on 3 and 196 DF, p-value: < 2.2e-16
This model is essentially fitting two different lines - one per each of the groups a and b - which describe how the outcome value y tends to vary with the predictor pred. The intercepts of the two lines are expressed in relation to the intercept of the overall line across the two groups. Furthermore, the slopes of the two lines are expressed in relation to the slopes of the overall regression line across the two groups. The model reports the following quantities:
2.03308 = estimated intercept of the overall regression line across the two groups a and b (i.e., intercept reported in the summary for model m1.dev)
0.52114 = estimated slope of the overall regression line
(i.e., the coefficient of pred reported in the summary for model m1.dev)
2.02386 = estimated difference between the intercept of the overall regression line and the intercept of the regression line for group a (aka, group 1) (i.e., the coefficient of group1 reported in the summary for model m1.dev)
-0.14406 = estimated difference between the slope of the overall regression line and the slope of the regression line for group a (aka, group 1) (i.e., the coefficient of pred:group1 reported in the summary for model m1.dev)
Here is the R code which allows you to recover the intercept and slope for the group-specific regression lines:
# Note: group a is re-labelled as group 1
intercept of group a line = intercept of overall line - (intercept of overall line - intercept of group a line)
= intercept in deviation coded model m1.dev - coef of group 1 in model m1.dev
= 2.03308 - (2.02386) = 0.00922
slope of group a line = slope of overall line - (slope of overall line - slope of group a line) =
= slope of pred in deviation coded model m1.dev - slope of pred:group1 in model m1.dev
= 0.52114 - (-0.14406) = 0.6652
# Now, use that:
# (intercept of overall line - intercept of group a line) + (intercept of overall line - intercept of group b line) = 0
# which means that:
# (intercept of overall line - intercept of group b line) = - (intercept of overall line - intercept of group a line)
# Similarly:
# (slope of overall line - slope of group a line) + (slope of overall line - slope of group b line) = 0
# which means that:
# (slope of overall line - slope of group b line) = - (slope of overall line - slope of group a line)
intercept of group b line = intercept of overall line - (intercept of overall line - intercept of group b line)
= intercept of overall line + (intercept of overall line - intercept of group a line)
= intercept in deviation coded model m1.dev + coef of group 1 in model m1.dev
= 2.03308 + (2.02386) = 4.05694
slope of group b line = slope of overall line - (slope of overall line - slope of group b line) =
= slope of overall line + (slope of overall line - slope of group a line) =
= slope of pred in deviation coded model m1.dev + slope of pred:group1 in model m1.dev
= 0.52114 + (-0.14406) = 0.37708
Using this information, you can plot the group-specific and overall regression lines with this code:
plot(outcome ~ pred, data = df, type="n")
# overall regression line
abline(a = coef(m1.dev)["(Intercept)"],
b = coef(m1.dev)["pred"],
col="red3", lwd=2, lty=2)
# regression line for group a
abline(a = coef(m1.dev)["(Intercept)"] - coef(m1.dev)["group1"],
b = coef(m1.dev)["pred"] - coef(m1.dev)["pred:group1"],
col="dodgerblue", lwd=2)
# regression line for group b
abline(a = coef(m1.dev)["(Intercept)"] + coef(m1.dev)["group1"],
b = coef(m1.dev)["pred"] + coef(m1.dev)["pred:group1"], col="orange", lwd=2)
points(outcome ~ pred, data = subset(df, group=="a"), col="dodgerblue")
points(outcome ~ pred, data = subset(df, group=="b"), col="orange")
Of course, the summary of model m1.dev allows you to test the significance of the intercept and slope of the overall regression line by judging the significance of the p-values reported for the Intercept and pred portions of the output. It also allows you to test the significance of the:
Deviation of the intercept of the group a regression line from the overall intercept (via the p-value reported for group1);
Deviation of the slope of the group a regression line from the overall slope (via the p-value reported for pred:group1).
When would you want to use the deviation coding situation captured by the model m1.dev? If:
The outcome variable y represents some water quality parameter;
The predictor variable pred represents a (centered) year variable;
The group factor represents a season variable (with levels a = Winter and b = Summer);
then you can envision wanting to report not just a winter-specific and a summer-specific (linear) trend over time in the values of the water quality parameter, but also an overall trend across the two seasons. However, for the overall trend to be interpretable, you might want to have some overlap in the data for the two seasons. (In your generated data, you have essentially no overlap - so an overall trend may not be informative.)
I hope this answers your question, which was very interesting.
First, I think it will be ea | When is deviation coding useful?
@llewmills: This week, I encountered a project where the deviation coding you inquired about came in handy, so I thought I would share here what I learned on this topic.
First, I think it will be easier if we start with a simpler model:
m0.dev <- lm(outcome ~ group, df,
contrasts = list(group = c(-1,1)))
summary(m0.dev)
whose output is given by:
Call:
lm(formula = outcome ~ group, data = df,
contrasts = list(group = c(-1, 1)))
Residuals:
Min 1Q Median 3Q Max
-3.3119 -0.6728 0.1027 0.6748 2.8539
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.07010 0.07019 29.49 <2e-16 ***
group1 1.98435 0.07019 28.27 <2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.9926 on 198 degrees of freedom
Multiple R-squared: 0.8015, Adjusted R-squared: 0.8005
F-statistic: 799.3 on 1 and 198 DF, p-value: < 2.2e-16
In this model, which uses deviation coding for group, the intercept represents the overall mean of the outcome value y across the groups a and b, whereas the coefficient of group1 represents the difference between this overall mean and the mean of the outcome value y for group a. The overall mean is none other than the mean of these two means: (i) the mean of y for group a and (ii) the mean of y for group b.
In other words:
2.07010 = overall mean of y across groups a and b
1.98435 = (overall mean of y across groups a and b) - mean of y for group a
Recall that, for your simulated data, the mean of y for group a is 0.08574619 and the mean of y for group b is 4.05445164. Indeed:
means <- tapply(df$outcome, df$group, mean)
means
> means
a b
0.08574619 4.05445164
Here is how we recover these means from the model summary reported above:
# group a:
mean of group a = overall mean - (overall mean - mean of group a)
= intercept in deviation coded model m0.dev - coef of group a (aka, group 1) in model m0.dev
= 2.07010 - 1.98435 = 0.08575
# group b:
mean of group b = overall mean - (overall mean - mean of group b)
= overall mean + (overall mean - mean of group a)
= intercept in deviation coded model m0.dev + coef of group a (aka, group 1) in model m0.dev
= 2.07010 + 1.98435 = 4.05445
In the above, we used the fact that the sum of the deviations of each group's mean from the overall mean is zero:
(overall mean - mean of group a) + (overall mean - mean of group b) = 0.
This implies that:
(overall mean - mean of group b) = -(overall mean - mean of group a).
Now, we can plot the group-specific means and the overall means derived from the summary of model m0.dev using this code:
plot(1:2, as.numeric(means), type="n", xlim=c(0.5,2.5), ylim=c(0,5), xlab="group", ylab="y", xaxt="n", las=1)
segments(x0=1-0.2, y0=means[1], x1 = 1+0.2, y1=means[1], lwd=3, col="dodgerblue") # group a mean
segments(x0=2-0.2, y0=means[2], x1 = 2+0.2, y1=means[2], lwd=3, col="orange")# group b mean
segments(x0=1.5-0.2, y0=coef(m0.dev)[1], x1 = 1.5+0.2, y1=coef(m0.dev)[1], lwd=3, col="red3", lty=2) # overall mean
text(1, means[1]+0.3, "Mean value of y \nfor group a\n(0.0857)", cex=0.8)
text(2, means[2]+0.3, "Mean value of y \nfor group b\n(4.0545)", cex=0.8)
text(1.5, coef(m0.dev)[1]+0.3, "Overall mean value of y \n across groups a and b\n(2.0701)", cex=0.8)
axis(1, at = c(1,2), labels=c("a","b"))
We are now ready to move to the more complicated model below:
m1.dev <- lm(outcome ~ pred*group, df, contrasts = list(group = c(-1,1)))
summary(m1.dev)
whose output is given by:
Call:
lm(formula = outcome ~ pred * group, data = df, contrasts = list(group = c(-1,
1)))
Residuals:
Min 1Q Median 3Q Max
-2.78611 -0.60587 -0.05853 0.58578 2.42940
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 2.03308 0.06125 33.191 < 2e-16 ***
pred 0.52114 0.06557 7.947 1.47e-13 ***
group1 2.02386 0.06125 33.041 < 2e-16 ***
pred:group1 -0.14406 0.06557 -2.197 0.0292 *
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 0.8628 on 196 degrees of freedom
Multiple R-squared: 0.8515, Adjusted R-squared: 0.8493
F-statistic: 374.7 on 3 and 196 DF, p-value: < 2.2e-16
This model is essentially fitting two different lines - one per each of the groups a and b - which describe how the outcome value y tends to vary with the predictor pred. The intercepts of the two lines are expressed in relation to the intercept of the overall line across the two groups. Furthermore, the slopes of the two lines are expressed in relation to the slopes of the overall regression line across the two groups. The model reports the following quantities:
2.03308 = estimated intercept of the overall regression line across the two groups a and b (i.e., intercept reported in the summary for model m1.dev)
0.52114 = estimated slope of the overall regression line
(i.e., the coefficient of pred reported in the summary for model m1.dev)
2.02386 = estimated difference between the intercept of the overall regression line and the intercept of the regression line for group a (aka, group 1) (i.e., the coefficient of group1 reported in the summary for model m1.dev)
-0.14406 = estimated difference between the slope of the overall regression line and the slope of the regression line for group a (aka, group 1) (i.e., the coefficient of pred:group1 reported in the summary for model m1.dev)
Here is the R code which allows you to recover the intercept and slope for the group-specific regression lines:
# Note: group a is re-labelled as group 1
intercept of group a line = intercept of overall line - (intercept of overall line - intercept of group a line)
= intercept in deviation coded model m1.dev - coef of group 1 in model m1.dev
= 2.03308 - (2.02386) = 0.00922
slope of group a line = slope of overall line - (slope of overall line - slope of group a line) =
= slope of pred in deviation coded model m1.dev - slope of pred:group1 in model m1.dev
= 0.52114 - (-0.14406) = 0.6652
# Now, use that:
# (intercept of overall line - intercept of group a line) + (intercept of overall line - intercept of group b line) = 0
# which means that:
# (intercept of overall line - intercept of group b line) = - (intercept of overall line - intercept of group a line)
# Similarly:
# (slope of overall line - slope of group a line) + (slope of overall line - slope of group b line) = 0
# which means that:
# (slope of overall line - slope of group b line) = - (slope of overall line - slope of group a line)
intercept of group b line = intercept of overall line - (intercept of overall line - intercept of group b line)
= intercept of overall line + (intercept of overall line - intercept of group a line)
= intercept in deviation coded model m1.dev + coef of group 1 in model m1.dev
= 2.03308 + (2.02386) = 4.05694
slope of group b line = slope of overall line - (slope of overall line - slope of group b line) =
= slope of overall line + (slope of overall line - slope of group a line) =
= slope of pred in deviation coded model m1.dev + slope of pred:group1 in model m1.dev
= 0.52114 + (-0.14406) = 0.37708
Using this information, you can plot the group-specific and overall regression lines with this code:
plot(outcome ~ pred, data = df, type="n")
# overall regression line
abline(a = coef(m1.dev)["(Intercept)"],
b = coef(m1.dev)["pred"],
col="red3", lwd=2, lty=2)
# regression line for group a
abline(a = coef(m1.dev)["(Intercept)"] - coef(m1.dev)["group1"],
b = coef(m1.dev)["pred"] - coef(m1.dev)["pred:group1"],
col="dodgerblue", lwd=2)
# regression line for group b
abline(a = coef(m1.dev)["(Intercept)"] + coef(m1.dev)["group1"],
b = coef(m1.dev)["pred"] + coef(m1.dev)["pred:group1"], col="orange", lwd=2)
points(outcome ~ pred, data = subset(df, group=="a"), col="dodgerblue")
points(outcome ~ pred, data = subset(df, group=="b"), col="orange")
Of course, the summary of model m1.dev allows you to test the significance of the intercept and slope of the overall regression line via the p-values reported for the Intercept and pred rows of the output. It also allows you to test the significance of the:
Deviation of the intercept of the group a regression line from the overall intercept (via the p-value reported for group1);
Deviation of the slope of the group a regression line from the overall slope (via the p-value reported for pred:group1); the deviation for group b has the same magnitude with opposite sign.
When would you want to use the deviation coding situation captured by the model m1.dev? If:
The outcome variable (outcome in the code) represents some water quality parameter;
The predictor variable pred represents a (centered) year variable;
The group factor represents a season variable (with levels a = Winter and b = Summer);
then you can envision wanting to report not just a winter-specific and a summer-specific (linear) trend over time in the values of the water quality parameter, but also an overall trend across the two seasons. However, for the overall trend to be interpretable, you might want to have some overlap in the data for the two seasons. (In your generated data, you have essentially no overlap - so an overall trend may not be informative.)
I hope this answers your question, which was very interesting. | When is deviation coding useful?
31,625 | In linear regression, why should we include quadratic terms when we are only interested in interaction terms? | It depends on the goal of inference. If you want to make inference of whether there exists an interaction, for instance, in a causal context (or, more generally, if you want to interpret the interaction coefficient), this recommendation from your professor does make sense, and it comes from the fact that misspecification of the functional form can lead to wrong inferences about interaction.
Here is a simple example where there is no interaction term between $x_1$ and $x_2$ in the structural equation of $y$, yet, if you do not include the quadratic term of $x_1$, you would wrongly conclude that $x_1$ interacts with $x_2$ when in fact it doesn't.
set.seed(10)
n <- 1e3
x1 <- rnorm(n)
x2 <- x1 + rnorm(n)
y <- x1 + x2 + x1^2 + rnorm(n)
summary(lm(y ~ x1 + x2 + x1:x2))
Call:
lm(formula = y ~ x1 + x2 + x1:x2)
Residuals:
Min 1Q Median 3Q Max
-3.7781 -0.8326 -0.0806 0.7598 7.7929
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) 0.30116 0.04813 6.257 5.81e-10 ***
x1 1.03142 0.05888 17.519 < 2e-16 ***
x2 1.01806 0.03971 25.638 < 2e-16 ***
x1:x2 0.63939 0.02390 26.757 < 2e-16 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.308 on 996 degrees of freedom
Multiple R-squared: 0.7935, Adjusted R-squared: 0.7929
F-statistic: 1276 on 3 and 996 DF, p-value: < 2.2e-16
This can be interpreted as simply a case of omitted variable bias, and here $x_1^2$ is the omitted variable. If you go back and include the squared term in your regression, the apparent interaction disappears.
summary(lm(y ~ x1 + x2 + x1:x2 + I(x1^2)))
Call:
lm(formula = y ~ x1 + x2 + x1:x2 + I(x1^2))
Residuals:
Min 1Q Median 3Q Max
-3.4574 -0.7073 0.0228 0.6723 3.7135
Coefficients:
Estimate Std. Error t value Pr(>|t|)
(Intercept) -0.0419958 0.0398423 -1.054 0.292
x1 1.0296642 0.0458586 22.453 <2e-16 ***
x2 1.0017625 0.0309367 32.381 <2e-16 ***
I(x1^2) 1.0196002 0.0400940 25.430 <2e-16 ***
x1:x2 -0.0006889 0.0313045 -0.022 0.982
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
Residual standard error: 1.019 on 995 degrees of freedom
Multiple R-squared: 0.8748, Adjusted R-squared: 0.8743
F-statistic: 1739 on 4 and 995 DF, p-value: < 2.2e-16
Of course, this reasoning applies not only to quadratic terms, but misspecification of the functional form in general. The goal here is to model the conditional expectation function appropriately to assess interaction. If you are limiting yourself to modeling with linear regression, then you will need to include these nonlinear terms manually. But an alternative is to use more flexible regression modeling, such as kernel ridge regression for instance.
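As a small follow-up sketch (my addition, reusing the simulated data from above): if you do not know in advance which predictor needs a nonlinear term, including quadratic terms for both predictors manually still drives the spurious interaction estimate towards zero.

```r
# Same data-generating process as above; quadratic terms for BOTH x1 and x2
# are included manually, and the x1:x2 estimate should land near zero.
set.seed(10)
n <- 1e3
x1 <- rnorm(n)
x2 <- x1 + rnorm(n)
y <- x1 + x2 + x1^2 + rnorm(n)
fit <- lm(y ~ x1 + x2 + I(x1^2) + I(x2^2) + x1:x2)
coef(summary(fit))["x1:x2", ]  # estimate should be close to zero
```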
31,626 | In linear regression, why should we include quadratic terms when we are only interested in interaction terms? | The two models you listed in your answer can be re-expressed to make it clear how the effect of $X_1$ is postulated to depend on $X_2$ (or the other way around) in each model.
The first model can be re-expressed like this:
$$Y = \beta_0 + (\beta_1 + \beta_3X_2)X_1 + \beta_2X_2+ \epsilon,$$
which shows that, in this model, $X1$ is assumed to have a linear effect on $Y$ (controlling for the effect of $X_2$) but the the magnitude of this linear effect - captured by the slope coefficient of $X_1$ - changes linearly as a function of $X_2$. For example, the effect of $X_1$ on $Y$ may increase in magnitude as the values of $X_2$ increase.
The second model can be re-expressed like this:
$$Y = \beta_0 + (\beta_1 + \beta_3X_2)X_1 + \beta_4 X_1^2 + \beta_2X_2 +\beta_5X_2^2 + \epsilon,$$
which shows that, in this model, the effect of $X_1$ on $Y$ (controlling for the effect of $X_2$) is assumed to be quadratic rather than linear. This quadratic effect is captured by including both $X_1$ and $X_1^2$ in the model. While the coefficient of $X_1^2$ is assumed to be independent of $X_2$, the coefficient of $X_1$ is assumed to depend linearly on $X_2$.
Using either model would imply that you are making entirely different assumptions about the nature of the effect of $X_1$ on $Y$ (controlling for the effect of $X_2$).
Usually, people fit the first model. They might then plot the residuals from that model against $X_1$ and $X_2$ in turns. If the residuals reveal a quadratic pattern in the residuals as a function of $X_1$ and/or $X_2$, the model can be augmented accordingly so that it includes $X_1^2$ and/or $X_2^2$ (and possibly their interaction).
Note that I simplified the notation you used for consistency and also made the error term explicit in both models.
31,627 | With what probability one coin is better than the other? | It is easy to calculate the probability of making that observation, given that the two coins are equal. This can be done by Fisher's exact test. Given these observations
$$
\begin{array} {r|c|c}
&\text{coin }1 &\text{coin }2 \\
\hline
\text{heads} &H_1 &H_2\\
\hline
\text{tails} &n_1-H_1 &n_2-H_2\\\end{array}
$$
the probability of observing these numbers, if the coins are equal, given the numbers of tries $n_1$, $n_2$ and the total number of heads $H_1+H_2$ is
$$
p(H_1, H_2|n_1, n_2, H_1+H_2) = \frac{(H_1+H_2)!(n_1+n_2-H_1-H_2)!n_1!n_2!}{H_1!H_2!(n_1-H_1)!(n_2-H_2)!(n_1+n_2)!}.
$$
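As a quick cross-check (my addition, not part of the original answer): this factorial expression is exactly the hypergeometric probability, so in R it can be evaluated directly with `dhyper`. The counts below are purely illustrative.

```r
# The factorial formula above, evaluated on the log scale for stability;
# it coincides with the hypergeometric pmf dhyper(H1, n1, n2, H1 + H2).
fisher_p <- function(n1, H1, n2, H2) {
  exp(lfactorial(H1 + H2) + lfactorial(n1 + n2 - H1 - H2) +
      lfactorial(n1) + lfactorial(n2) -
      lfactorial(H1) - lfactorial(H2) -
      lfactorial(n1 - H1) - lfactorial(n2 - H2) - lfactorial(n1 + n2))
}
fisher_p(10, 7, 12, 4)                # direct evaluation
dhyper(7, m = 10, n = 12, k = 7 + 4)  # same value via the hypergeometric pmf
```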
But what you are asking for is the probability that one coin is better.
Since we reason about a belief on how biased the coins are, we have to use a Bayesian approach to calculate the result. Please note that in Bayesian inference the term belief is modeled as a probability, and the two terms are used interchangeably (see Bayesian probability). We call the probability that coin $i$ tosses heads $p_i$. The posterior distribution after observation, for this $p_i$, is given by Bayes' theorem:
$$
f(p_i|H_i,n_i)= \frac{f(H_i|p_i,n_i)f(p_i)}{f(n_i,H_i)}
$$
The probability density function (pdf) $f(H_i|p_i,n_i)$ is given by the Binomial probability, since the individual tries are Bernoulli experiments:
$$
f(H_i|p_i,n_i) = \binom{n_i}{H_i}p_i^{H_i}(1-p_i)^{n_i-H_i}
$$
I assume the prior knowledge on $f(p_i)$ is that $p_i$ could lie anywhere between $0$ and $1$ with equal probability, hence $f(p_i) = 1$. So the numerator is $f(H_i|p_i,n_i)f(p_i)= f(H_i|p_i,n_i)$.
In order to calculate $f(n_i,H_i)$ we use the fact that the integral over a pdf has to be one: $\int_0^1f(p|H_i,n_i)\mathrm dp = 1$. So the denominator will be a constant factor that achieves just that. There is a known pdf that differs from the numerator by only a constant factor, which is the beta distribution. Hence
$$
f(p_i|H_i,n_i) = \frac{1}{B(H_i+1, n_i-H_i+1)}p_i^{H_i}(1-p_i)^{n_i-H_i}.
$$
The pdf for the pair of probabilities of independent coins is
$$
f(p_1,p_2|H_1,n_1,H_2,n_2) = f(p_1|H_1,n_1)f(p_2|H_2,n_2).
$$
Now we need to integrate this over the cases in which $p_1>p_2$ in order to find out how probable it is that coin $1$ is better than coin $2$:
$$\begin{align}
\mathbb P(p_1>p_2)
&= \int_0^1 \int_0^{p‘_1} f(p‘_1,p‘_2|H_1,n_1,H_2,n_2)\mathrm dp‘_2 \mathrm dp‘_1\\
&=\int_0^1 \frac{B(p‘_1;H_2+1,n_2-H_2+1)}{B(H_2+1,n_2-H_2+1)}
f(p‘_1|H_1,n_1)\mathrm dp‘_1
\end{align}$$
I cannot solve this last integral analytically, but one can solve it numerically with a computer after plugging in the numbers. $B(\cdot,\cdot)$ is the beta function and $B(\cdot;\cdot,\cdot)$ is the incomplete beta function. Note that $\mathbb P(p_1=p_2) = 0$ because $p_1$ is a continuous variable and never exactly the same as $p_2$.
Concerning the prior assumption on $f(p_i)$ and remarks on it: a good alternative to model many beliefs is to use a beta distribution $Beta(a_i+1,b_i+1)$. This would lead to a final probability
$$
\mathbb P(p_1>p_2)
=\int_0^1 \frac{B(p‘_1;H_2+1+a_2,n_2-H_2+1+b_2)}{B(H_2+1+a_2,n_2-H_2+1+b_2)}
f(p‘_1|H_1+a_1,n_1+a_1+b_1)\mathrm dp‘_1.
$$
That way one could model a strong bias towards regular coins by large but equal $a_i$, $b_i$. It would be equivalent to tossing the coin $a_i+b_i$ additional times and receiving $a_i$ heads hence equivalent to just having more data. $a_i + b_i$ is the amount of tosses we would not have to make if we include this prior.
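As a sketch of this pseudo-observation view (my addition; it approximates the probability by Monte Carlo sampling rather than quadrature): with a flat prior, the data $H_1=3$ of $n_1=4$ versus $H_2=1$ of $n_2=4$ already give a fairly confident answer, while a strong prior towards regular coins pulls $\mathbb P(p_1>p_2)$ back towards $1/2$.

```r
# Monte Carlo version of P(p1 > p2) with a Beta(a+1, b+1) prior per coin;
# large, equal a and b act like extra tosses of a fair coin.
set.seed(1)
post_prob <- function(n1, H1, n2, H2, a = 0, b = 0, m = 1e5) {
  mean(rbeta(m, H1 + 1 + a, n1 - H1 + 1 + b) >
       rbeta(m, H2 + 1 + a, n2 - H2 + 1 + b))
}
post_prob(4, 3, 4, 1)                  # flat prior: around 0.9
post_prob(4, 3, 4, 1, a = 50, b = 50)  # strong prior for fair coins: pulled toward 0.5
```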
The OP stated that the two coins are both biased to an unknown degree. So I understood that all knowledge has to be inferred from the observations. This is why I opted for an uninformative prior that does not bias the result, e.g. towards regular coins.
All information can be conveyed in the form of $(H_i, n_i)$ per coin. The lack of an informative prior only means more observations are needed to decide which coin is better with high probability.
Here is the code in R that provides a function P(n1, H1, n2, H2) $=\mathbb P(p_1>p_2)$ using the uniform prior $f(p_i)=1$:
mp <- function(p1, n1, H1, n2, H2) {
  f1 <- pbeta(p1, H2 + 1, n2 - H2 + 1)  # P(p2 < p1): the incomplete beta term
  f2 <- dbeta(p1, H1 + 1, n1 - H1 + 1)  # posterior density of p1
  return(f1 * f2)
}
P <- function(n1, H1, n2, H2) {
  # numerically integrate the product over p1 in [0, 1]
  return(integrate(mp, 0, 1, n1, H1, n2, H2))
}
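A Monte Carlo sanity check of the same quantity (my addition): sampling $p_1$ and $p_2$ directly from their Beta posteriors should reproduce the numerical integral up to simulation noise.

```r
# Cross-check of the numerical integral by direct posterior sampling,
# here for n1 = n2 = 4, H1 = 3, H2 = 1.
set.seed(42)
p1 <- rbeta(1e5, 3 + 1, 4 - 3 + 1)  # posterior of coin 1: Beta(4, 2)
p2 <- rbeta(1e5, 1 + 1, 4 - 1 + 1)  # posterior of coin 2: Beta(2, 4)
mean(p1 > p2)  # should agree with P(4, 3, 4, 1)$value up to Monte Carlo error
```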
You can draw $P(p_1>p_2)$ for different experimental results and fixed $n_1$, $n_2$, e.g. $n_1=n_2=4$, with this code snippet:
library(lattice)
n1 <- 4
n2 <- 4
heads <- expand.grid(H1 = 0:n1, H2 = 0:n2)
heads$P <- apply(heads, 1, function(H) P(n1, H[1], n2, H[2])$value)
levelplot(P ~ H1 + H2, heads, main = "P(p1 > p2)")
You may need to install.packages("lattice") first.
One can see that, even with the uniform prior and a small sample size, the probability (or belief) that one coin is better can become quite solid when $H_1$ and $H_2$ differ enough. An even smaller relative difference suffices if $n_1$ and $n_2$ are larger. Here is a plot for $n_1=100$ and $n_2=200$:
Martijn Weterings suggested to calculate the posterior probability distribution for the difference between $p_1$ and $p_2$. This can be done by integrating the pdf of the pair over the set $S(d)=\{(p_1,p_2)\in[0,1]^2|d=|p_1-p_2|\}$:
$$\begin{align}
f(d|H_1,n_1,H_2,n_2)
&= \int_{S(d)}f(p_1,p_2|H_1,n_1,H_2,n_2) \mathrm d\gamma\\
&= \int_0^{1-d} f(p,p+d|H_1,n_1,H_2,n_2) \mathrm dp + \int_d^1 f(p,p-d|H_1,n_1,H_2,n_2) \mathrm dp\\
\end{align}$$
Again, not an integral I can solve analytically but the R code would be:
d1 <- function(p, d, n1, H1, n2, H2) {
  f1 <- dbeta(p, H1 + 1, n1 - H1 + 1)
  f2 <- dbeta(p + d, H2 + 1, n2 - H2 + 1)
  return(f1 * f2)
}
d2 <- function(p, d, n1, H1, n2, H2) {
  f1 <- dbeta(p, H1 + 1, n1 - H1 + 1)
  f2 <- dbeta(p - d, H2 + 1, n2 - H2 + 1)
  return(f1 * f2)
}
fd <- function(d, n1, H1, n2, H2) {
  if (d == 1) return(0)
  s1 <- integrate(d1, 0, 1 - d, d, n1, H1, n2, H2)
  s2 <- integrate(d2, d, 1, d, n1, H1, n2, H2)
  return(s1$value + s2$value)
}
I plotted $f(d|n_1,H_1,n_2,H_2)$ for $n_1=4$, $H_1=3$, $n_2=4$ and all values of $H_2$:
n1 <- 4
n2 <- 4
H1 <- 3
d <- seq(0, 1, length = 500)
get_f <- function(H2) sapply(d, fd, n1, H1, n2, H2)
dat <- sapply(0:n2, get_f)
matplot(d, dat, type = "l", ylab = "Density",
main = "f(d | 4, 3, 4, H2)")
legend("topright", legend = paste("H2 =", 0:n2),
col = 1:(n2 + 1), pch = "-")
You can calculate the probability of $|p_1-p_2|$ being above a value $d$ with integrate(fd, d, 1, n1, H1, n2, H2). Mind that the double application of numerical integration comes with some numerical error. E.g. integrate(fd, 0, 1, n1, H1, n2, H2) should always equal $1$, since $d$ always takes a value between $0$ and $1$, but the result often deviates slightly.
31,628 | With what probability one coin is better than the other? | I've made a numerical simulation with R; you're probably looking for an analytical answer, but I thought this could be interesting to share.
set.seed(123)
# coin 1
N1 = 20
theta1 = 0.7
toss1 <- rbinom(n = N1, size = 1, prob = theta1)
# coin 2
N2 = 25
theta2 = 0.5
toss2 <- rbinom(n = N2, size = 1, prob = theta2)
# frequency
sum(toss1)/N1 # [1] 0.65
sum(toss2)/N2 # [1] 0.52
In this first code, I simply simulate two sets of coin tosses. Here you can see that if theta1 > theta2, then of course the observed frequency of heads for coin 1 will tend to be higher than for coin 2. Note the different sizes N1, N2.
Let's see what we can do with different thetas. Note the code is not optimal. At all.
simulation <- function(N1, N2, theta1, theta2, nsim = 100) {
  count1 <- count2 <- 0
  for (i in 1:nsim) {
    toss1 <- rbinom(n = N1, size = 1, prob = theta1)
    toss2 <- rbinom(n = N2, size = 1, prob = theta2)
    if (sum(toss1)/N1 > sum(toss2)/N2) {count1 = count1 + 1}
    #if (sum(toss1)/N1 < sum(toss2)/N2) {count2 = count2 + 1}
  }
  count1/nsim
}
set.seed(123)
simulation(20, 25, 0.7, 0.5, 100)
#[1] 0.93
So 0.93 is the frequency of times (out of 100) that the first coin had more heads. This seems ok, looking at the theta1 and theta2 used.
Let's see with two vectors of thetas.
theta1_v <- seq(from = 0.1, to = 0.9, by = 0.1)
theta2_v <- seq(from = 0.9, to = 0.1, by = -0.1)
res_v <- c()
for (i in 1:length(theta1_v)) {
res <- simulation(1000, 1500, theta1_v[i], theta2_v[i], 100)
res_v[i] <- res
}
plot(theta1_v, res_v, type = "l")
Remember that res_v are the frequencies where H1 > H2, out of 100 simulations.
So as theta1 increases, then the probability of H1 being higher increases, of course.
I've done some other simulations and it seems that the sizes N1,N2 are less important.
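To probe that last remark (my addition, a sketch): with well-separated thetas the sizes hardly matter, but when theta1 and theta2 are close, larger N1, N2 are needed before one coin reliably shows the higher observed frequency.

```r
# Frequency with which coin 1 shows the higher observed proportion,
# as a function of N, for a large and a small true difference.
set.seed(123)
prob_higher <- function(N, theta1, theta2, nsim = 2000) {
  mean(replicate(nsim, mean(rbinom(N, 1, theta1)) > mean(rbinom(N, 1, theta2))))
}
sapply(c(10, 100, 1000), prob_higher, theta1 = 0.7, theta2 = 0.5)   # well separated
sapply(c(10, 100, 1000), prob_higher, theta1 = 0.55, theta2 = 0.5)  # close together
```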
If you're familiar with R you can use this code to shed some light on the problem. I'm aware this is not a complete analysis, and it can be improved.
31,629 | How do the t-distribution and standard normal distribution differ, and why is t-distribution used more? | The normal distribution (which is almost certainly returning in later chapters of your course) is much easier to motivate than the t distribution for students new to the material. The reason why you are learning about the t distribution is more or less for your first reason: the t distribution takes a single parameter—sample size minus one—and more correctly accounts for uncertainty due to (small) sample size than the normal distribution when making inferences about a sample mean of normally-distributed data, assuming that the true variance is unknown.
With increasing sample size, the t and standard normal distributions are approximately equally robust with respect to deviations from normality (as sample size increases, the t distribution converges to the standard normal distribution). Nonparametric tests (which I start teaching about half way through my intro stats course) are generally much more robust to non-normality than either the t or normal distribution.
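To make the convergence concrete (my addition), compare the 97.5% critical values:

```r
# The t critical value approaches the standard normal one (about 1.96)
# as the degrees of freedom increase.
sapply(c(2, 5, 30, 100, 1000), function(df) qt(0.975, df))
qnorm(0.975)
```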
Finally, you are likely going to learn tests and confidence intervals for many different distributions by the end of your course (F, $\chi^{2}$, rank distributions—at least in their table p-values, for example).
31,630 | How do the t-distribution and standard normal distribution differ, and why is t-distribution used more? | The reason t-distribution is used in inference instead of normal is due to the fact that the theoretical distribution of some estimators is normal (Gaussian) only when the standard deviation is known, and when it is unknown the theoretical distribution is Student t.
We rarely know the standard deviation. Usually, we estimate from the sample, so for many estimators it is theoretically more solid to use Student t distribution and not normal.
Some estimators are consistent, i.e., in layman's terms, they get better as the sample size increases. Student t becomes normal when the sample size is large.
Example: sample mean
Consider a mean $\mu$ of the sample $x_1,x_2,\dots,x_n$. We can estimate it using a usual average estimator: $\bar x=\frac 1 n\sum_{i=1}^nx_i$, which you may call a sample mean.
If we want to make inference statements about the mean, such as whether the true mean $\mu<0$, we can use the sample mean $\bar x$, but we need to know its distribution. It turns out that if we knew the standard deviation $\sigma$ of $x_i$, then the sample mean would be distributed around the true mean according to a Gaussian: $\bar x\sim\mathcal N(\mu,\sigma^2/n)$, for large enough $n$.
The problem is that we rarely know $\sigma$, but we can estimate its value from the sample as $\hat\sigma$ using one of the usual estimators. In this case the distribution of the sample mean is no longer Gaussian, but closer to a Student t distribution.
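A small simulation (my own sketch; the sample size, replication count, and seed are arbitrary choices) shows the practical consequence: for small $n$, intervals built with the normal quantile undercover, while t-based intervals achieve roughly the nominal 95%:

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps, mu = 5, 20_000, 0.0
z_crit, t_crit = 1.960, 2.776  # 97.5% points of N(0,1) and t with n-1 = 4 df

x = rng.normal(mu, 1.0, size=(reps, n))
xbar = x.mean(axis=1)
se = x.std(axis=1, ddof=1) / np.sqrt(n)  # standard error with estimated sigma

cover_z = np.mean(np.abs(xbar - mu) <= z_crit * se)  # undercovers
cover_t = np.mean(np.abs(xbar - mu) <= t_crit * se)  # close to 0.95
print(cover_z, cover_t)
```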
31,631 | Uninformative prior density on normal | Let $\phi = \log (\sigma) = \tfrac{1}{2} \log (\sigma^2)$ so that you have the inverse transformation $\sigma^2 = \exp (2\phi)$. Now we apply the standard rule for transformations of random variables to get:
$$p(\sigma^2) = p(\phi) \cdot \Bigg| \frac{d \phi}{d\sigma^2} \Bigg| \propto 1 \cdot \frac{1}{2\sigma^2} \propto (\sigma^2)^{-1}.$$
Since the parameters are independent in this prior, we then have:
$$p(\mu, \sigma^2) = p(\mu) p(\sigma^2) \propto (\sigma^2)^{-1}.$$
This gives the stated form for the improper prior density. As to the justification for why this prior is sensible, there are several avenues of appeal. The simplest justification is that we would like to take $\mu$ and $\log \sigma$ to be uniform to represent "ignorance" about these parameters. Taking the logarithm of the variance is a transformation that ensures that our beliefs about that parameter are scale invariant. (Our beliefs about the mean parameter are also location and scale invariant.) In other words, we would like our representation of ignorance for the two parameters to be invariant to arbitrary changes in the measurement scale of the variables.
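The change-of-variables step above can be sanity-checked numerically (my sketch; the finite window $[0,1]$ for $\phi$ stands in for the improper uniform prior): draw $\phi$ uniformly, set $\sigma^2 = e^{2\phi}$, and the resulting CDF matches the one implied by $p(\sigma^2) \propto (\sigma^2)^{-1}$, namely $\log(s)/2$ on $[1, e^2]$:

```python
import numpy as np

rng = np.random.default_rng(42)
phi = rng.uniform(0.0, 1.0, 1_000_000)  # flat prior on phi = log(sigma), windowed
sigma2 = np.exp(2.0 * phi)              # inverse transformation

# Implied CDF under p(sigma^2) proportional to 1/sigma^2 on [1, e^2]:
# P(sigma^2 <= s) = log(s) / 2
for s in (1.5, 3.0, 6.0):
    print(s, (sigma2 <= s).mean(), np.log(s) / 2)
```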
For the derivation above, we have used an improper uniform prior on the log-variance parameter. It is possible to get the same result in a limiting sense, by using a proper prior for the log-scale that tends towards uniformity, and finding the proper prior for the variance that corresponds to this, and then taking the limit to obtain the present improper variance prior. This is really just a reflection of the fact that improper priors can generally be interpreted as limits of proper priors.
There are many other possible justifications for this improper prior, and these appeal to the theory of representing prior "ignorance". There is a large literature on this subject, but a shorter discussion can be found in Irony and Singpurwalla (1997) (discussion with José Bernardo) which talks about the various methods by which we try to represent "ignorance". The improper prior you are dealing with here is the limiting version of the conjugate prior for the normal model, with the prior variance for each parameter taken to infinity.
31,632 | Explaining dimensionality reduction using SVD (without reference to PCA) | The SVD can be linked to dimensionality reduction from the standpoint of low rank matrix approximation.
SVD for low rank matrix approximation
Suppose we have a matrix $X$ and want to approximate it with a rank $r$ matrix $\hat{X}$, where $r < \text{rank}(X)$. The approximation error is typically measured by the Frobenius norm (the square root of the sum of squared entries, so minimizing it is equivalent to minimizing the squared error). The problem is then:
$$\min_{\hat{X}} \ \|X - \hat{X}\|_F \quad \text{s.t.} \ \text{rank}(\hat{X}) = r$$
The Eckart-Young theorem (Eckart and Young 1936) states that the solution is given by the truncated SVD of $X$:
$$\hat{X} = \tilde{U} \tilde{S} \tilde{V}^T$$
Where $X = U S V^T$ is the SVD of $X$, and $\tilde{U}, \tilde{S}, \tilde{V}$ are truncated versions of $U, S, V$--the bottom singular values and corresponding singular vectors have been discarded and only the top $r$ are retained.
Connection to dimensionality reduction
The rank of a data matrix indicates the number of dimensions spanned by the data points, i.e. the dimensionality of the linear (sub)space in which the points lie. If $X$ is a data matrix and $\hat{X}$ is a low rank approximation computed as above, this means that $\hat{X}$ is an approximation of $X$ where the points have been squashed into a lower dimensional subspace--specifically, an $r$-dimensional subspace. Hence, the truncated SVD performs dimensionality reduction.
Suppose rows correspond to data points and columns to dimensions. Then $\tilde{U} \tilde{S}$ (which has $r$ columns) gives low dimensional representations of the data. Multiplying by $\tilde{V}^T$ projects these low dimensional points back into the high dimensional space to approximate the original data: $X \approx \tilde{U} \tilde{S} \tilde{V}^T$
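The theorem and the reconstruction view can be checked numerically (a sketch; the matrix and rank are arbitrary): the truncated SVD's Frobenius error equals the norm of the discarded singular values and is no larger than that of another rank-$r$ approximation:

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(30, 12))
r = 3

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = U[:, :r] * s[:r] @ Vt[:r]          # truncated SVD, rank r

err_svd = np.linalg.norm(X - X_hat)        # Frobenius norm by default
# Eckart-Young: the error is exactly the norm of the discarded singular values
print(err_svd, np.sqrt(np.sum(s[r:] ** 2)))

# Compare with an arbitrary rank-r approximation (projection onto a random subspace)
B = rng.normal(size=(12, r))
P = B @ np.linalg.pinv(B)                  # projector onto a random r-dim subspace
err_rand = np.linalg.norm(X - X @ P)
print(err_rand)                            # never smaller than err_svd
```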
Connection to PCA
The relationship between SVD and PCA is often explained via eigendecomposition of the covariance matrix (e.g. as described here). But, there's also an alternative explanation based on the approximation error.
As above, the truncated SVD gives low dimensional representations that approximate the original data when projected back into the high dimensional space. This minimizes the approximation error as measured by the Frobenius norm, which is equivalent to minimizing the squared error. That is, it minimizes the squared distance between each data point and its reconstruction from the low dimensional representation.
This corresponds exactly to a description of PCA. PCA is commonly described as finding the directions of maximum variance, but it can be described equivalently as minimizing the squared approximation error, as above (e.g. see Tipping and Bishop 1999). Thus, SVD of the data matrix is equivalent to PCA. Note that this holds when the data matrix is centered. Otherwise, SVD corresponds to non-centered PCA (which can be expressed in terms of an eigendecomposition of $X^T X$ rather than the covariance matrix).
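This equivalence is easy to verify (my sketch; the data are synthetic): the right singular vectors of the centered data matrix agree with the covariance eigenvectors up to sign, and the covariance eigenvalues equal $s_i^2/(n-1)$:

```python
import numpy as np

rng = np.random.default_rng(7)
X = rng.normal(size=(200, 4)) @ rng.normal(size=(4, 4))  # correlated columns
Xc = X - X.mean(axis=0)                                  # centering matters here
n = Xc.shape[0]

U, s, Vt = np.linalg.svd(Xc, full_matrices=False)

C = Xc.T @ Xc / (n - 1)            # sample covariance matrix
evals, evecs = np.linalg.eigh(C)   # eigh returns ascending order
evals, evecs = evals[::-1], evecs[:, ::-1]

print(np.allclose(evals, s**2 / (n - 1)))                         # eigenvalues match
print(np.allclose(np.abs(evecs.T @ Vt.T), np.eye(4), atol=1e-6))  # directions match up to sign
```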
Other norms
Above, I focused on the Frobenius norm/squared error because that's the most common way of measuring reconstruction error. But, the truncated SVD also minimizes the reconstruction error as measured by other matrix norms. In particular, the Eckart-Young theorem can be generalized to all unitarily invariant norms (i.e. norms that are invariant to unitary transformations). See Mirsky (1960) and Li and Strang (2020). For example, unitarily invariant norms include the Ky Fan k-norms and Schatten p-norms, which include the common Frobenius, spectral, and nuclear norms as special cases.
References
Eckart and Young (1936). The approximation of one matrix by another of lower rank.
Li and Strang (2020). An elementary proof of Mirsky’s low rank approximation theorem.
Mirsky (1960). Symmetric gauge functions and unitarily invariant norms.
Tipping and Bishop (1999). Probabilistic principal component analysis.
31,633 | Explaining dimensionality reduction using SVD (without reference to PCA) | SVD is a generalization of PCA in the following sense: if one applies the singular value decomposition to a covariance matrix, one gets the PCA decomposition of that matrix. Viewing the matrix as a linear transformation, the matrix takes an orthonormal vector to a linear subspace spanned by one of the orthonormal vectors in the target space. What both results state is that if we consider a matrix as a linear transformation from one vector space to another, then there are orthonormal (perpendicular and of unit length) bases in both spaces such that the matrix takes a vector in one basis to the one-dimensional space spanned by a vector in the other basis. In the case of a symmetric matrix one can take both vector spaces to be the same, and there is only one basis. In either case, once you have an orthonormal basis, all the linear transformation can do is take an orthonormal vector to a multiple of a member of the orthonormal basis in the other space. That scaling is a diagonal matrix in the case of PCA, and in the case of an SVD it is a pseudo-diagonal matrix. In words: with the right bases, any linear transformation is just scaling the basis vectors.
Once one has the last statement, the dimension reduction statement is, it seems to me, clear. If one has a diagonal matrix, what is the rank-one matrix that is closest to it? It is the diagonal matrix with all zeros except that it agrees with the original diagonal on the element of largest absolute value.
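That last claim can be checked numerically (a sketch with an arbitrary diagonal matrix):

```python
import numpy as np

A = np.diag([3.0, -5.0, 1.0])
U, s, Vt = np.linalg.svd(A)
A1 = s[0] * np.outer(U[:, 0], Vt[0])  # best rank-one approximation

print(A1)  # only the entry of largest absolute value (-5) survives
```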
31,634 | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0? | 1. A practical suggestion.
Change this part of the code
if mad == 0:
mad = 9223372036854775807 # maxint
to
if mad == 0:
mad = 2.2250738585072014e-308 # sys.float_info.min
It does the trick. Division by this number blows up the Iglewicz-Hoaglin test statistic – exactly as desired. That is, marking strongly deviant observations as outliers.
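Wrapped into a complete function, the fallback looks like this (a sketch; the helper name is mine, and 3.5 is Iglewicz and Hoaglin's usual cutoff):

```python
import sys
from statistics import median

def modified_z_scores(x):
    """Iglewicz-Hoaglin modified z-scores with the MAD == 0 fallback."""
    med = median(x)
    mad = median(abs(v - med) for v in x)
    if mad == 0:
        mad = sys.float_info.min  # tiny denominator blows up deviant scores
    return [0.6745 * (v - med) / mad for v in x]

scores = modified_z_scores([1, 1, 1, 1, 10])
print(scores)  # the four ties score 0.0; the 10 gets a huge score > 3.5
```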
2. Previous practical suggestion.
What you could do is check whether it works with the closely related mean absolute error (MAE):
$$
\text{MAE} = \frac{1}{n} \sum_{i=1}^n |x_i - \text{median}(x)|,
$$
where $e_i = x_i - \text{median}(x)$ are the errors (better: residuals, or deviations).
IBM uses this variant:
$$
M_{i} = \frac{x_{i} - \text{median}(x)} { 1.253314 \cdot \text{MAE} }
$$
for the if MAD == 0 case.
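That variant can be sketched the same way (the helper name is mine; the constant $1.253314 \approx \sqrt{\pi/2}$ makes the denominator a consistent scale estimate for normal data):

```python
from statistics import median

def modified_z_scores_mae(x):
    """MAD == 0 variant based on the mean absolute deviation from the median."""
    med = median(x)
    mae = sum(abs(v - med) for v in x) / len(x)
    return [(v - med) / (1.253314 * mae) for v in x]

print(modified_z_scores_mae([1, 1, 1, 1, 10]))
# The 10 scores about 3.99 > 3.5, so it is still flagged even though MAD == 0
```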
3. What is going on here? (From a programming perspective)
Consider the two cases:
$0/0$,
$x/0$ for $x \neq 0$.
Scientific programming languages R, Matlab and Julia have the following behavior:
0/0 returns NaN.
90/0 returns Inf.
Python, on the other hand, throws a ZeroDivisionError in both cases.
Practical suggestion one circumvents both cases for both flavors of zero-division handling.
Change this part of the code
if mad == 0:
mad = 9223372036854775807 # maxint
to
if mad == 0:
mad = 2.2250738585072014e-308 # sys.float_info.min
I | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0?
1. A practical suggestion.
Change this part of the code
if mad == 0:
mad = 9223372036854775807 # maxint
to
if mad == 0:
mad = 2.2250738585072014e-308 # sys.float_info.min
It does the trick. Division by this number blows up the Iglewicz-Hoaglin test statistic – exactly as desired. That is, marking strongly deviant observations as outliers.
2. Previous practical suggestion.
What you could do, is check if it works with the closely related definition of mean absolute error (MAE):
$$
\text{MAE} = \frac{1}{n} \sum_{i=1}^n |x_i - \text{median}(x)|,
$$
with $e_i = x_i - \text{median}(x)$ the errors (better: residuals, or, deviations).
IBM uses this variant:
$$
M_{i} = \frac{x_{i} - \text{median}(x)} { 1.253314 \cdot \text{MAE} }
$$
for the if MAD == 0 case.
3. What is going on here? (From a programming perspective)
Consider the two cases:
$0/0$,
$x/0$ for $x \neq 0$.
Scientific programming languages R, Matlab and Julia have the following behavior:
0/0 returns NaN.
90/0 returns Inf.
Python, on the other hand, throws a ZeroDivisionError in both cases.
Practical suggestion one circumvents both cases for both flavors of zero-division handling. | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0?
1. A practical suggestion.
Change this part of the code
if mad == 0:
mad = 9223372036854775807 # maxint
to
if mad == 0:
mad = 2.2250738585072014e-308 # sys.float_info.min
I |
31,635 | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0? | Three facts will help you here.
What you discovered is called the exact fit property. If a proportion $\alpha > 0.5$ of the observations in your sample have the same value, the mad of your sample will be 0.
This is not a property of the mad in particular, but of all robust estimators of scale. More precisely: any robust estimator of scale with a breakdown point of $0< \alpha < 0.5$ will have an exact fit property at the level of $1-\alpha$ (see section 3 of Croux et al., 2006, [0], for example).
Your first proposals amount to replacing the value of $M_i$ by arbitrary numbers in case of exact fit (setting $M_i=0$ in the former and $M_i=O(1/\sigma)$ --where $\sigma$ is the amount by which you perturb the data-- in the latter).
Your proposed solution to the problem (point 3) is not the correct one.
In fact, the correct solution to your problem is much simpler. Keep the MAD, keep the outliers rejection rule. All you need to do is to adopt the convention $0/0:=0$ in the computation of the outliers detection rule. This convention has no impact outside of exact fit cases. Then you can use the rule regardless of whether the MAD is strictly positive or not.
This is because:
In an exact fit situation whereby half or more of the data is tied at an arbitrary value $x$, all observations in your sample that are different from $x$ are severe outliers.
In such a situation, all observations in your sample that are different from $x$ are, after all, infinitely divergent from the pattern of the bulk of the data. Then, adopting the convention $0/0:=0$ will assign the correct outlyingness score both to those observations equal to $x$ ($M_i=0$) and to those different from $x$ ($M_i=\infty$).
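In code, the convention amounts to a single np.where (my sketch, not from the cited paper); note that np.where evaluates both branches, hence the errstate guard:

```python
import numpy as np

def modified_z_scores(x):
    """Modified z-scores with the exact-fit convention 0/0 := 0."""
    x = np.asarray(x, dtype=float)
    dev = x - np.median(x)
    mad = np.median(np.abs(dev))
    # np.where evaluates both branches, so silence the 0/0 and x/0 warnings
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(dev == 0, 0.0, 0.6745 * dev / mad)

print(modified_z_scores([5, 5, 5, 5, 9]))  # exact fit: ties -> 0, the deviant 9 -> inf
```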
The reason you can use this convention is because the exact fit property is bijective:
MAD = 0 $\iff$ more than half of your sample is tied to the same value.
Croux, C., Filzmoser, P., and Oliveira, M. R. (2006). Algorithms for projection-pursuit robust principal component analysis.
What you discovered is called the exact fit property. If a proportion $\alpha > 0.5$ of the observations in your sample have the same value, the mad of your sample wi | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0?
Three facts will help you here.
What you discovered is called the exact fit property. If a proportion $\alpha > 0.5$ of the observations in your sample have the same value, the mad of your sample will be 0.
This is not a property of the mad in particular, but of all robust estimators of scale. More precisely: any robust estimator of scale with a breakdown point of $0< \alpha < 0.5$ will have an exact fit property at the level of $1-\alpha$ (see section 3 of Croux et al., 2006, [0], for example).
Your first proposals amount to replacing the value of $M_i$ by arbitrary numbers in case of exact fit (setting $M_i=0$ in the former and $M_i=O(1/\sigma)$ --where $\sigma$ is the amount by which you perturb the data-- in the latter).
Your proposed solution to the problem (point 3) is not the correct one.
In fact, the correct solution to your problem is much simpler. Keep the MAD, keep the outliers rejection rule. All you need to do is to adopt the convention $0/0:=0$ in the computation of the outliers detection rule. This convention has no impact outside of exact fit cases. Then you can use the rule regardless of whether the MAD is strictly positive or not.
This is because:
In an exact fit situation whereby half or more of the data is tied at an arbitrary value $x$, all observations in your sample that are different from $x$ are severe outliers.
In such a situation, all observations in your sample that are different from $x$ are, after all, infinitely divergent from the pattern of the bulk of the data. Then, adopting the $0/0:=0$ will assign the correct outlyingness score both to those observation equal to $x$ ($M_i=0$) and those different from $x$ ($M_i=\infty$).
The reason you can use this convention is because the exact fit property is bijective:
Mad = 0 $\iff$ more than half of your sample are tied to the same
value.
Algorithms for projection-pursuit robust principal component analysis. (2006). Croux, C. Filzmoser, P. and Oliveira, M. R. | Iglewicz and Hoaglin outlier test with modified z-scores - What should I do if the MAD becomes 0?
Three facts will help you here.
What you discovered is called the exact fit property. If a proportion $\alpha > 0.5$ of the observations in your sample have the same value, the mad of your sample wi |
31,636 | What does the term "gold label" refer to in the context of semi-supervised classification? | From https://hazyresearch.github.io/snorkel/blog/snark.html:
We call this type of training data weak supervision because it’s noisier and less accurate than the expensive, manually-curated “gold” labels that machine learning models are usually trained on. However, Snorkel automatically de-noises this noisy training data, so that we can then use it to train state-of-the-art models.
As I understand it, the goal of Snorkel is to generate a large set of synthetic training data for large-scale ML algorithms by learning from a much smaller set of hand-labeled training data. The hand-labeled training data have been handled by subject-matter experts and thus we are much more certain of the correctness of the label (but obtaining a large set of such data may be prohibitively expensive, hence the impetus for Snorkel in the first place). So it appears they are calling these hand-labeled data "gold" labels, as they represent some reliable ground-truth value. This can be contrasted with the labels output by the algorithm, which are hopefully of high quality but are still subject to noise by construction.
31,637 | What is variable importance? | I've only encountered the term in machine learning contexts (that is, contexts where one is interested in accurate predictions and not necessarily theoretical inferences), but the concept can be applied to any statistical model.
(My) definition: Variable importance refers to how much a given model "uses" that variable to make accurate predictions. The more a model relies on a variable to make predictions, the more important it is for the model.
It can apply to many different models, each using different metrics.
Imagine two variables on the same scale in a standard ordinary least squares regression. One has a regression coefficient of 1.6, the other has one of .003. The former is a more important variable than the latter, because the model relies on the former more (remember that the variables are on the same scale and their coefficients are directly comparable). Another way to do this would be to look at the change in $R^2$ when adding each variable; the one with a higher $\Delta R^2$ is more important.
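That $\Delta R^2$ comparison is easy to demonstrate numerically. Below is a minimal sketch on hypothetical simulated data (numpy only): two predictors on the same scale with coefficients 1.6 and .003, and the drop in $R^2$ when each is removed from the model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x1 = rng.normal(size=n)              # both predictors on the same scale
x2 = rng.normal(size=n)
y = 1.6 * x1 + 0.003 * x2 + rng.normal(size=n)

def r_squared(X, y):
    """R^2 of an OLS fit (intercept column added internally)."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

r2_full = r_squared(np.column_stack([x1, x2]), y)
delta_x1 = r2_full - r_squared(x2.reshape(-1, 1), y)   # loss from dropping x1
delta_x2 = r2_full - r_squared(x1.reshape(-1, 1), y)   # loss from dropping x2
print(delta_x1, delta_x2)  # x1's delta-R^2 dwarfs x2's
```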
Similarly, one could compare two variables used in a random forest. If the trees in the forest split the sample more on variable A than variable B, then variable A is more important to the model. There are a bunch of metrics for quantifying this; for example, here is what the documentation for importance() in the popular randomForest R package says:
Here are the definitions of the variable importance measures. The first measure is computed from permuting OOB data: For each tree, the prediction error on the out-of-bag portion of the data is recorded (error rate for classification, MSE for regression). Then the same is done after permuting each predictor variable. The difference between the two are then averaged over all trees, and normalized by the standard deviation of the differences. If the standard deviation of the differences is equal to 0 for a variable, the division is not done (but the average is almost always equal to 0 in that case).
The second measure is the total decrease in node impurities from splitting on the variable, averaged over all trees. For classification, the node impurity is measured by the Gini index. For regression, it is measured by residual sum of squares.
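The permutation idea in the quote can be sketched outside of R with any fitted model. Here it is applied to a plain OLS fit on hypothetical data (the randomForest package additionally averages over trees and normalizes by the standard deviation of the differences):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
X = rng.normal(size=(n, 2))
y = 2.0 * X[:, 0] + 0.1 * X[:, 1] + rng.normal(size=n)

# "Model": a single OLS fit on a training half, scored on a held-out half
train, test = slice(0, n // 2), slice(n // 2, n)
Xtr = np.column_stack([np.ones(n // 2), X[train]])
beta, *_ = np.linalg.lstsq(Xtr, y[train], rcond=None)

def mse(Xs, ys):
    return float(np.mean((ys - np.column_stack([np.ones(len(ys)), Xs]) @ beta) ** 2))

base = mse(X[test], y[test])
importances = []
for j in range(2):                       # permute one predictor at a time
    Xp = X[test].copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y[test]) - base)  # increase in held-out error
print(importances)  # the first predictor matters far more than the second
```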
Variable importance is often used for variable selection: What variables could we drop from the model (not contributing much information), and what variables should we make sure to always measure and use in the model?
The wonderful Introduction to Statistical Learning book by James, Witten, Hastie, & Tibshirani specifically discusses variable importance a few times throughout the book (e.g., p. 319, 330). URL: http://www-bcf.usc.edu/~gareth/ISL/ISLR%20Seventh%20Printing.pdf
31,638 | What's the intuition for ABA' in linear algebra? | This pattern often occurs when there is, explicitly or implicitly, an orthonormal change of basis going on.
If we have a matrix $B$ and would like to re-express the transformation of multiplication by $B$ in another basis, standard linear algebra tells us that the matrix expression of the same transformation in the new basis is a similar matrix:
$$ B_{\text{new}} = A B A^{-1} $$
Here the matrix $A$ is called a change of basis matrix.
When we want to change between two orthonormal bases, a simple calculation shows that the change of basis matrix $A$ is an orthogonal matrix. Orthogonal matrices satisfy the identity:
$$ A^{-1} = A' $$
So this is how you end up with expressions like $A B A'$.
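A quick numerical check of the two identities above (the rotation angle is an arbitrary choice for the example):

```python
import numpy as np

theta = 0.7  # arbitrary rotation angle
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # orthonormal change of basis
B = np.array([[2.0, 1.0],
              [0.0, 3.0]])                         # some linear transformation

# For an orthogonal matrix, the inverse is just the transpose
assert np.allclose(np.linalg.inv(A), A.T)

# ... so the similarity transform A B A^{-1} is exactly A B A'
B_new = A @ B @ A.T
assert np.allclose(B_new, A @ B @ np.linalg.inv(A))
print(B_new)
```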
31,639 | What's the intuition for ABA' in linear algebra? | In short:
This is the covariance matrix property $var(AX)=Avar(X)A^T$.
Similarly, for a scalar variance, $var(aX) = a^2var(X)$ when $a$ is constant; the matrix form plays an analogous role of "squaring" by $A$.
In other words, we can interpret $ABA^T$ as a linear transformation of the random variable inside the covariance matrix: if we know that $B=Var(X)$, then $ABA^T = Var(AX)$.
For a longer answer:
I have been learning about the Kalman filter recently, so I would like to explain from a covariance perspective.
Start from the Kalman filter state transition equation:
$$x_k = Fx_{k-1}+Bu_k+w_k$$
which defines the new state from the last state and the control input. We can predict the next state (step $k$) given the data from the last step (step $k-1$) by
$$\hat{x}_{k|k-1} = F\hat{x}_{k-1|k-1}+Bu_k$$
1.1 Then we find the error covariance matrix of $\hat{x}_{k|k-1}$, called $P_{k|k-1}$:
$$
P_{k|k-1}=Var(x_k-\hat{x}_{k|k-1})
$$
$$
P_{k|k-1}=Var((Fx_{k-1}+Bu_k+w_k)-(F\hat{x}_{k-1|k-1}+Bu_k))
$$
$$
P_{k|k-1}=Var(F(x_{k-1}-\hat{x}_{k-1|k-1})+w_k)
$$
1.2 Using the covariance matrix property $Var(A\pm B)=Var(A)+Var(B)$ when $A$ and $B$ are independent, we get
$$P_{k|k-1}=Var(F(x_{k-1}-\hat{x}_{k-1|k-1}))+Var(w_k)$$
and we know that $Var(w_k) = Q_k$
$$P_{k|k-1}=Var(F(x_{k-1}-\hat{x}_{k-1|k-1}))+Q_k$$
1.3 When $F$ is a constant matrix, the covariance matrix property $var(AX)=Avar(X)A^T$ gives
$$P_{k|k-1}=F Var(x_{k-1}-\hat{x}_{k-1|k-1})F^T+Q_k$$
1.4 Since $P_{k-1|k-1}=Var(x_{k-1}-\hat{x}_{k-1|k-1})$, we get
$$P_{k|k-1}=F P_{k-1|k-1}F^T+Q_k$$
You can see that the $F P_{k-1|k-1}F^T$ pattern simply updates the covariance matrix into the new state, corresponding to the state update.
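The property $var(AX)=Avar(X)A^T$ driving this derivation can be checked by Monte Carlo; the $F$ and $P$ values below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.1
F = np.array([[1.0, dt],
              [0.0, 1.0]])                  # state transition matrix
P = np.array([[2.0, 0.3],
              [0.3, 0.5]])                  # Var(x), the state covariance

# Draw many states with covariance P, push them through F,
# and compare the empirical covariance of F x with F P F^T.
L = np.linalg.cholesky(P)
X = L @ rng.normal(size=(2, 200_000))       # columns are draws of x
emp = np.cov(F @ X)
assert np.max(np.abs(emp - F @ P @ F.T)) < 0.05
print(np.round(emp, 3))
```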
Example of $F$ and $H$
These are example values from Wikipedia:
$$F=
\begin{bmatrix}
1 & \Delta t\\
0 & 1
\end{bmatrix}
$$
$$H=
\begin{bmatrix}
1 & 0\\
\end{bmatrix}
$$
31,640 | Q-learning when to stop training? | This depends very much on what your goal is. Here are some different cases I can think of:
Goal: Train until convergence, but no longer
From your question, I get the impression that this seems to be your goal. The easiest way is probably the "old-fashioned" way of plotting your episode returns during training (if it's an episodic task), inspecting the plot yourself, and interrupting the training process when it seems to have stabilized / converged. This assumes that you actually implemented something (like a very simple GUI with a stop button) so that you are able to decide manually when to interrupt the training loop.
To do this automatically (which is what I suppose you're looking for when you say "scientific way(s) to determine when to stop training"), I suppose you could do something simple like measuring average performance over the last 10 episodes, and also average performance over the last 50 episodes, and average performance over the last 100 episodes (for example). If those are all very similar, it may be safe to stop training. Or, maybe better, you could measure variance in performance over such a period of time, and stop if the variance drops below a certain threshold.
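The windowed-average idea above can be written as a simple stopping heuristic; the window sizes and tolerance below are arbitrary placeholders, not recommended values:

```python
import statistics

def converged(returns, window=100, tol=1e-2):
    """Heuristic stop rule: recent mean returns agree across three window
    sizes and their variance is small.  Thresholds are arbitrary."""
    if len(returns) < window:
        return False
    short = statistics.mean(returns[-window // 10:])
    mid = statistics.mean(returns[-window // 2:])
    long_ = statistics.mean(returns[-window:])
    stable = max(short, mid, long_) - min(short, mid, long_) < tol
    quiet = statistics.pvariance(returns[-window:]) < tol
    return stable and quiet

# A flat tail of episode returns triggers the rule; a rising one does not
flat = [0.0] * 50 + [1.0] * 100
rising = [i / 150 for i in range(150)]
print(converged(flat), converged(rising))  # True False
```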
Goal: Compare performance of an algorithm to another algorithm / performance described in publications
In this case, you'd simply want to make sure to use a similar amount of training time / number of training steps as was used for the baseline you're comparing to. What often happens in current Reinforcement Learning research is to measure the mean performance over the last X (e.g. X = 10 or X = 100) episodes at specific points in time (e.g. after 10M, 50M, 100M and 200M frames in Atari games, see: https://arxiv.org/abs/1709.06009). Even better in my opinion is to do exactly this at every single point in time during training, and plot a learning curve. In this case it really doesn't matter all too much when you stop training, as long as you do it consistently in the same way for all algorithms you're comparing. Note: your decision for when to stop training will influence which conclusions you can reasonably draw though. If you stop training very early, you can't conclude anything about long-term training performance.
Goal: Implement an agent that is intended to be deployed for a long period of time
In this case, you may even want to consider simply never stopping the learning ("life-long learning"). You can simply keep updating as your agent is deployed and acts in its environment. Or you could consider halting the training whenever performance seems adequate, if you're afraid that it may degrade afterwards during deployment.
31,641 | Limit of beta-binomial distribution is binomial | There are at least two ways of seeing this.
The urn interpretation of the distribution is as follows:
The beta-binomial distribution can also be motivated via an urn model for positive integer values of $\alpha$ and $\beta$, known as the Polya urn model. Specifically, imagine an urn containing $\alpha$ red balls and $\beta$ black balls, where random draws are made. If a red ball is observed, then two red balls are returned to the urn. Likewise, if a black ball is drawn, then two black balls are returned to the urn. If this is repeated $n$ times, then the probability of observing k red balls follows a beta-binomial distribution with parameters $n$,$\alpha$ and $\beta$.
However, if $n$ is negligible compared to the number of balls in the urn, adding a few balls back to the urn makes a negligible difference for the next draw. It follows that the distribution is approximately that of drawing with replacement, which is binomial.
From an algebraic viewpoint, the distribution is
$$
{n \choose k} \frac{B(k + \alpha, n - k + \beta)}{B(\alpha, \beta)}
.
$$
By the properties of the Beta function
$$
B(x + 1, y) = B(x, y) \frac{x}{x + y},
\\
B(x, y + 1) = B(x, y) \frac{y}{x + y}
$$
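These recurrences follow from $\Gamma(x+1)=x\,\Gamma(x)$ and can be verified numerically via log-gamma (the test values are arbitrary):

```python
from math import exp, lgamma

def B(x, y):
    """Beta function computed via log-gamma."""
    return exp(lgamma(x) + lgamma(y) - lgamma(x + y))

x, y = 2.5, 4.0   # arbitrary test values
assert abs(B(x + 1, y) - B(x, y) * x / (x + y)) < 1e-12
assert abs(B(x, y + 1) - B(x, y) * y / (x + y)) < 1e-12
```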
Specifically,
$$
B(i + \alpha, n - k + \beta) = B(i - 1 + \alpha, n - k + \beta) \frac{i - 1 + \alpha}{i - 1 + n - k + \alpha + \beta},
$$
and for large $\alpha, \beta$, taking into account the Taylor series of $\frac{1}{1 + x}$:
$$
\frac{i - 1 + \alpha}{i - 1 + n - k + \alpha + \beta} =
\frac{i - 1 + \alpha}{\alpha} \frac{\alpha}{(\alpha + \beta) \left(1 + \frac{i - 1 + n - k}{\alpha + \beta}\right)}
\sim
\frac{i - 1 + \alpha}{\alpha} \frac{\alpha}{(\alpha + \beta)} \left(1 - \frac{i - 1 + n - k}{\alpha + \beta}\right) \sim \frac{\alpha}{(\alpha + \beta)}
.
$$
Continuing this,
$$
\frac{B(k + \alpha, n - k + \beta)}{B(\alpha, \beta)}
\sim
\frac{B(\alpha, \beta)}{B(\alpha, \beta)} \left( \frac{\alpha}{\alpha + \beta}\right)^k \left( \frac{\beta}{\alpha + \beta} \right)^{n - k}
,
$$
and the distribution is approximately binomial.
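This limit can also be checked numerically, comparing the exact beta-binomial pmf (computed via log-gamma to avoid overflow) against the binomial pmf with $p = \alpha/(\alpha+\beta)$ when $\alpha$ and $\beta$ are large relative to $n$; the numbers here are arbitrary:

```python
from math import comb, exp, lgamma

def log_beta(a, b):
    return lgamma(a) + lgamma(b) - lgamma(a + b)

def beta_binom_pmf(k, n, a, b):
    return comb(n, k) * exp(log_beta(k + a, n - k + b) - log_beta(a, b))

def binom_pmf(k, n, p):
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

n, a, b = 10, 4000.0, 6000.0      # alpha, beta large relative to n
p = a / (a + b)
err = max(abs(beta_binom_pmf(k, n, a, b) - binom_pmf(k, n, p))
          for k in range(n + 1))
print(err)  # very small: the two pmfs nearly coincide
```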
31,642 | Random vs fixed effects: why is the standard error the same for the slope but wildly different for the intercept? | OK, I think I understand this now. I figured one way to understand it might be through a parametric bootstrap of mod using bootMer as an alternative way of generating the standard errors. This function works by randomly sampling new datasets according to the parameters fitted in a given mixed model from lmer, then fitting a new model to that simulated data, to see how the parameter estimates vary for different samples from the same population. It offers one argument in particular, use.u, which allows the user to choose whether the random effects are resampled for each sample, or not. If we choose not to resample them, then we are simulating a world where the random effects come out at the same values each time -- sounds more like fixed effects? -- well, choosing that setting gives us the same standard errors as from the fixed effects model! On the other hand, choosing to simulate u, we get the same standard errors as from the mixed-effects model. (See below for code and output!)
Of course, this fits with the idea of a random effect as one that will come out with different levels each time, as opposed to a fixed effect that does not change (albeit my only confusion in answering this question is that this seems to be a definition that is refuted in some highly rated StackExchange answers such as this one). But that then makes sense of the standard errors presented in the question: if fixed effects are indeed "fixed", then we are making inference in the context of those specific levels of the fixed effects: in the example in the question, in a repeat of the experiment, we would expect the dots to jiggle around within the bounds of the lines they are currently in, but those lines themselves would not move much. On the other hand, if the intercepts of those lines are random effects, then the lines should be expected to appear in different places in each new sample, and our idea of where the "middle" is, is far less clear: for all we know, this sample might have come out with all ten levels of the random effect being below the mean, for example, in which case the true mean could even be above all the lines in the plot. On the other hand, the lines themselves are consistently pointing at a gradient of -1, and whether we view the heights of those lines as fixed or random, we see the same behaviour within a given line, so we should expect to draw the same inference about the gradient in both models.
Code and output:
> booted.simulate.u = bootMer(mod,fixef,nsim=10000,use.u=FALSE)
> booted.fix.u = bootMer(mod,fixef,nsim=10000,use.u=TRUE )
> print(apply(booted.simulate.u$t,2,sd))
(Intercept) x
1.915025294 0.009903413
> print(apply(booted.fix.u$t,2,sd))
(Intercept) x
0.055405505 0.009868006
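The same contrast between the two resampling schemes can be reproduced outside lme4 in a toy parametric bootstrap. The numbers below are hypothetical, loosely mirroring the question's setup (10 groups, slope -1, residual SD much smaller than the between-group SD): redrawing the group effects each replicate inflates the intercept's variability, while keeping them fixed leaves only residual noise.

```python
import numpy as np

rng = np.random.default_rng(3)
n_groups, n_per, tau, sigma = 10, 90, 6.0, 0.5   # hypothetical sizes and SDs
x = rng.uniform(0, 10, size=(n_groups, n_per))

def intercept_est(y):
    """OLS intercept of y ~ x, pooling all observations."""
    X = np.column_stack([np.ones(y.size), x.ravel()])
    return np.linalg.lstsq(X, y.ravel(), rcond=None)[0][0]

def bootstrap_se(resample_groups, reps=400):
    u_fixed = rng.normal(0, tau, n_groups)     # one fixed draw of group effects
    ests = []
    for _ in range(reps):
        u = rng.normal(0, tau, n_groups) if resample_groups else u_fixed
        y = 21 + u[:, None] - x + rng.normal(0, sigma, x.shape)
        ests.append(intercept_est(y))
    return float(np.std(ests))

se_random, se_fixed = bootstrap_se(True), bootstrap_se(False)
print(se_random, se_fixed)   # the intercept SE is far larger when groups vary
```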
31,643 | Random vs fixed effects: why is the standard error the same for the slope but wildly different for the intercept? | The rationale for fitting a fixed or random intercept is much the same. The choice ultimately boils down to power: considerably more power is required to fit the fixed effect with one level for each cluster. You have 10 clusters with 90 observations.
The term called "intercept" in the fixed and mixed effects model has a different interpretation. For the fixed model the intercept is generated by contrast to estimate the predicted mean in the $p_1$ cluster with $x=0$. In the mixed model, the intercept term is the expected value for a person with $X=0$ drawn from no cluster in particular (or, similarly, averaging up over all the possible clusters to which that person may belong).
You'll note in the fixed effects output that $p_1$ is not included as a term. The (intercept) is 12. The subsequent $p_2, p_3, \ldots$ terms provide the contrast as a mean difference from $p_1$. So they tend to be 2 units higher per your simulation.
Your use of contrasts then marginalizes the fixed intercepts into a grand mean centered at 21, which you'll note is about the average of the sequence of generated means for $p$ of [12, 14, 16, 18, 20, 22, 24, 26, 28, 30]. The standard error for this marginalized coefficient is going to be more precise than the value you would get from ignoring clustering and merely looking at the intercept term in a fixed effect model which only adjusts for $x$.
So this value has the same interpretation as the mixed model. Why is the inference different? The Wald based inference (obtained from summary.merMod) is inappropriate in this case. The documentation in R points to using bootMer or confint with profile likelihood. The joint distribution of fixed and random effects is hierarchical, so as with the fixed effects model, you need to perform some kind of marginalization. The bootstrap and profile likelihood methods are akin to MCMC and numerical integration. That's why these results finally agree.
31,644 | Does Simpson's Paradox cover all instances of reversal from a hidden variable? | The paradox is that there exist 2x2x2 contingency tables (Agresti,
Categorical Data Analysis) where the marginal association has a
different direction from each conditional association [...] Am I missing a subtle transformation from the original Simpson/Yule examples of contingency tables into real values that justify the regression line visualization?
The main issue is that you are equating one simple way of showing the paradox with the paradox itself. The simple example of the contingency table is not the paradox per se. Simpson's paradox is about conflicting causal intuitions when comparing marginal and conditional associations, most often due to sign reversals (or extreme attenuations such as independence, as in the original example given by Simpson himself, in which there isn't a sign reversal). The paradox arises when you interpret both estimates causally, which could lead to different conclusions --- does the treatment help or hurt the patient? And which estimate should you use?
Whether the paradoxical pattern shows up on a contingency table or in a regression, it doesn't matter. All variables can be continuous and the paradox could still happen --- for instance, you could have a case where $\frac{\partial E(Y|X)}{\partial X} > 0$ yet $\frac{\partial E(Y|X, C = c)}{\partial X} < 0, \forall c$.
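To make the continuous case concrete, here is a small simulation (all parameter choices are mine, purely for illustration): within each cluster the slope of $y$ on $x$ is $-1$, but the cluster means line up along a positive trend, so the pooled slope reverses sign.

```python
import numpy as np

rng = np.random.default_rng(0)
x_parts, y_parts, g_parts = [], [], []
for c in range(5):
    # cluster c is centered at (2c, 2c), so the centers trend upward...
    x = rng.normal(loc=2 * c, scale=0.5, size=200)
    # ...but within each cluster the slope of y on x is -1
    y = 2 * c - (x - 2 * c) + rng.normal(scale=0.2, size=200)
    x_parts.append(x); y_parts.append(y); g_parts.append(np.full(200, c))

x, y, g = map(np.concatenate, (x_parts, y_parts, g_parts))
marginal_slope = np.polyfit(x, y, 1)[0]  # pooled slope: positive
within_slopes = [np.polyfit(x[g == c], y[g == c], 1)[0] for c in range(5)]  # all negative
```

Here the marginal association of $x$ and $y$ is positive while every conditional (within-cluster) association is negative, with no contingency table in sight.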
Surely Simpson's is a particular instance of confounding error.
This is incorrect! Simpson's paradox is not a particular instance of confounding error -- if it were just that, then there would be no paradox at all. After all, if you are sure some relationship is confounded you would not be surprised to see sign reversals or attenuations in contingency tables or regression coefficients --- maybe you would even expect that.
So while Simpson's paradox refers to a reversal (or extreme attenuation) of "effects" when comparing marginal and conditional associations, this might not be due to confounding and a priori you can't know whether the marginal or the conditional table is the "correct" one to consult to answer your causal query. In order to do that, you need to know more about the causal structure of the problem.
Consider these examples given in Pearl:
Imagine that you are interested in the total causal effect of $X$ on $Y$.
The reversal of associations could happen in all of these graphs. In (a) and (d) we have confounding, and you would adjust for $Z$. In (b) there's no confounding, $Z$ is a mediator, and you should not adjust for $Z$. In (c) $Z$ is a collider and there's no confounding, so you should not adjust for $Z$ either. That is, in two of these examples (b and c) you could observe Simpson's paradox, yet, there's no confounding whatsoever and the correct answer for your causal query would be given by the unadjusted estimate.
Pearl's explanation of why this was deemed a "paradox" and why it still puzzles people is very plausible. Take the simple case depicted in (a) for instance: causal effects can't simply reverse like that. Hence, if we are mistakenly assuming both estimates are causal (the marginal and the conditional), we would be surprised to see such a thing happening --- and humans seem to be wired to see causation in most associations.
So back to your main (title) question:
Does Simpson's Paradox cover all instances of reversal from a hidden
variable?
In a sense, this is the current definition of Simpson's paradox. But obviously the conditioning variable is not hidden, it has to be observed otherwise you would not see the paradox happening. Most of the puzzling part of the paradox stems from causal considerations and this "hidden" variable is not necessarily a confounder.
Contingency tables and regression
As discussed in the comments, the algebraic identity between running a regression with binary data and computing the differences of proportions from the contingency tables might help in understanding why the paradox showing up in regressions is of a similar nature. Imagine your outcome is $y$, your treatment $x$ and your groups $z$, all variables binary.
Then the overall difference in proportion is simply the regression coefficient of $y$ on $x$. Using your notation:
$$
\frac{a+b}{c+d} - \frac{e+f}{g+h} = \frac{cov(y,x)}{var(x)}
$$
And the same thing holds for each subgroup of $z$ if you run separate regressions, one for $z=1$:
$$
\frac{a}{c} - \frac{e}{g} = \frac{cov(y,x|z =1)}{var(x|z=1)}
$$
And another for $z =0$:
$$
\frac{b}{d} - \frac{f}{h} = \frac{cov(y,x|z=0)}{var(x|z=0)}
$$
Hence, in terms of regression, the paradox corresponds to the coefficient for the whole population $\left(\frac{cov(y,x)}{var(x)}\right)$ pointing in one direction and the coefficients of the subgroups $\left(\frac{cov(y,x|z)}{var(x|z)}\right)$ pointing in the other.
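The identity is easy to verify numerically. In this sketch (simulated binary data; the identity itself is exact for a binary regressor), the difference in proportions equals the OLS slope $\frac{cov(y,x)}{var(x)}$:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.integers(0, 2, size=1000)  # binary treatment
# binary outcome: P(y=1) is 0.7 in the treated, 0.4 in the untreated
y = (rng.random(1000) < np.where(x == 1, 0.7, 0.4)).astype(int)

diff_in_proportions = y[x == 1].mean() - y[x == 0].mean()
ols_slope = np.cov(y, x, ddof=0)[0, 1] / np.var(x)  # matching ddof in cov and var

# the two quantities agree to machine precision
```

The same check run within each subgroup of $z$ reproduces the conditional versions of the identity.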
31,645 | Does Simpson's Paradox cover all instances of reversal from a hidden variable? | Am I missing a subtle transformation from the original Simpson/Yule examples of contingency tables into real values that justify the regression line visualization?
Yes. A similar representation of categorical analyses is possible by visualizing the log-odds of response on the Y-axis. Simpson's paradox appears much the same way with a "crude" line running against the stratum-specific trends weighted in distance according to the stratum referent log-odds of the outcome.
Here's an example with the Berkeley admissions data
Here gender is a male/female code, on the X-axis is the crude admissions log-odds for male versus female, and the heavy dashed black line shows gender preference: the positive slope suggests a bias toward male admissions. The colors represent admission to specific departments. In all but two cases, the slope of the department-specific gender-preference line is negative. If these results are averaged together in a logistic model not accounting for interaction, the overall effect is a reversal favoring female admissions. Women applied to harder departments more frequently than men did.
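The reversal can be checked directly from the aggregate counts (these are the admitted/applied numbers as found in the classic UCBAdmissions dataset; treat them as quoted data, not computed here):

```python
# (admitted, applied) per department, 1973 Berkeley graduate admissions
male   = {"A": (512, 825), "B": (353, 560), "C": (120, 325),
          "D": (138, 417), "E": (53, 191),  "F": (22, 373)}
female = {"A": (89, 108),  "B": (17, 25),   "C": (202, 593),
          "D": (131, 375), "E": (94, 393),  "F": (24, 341)}

def rate(admitted, applied):
    return admitted / applied

crude_male = rate(*map(sum, zip(*male.values())))      # ~0.445
crude_female = rate(*map(sum, zip(*female.values())))  # ~0.304

# crude rates favor men, yet in 4 of 6 departments women are admitted
# at the higher rate -- the departments differ in how selective they are
higher_female = [d for d in male if rate(*female[d]) > rate(*male[d])]
```

The crude comparison favors male admissions while the department-specific comparisons mostly favor women, which is exactly the pattern the plot shows on the log-odds scale.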
Surely Simpson's is a particular instance of confounding error. Has the term 'Simpson's Paradox' now become equated with confounding error, so that whatever the math, any change in direction via a hidden variable can be called Simpson's Paradox?
Briefly, no. Simpson's paradox is merely the "what" whereas confounding is the "why". The dominant discussion has focused on where they agree. Confounding may have a minimal or negligible effect on estimates, and conversely Simpson's paradox, while dramatic, may be caused by non-confounders. As a note, the terms "hidden" or "lurking" variable are imprecise. From an epidemiologist's perspective, careful control and design of a study should enable measurement or control of possible contributors to confounding bias. They need not be "hidden" to be a problem.
There are times in which point estimates may vary drastically, to the point of reversal, without this resulting from confounding. Colliders and mediators can also change effects, possibly reversing them. Causal reasoning warns that when studying an effect, the main effect should be studied in isolation rather than adjusting for these variables, as the stratified estimate is wrong. (It is akin to inferring, incorrectly, that seeing the doctor makes you sick, or that guns kill people hence people don't kill people.)
31,646 | Why normalize data to the range [0,1] in autoencoders? | In general, the exact normalization of data isn't super important in neural networks as long as the inputs are at some reasonable scale. As Alex mentioned, with images, normalization to 0 and 1 happens to be very convenient.
The fact that normalization doesn't matter much is only made stronger by the use of batch normalization, which is a function/layer frequently used in neural networks that renormalizes the activations halfway through the network to zero mean and unit variance. And the authors of the paper you linked did use batch normalization, which means that however the data was normalized before, it was renormalized a bunch of times inside the network anyway.
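As a rough sketch of what such a layer does to scaling (ignoring the learnable scale/shift parameters a real batch-norm layer also has):

```python
import numpy as np

def batch_norm(activations, eps=1e-5):
    # renormalize each feature over the batch to zero mean, unit variance
    mu = activations.mean(axis=0)
    var = activations.var(axis=0)
    return (activations - mu) / np.sqrt(var + eps)

# activations at some arbitrary scale (mean 50, std 10)...
a = np.random.default_rng(0).normal(loc=50.0, scale=10.0, size=(64, 8))
h = batch_norm(a)
# ...come out with per-feature mean ~0 and std ~1, whatever the input scale was
```

This is why whatever scaling the inputs had is largely washed out after the first such layer.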
Furthermore, reading their code, which is on github, they actually did preprocess the data in two ways -- with zero mean, unit variance normalization, and also with min 0, max 1 normalization. They didn't explain why they chose one preprocessed dataset over the other, but I suspect they either just arbitrarily chose min 0, max 1 normalization, or some preliminary hyperparameter searches showed that it worked better for whatever reason.
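For reference, the two preprocessing variants amount to the following (a sketch with invented variable names, not their actual code):

```python
import numpy as np

x = np.random.default_rng(2).uniform(3.0, 9.0, size=1000)  # raw feature values

x_minmax = (x - x.min()) / (x.max() - x.min())  # squashed into [0, 1]
x_zscore = (x - x.mean()) / x.std()             # zero mean, unit variance
```

Either one puts the inputs at a "reasonable scale" in the sense described above.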
31,647 | How does xgboost select which feature to split on? | I believe that the xgboost paper is a little imprecise on this point, but I will share my interpretation of what they describe. This understanding is not based on reading the code, so take it with a grain of salt and hope for a braver soul to put a finer point on this answer (I found this question because I had the same one after reading the paper).
The lead-in to the discussion of the approximate algorithm for choosing a split point says
". . . a split
finding algorithm enumerates over all the possible splits on
all the features"
and then goes on to say that this exhaustive search is infeasible for large datasets and also does not interact well with distributed training, so that an approximate algorithm is needed. This algorithm is described in some detail in the appendix, and it is (I believe) specifically an algorithm for finding candidate split points for a given feature.
Their description of how this approximate algorithm is used says
To summarize, the algorithm first proposes candidate
splitting points according to percentiles of feature
distribution (a specific criteria will be given in Sec. 3.3).
The algorithm then maps the continuous features into buckets
split by these candidate points, aggregates the statistics
and finds the best solution among proposals based on the
aggregated statistics.
I believe that this means that candidate splits are proposed for every single feature in the dataset and that all of these candidate splits are inspected to find the optimal one. This means that the method of finding splits really is an approximate form of the algorithm that inspects every single feature and every single split to find the best one but that the approximation only happens at the level of the splits proposed for each feature. Every single feature is still inspected.
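A toy sketch of this reading (helper names are invented; real xgboost uses weighted quantile sketches rather than plain percentiles, and the scoring below uses the paper's gain term $G^2/(H+\lambda)$ without the constant factor or the pruning penalty $\gamma$):

```python
import numpy as np

def candidate_splits(feature, n_buckets=10):
    # propose candidate split points at percentiles of the feature distribution
    qs = np.linspace(0, 100, n_buckets + 1)[1:-1]
    return np.unique(np.percentile(feature, qs))

def best_split(X, grad, hess, lam=1.0):
    # every feature is still inspected; only the split points are approximated
    def leaf_score(g, h):
        return g.sum() ** 2 / (h.sum() + lam)
    best_gain, best_feat, best_point = -np.inf, None, None
    for j in range(X.shape[1]):
        for s in candidate_splits(X[:, j]):
            left = X[:, j] <= s
            gain = (leaf_score(grad[left], hess[left])
                    + leaf_score(grad[~left], hess[~left])
                    - leaf_score(grad, hess))
            if gain > best_gain:
                best_gain, best_feat, best_point = gain, j, s
    return best_gain, best_feat, best_point
```

Here `grad` and `hess` are the per-example first and second derivatives of the loss; the split is chosen by scoring every proposed candidate on every feature and keeping the best.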
31,648 | How does xgboost select which feature to split on? | The complete algorithm is outlined in the xgboost paper, which also provides this summary:
We summarize an approximate framework, which resembles the ideas proposed in past literatures, in Alg. 2. To summarize, the algorithm first proposes candidate splitting points according to percentiles of feature distribution (a specific criteria will be given in Sec. 3.3). The algorithm then maps the continuous features into buckets split by these candidate points, aggregates the statistics and finds the best solution among proposals based on the aggregated statistics.
Since it's finding the best solution (feature and split) from some set of proposed solutions, we can infer that it is (approximately) optimizing some metric, and that the selection of features and splits is not at random. More complete information is provided in the paper.
It's also worth noting that random forest doesn't simply select features at random -- for each split, random forest selects a subset of features, and then chooses the best split from among the random subset.
31,649 | How does xgboost select which feature to split on? | The splitting algorithm for XGBoost is pre-sorted and histogram-based (which is what Joel described in his answer). If you search with these keywords, you will understand them better. Also, my understanding is that to figure out the best split, XGBoost scores the split candidates using the equation obtained from the derivative of the objective function.
31,650 | The correct loss function for logistic regression | With the sigmoid function in logistic regression, these two loss functions are exactly the same; the only difference is that
$y_i\in\{-1,1\}$ is used in the first loss function;
$y_i\in\{0,1\}$ is used in the second loss function.
Both loss functions can be derived by maximizing the likelihood function.
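The equivalence is easy to check numerically (a sketch; here $f$ stands for the linear score $w^\top x$, and the $\{-1,1\}$ labels are mapped to $\{0,1\}$ as $+1 \to 1$, $-1 \to 0$):

```python
import numpy as np

def loss_pm1(f, y):
    # first form, labels y in {-1, +1}: log(1 + exp(-y f))
    return np.log1p(np.exp(-y * f))

def loss_01(f, y):
    # second form, labels y in {0, 1}, with p = sigmoid(f)
    p = 1.0 / (1.0 + np.exp(-f))
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

f = np.linspace(-4.0, 4.0, 9)
same_pos = np.allclose(loss_pm1(f, 1), loss_01(f, 1))   # positive class
same_neg = np.allclose(loss_pm1(f, -1), loss_01(f, 0))  # negative class
```

Both comparisons come out equal, since $-\log\sigma(f) = \log(1+e^{-f})$ and $-\log(1-\sigma(f)) = \log(1+e^{f})$.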
31,651 | The correct loss function for logistic regression | This is related to the choice of the labels, and each choice has (arguably) some advantages over the other. You should visit here for more detailed information on the topic.
31,652 | Where does sklearn's weighted F1 score come from? | The F1 Scores are calculated for each label and then their average is weighted by support - which is the number of true instances for each label. It can result in an F-score that is not between precision and recall.
For example, a simple weighted average is calculated as:
>>> import numpy as np;
>>> from sklearn.metrics import f1_score
>>> np.average( [0,1,1,0 ], weights=[1,1,1,1] )
0.5
>>> np.average( [0,1,1,0 ], weights=[1,1,2,1] )
0.59999999999999998
>>> np.average( [0,1,1,0 ], weights=[1,1,4,1] )
0.7142857142857143
The weighted average for each F1 score is calculated the same way:
f_score = np.average(f_score, weights=weights)
For example:
>>> f1_score( [1,0,1,0], [0,0,1,1] )
0.5
>>> f1_score( [1,0,1,0], [0,0,1,1], sample_weight=[1,1,2,1] )
0.66666666666666663
>>> f1_score( [1,0,1,0], [0,0,1,1], sample_weight=[1,1,4,1] )
0.80000000000000016
It's intended to be used for emphasizing the importance of some samples w.r.t. the others.
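To see the support-weighted averaging described at the top directly, `average='weighted'` reproduces a plain `np.average` of the per-label F1 scores (toy labels invented for illustration):

```python
import numpy as np
from sklearn.metrics import f1_score

y_true = [0, 0, 1, 1, 1, 2]
y_pred = [0, 1, 1, 2, 2, 2]

per_label = f1_score(y_true, y_pred, average=None)  # one F1 per label
support = np.bincount(y_true)                       # true instances per label
by_hand = np.average(per_label, weights=support)

# matches sklearn's built-in support-weighted average
matches = np.isclose(by_hand, f1_score(y_true, y_pred, average='weighted'))
```

Note this support weighting (`average='weighted'`) is a different mechanism from the per-example `sample_weight` shown above, though both enter through a weighted average.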
Edited to answer the origin of the F-score:
The F-measure was first introduced to evaluate tasks of information extraction at the Fourth Message Understanding Conference (MUC-4) in 1992 by Nancy Chinchor, "MUC-4 Evaluation Metrics", https://www.aclweb.org/anthology/M/M92/M92-1002.pdf . It refers to van Rijsbergen's F-measure, which refers to the paper by N Jardine and van Rijsbergen CJ - "The use of hierarchical clustering in information retrieval."
It is also known by other names such as Sørensen–Dice coefficient, the Sørensen index and Dice's coefficient. This originates from the 1948 paper by Thorvald Julius Sørensen - "A method of establishing groups of equal amplitude in plant sociology based on similarity of species content and its application to analyses of the vegetation on Danish commons."
31,653 | The value of a terminal state in reinforcement learning | My question is, how should I define the value of the terminal state?
The state value of the terminal state in an episodic problem should always be zero. The value of a state is the expected sum of all future rewards when starting in that state and following a specific policy. For the terminal state, this is zero - there are no more rewards to be had.
So if I want to improve my policy by making it greedy in respect to the neighbor states, the states next to the terminal states won't want to choose the terminal state (since there are positive non-terminal states neighboring it)
You have not made it 100% clear here, but I am concerned that you might be thinking the greedy policy is chosen like this: $\pi(s) = \text{argmax}_a [\sum_{s'} p(s'|s, a) v(s') ]$ - where $v(s)$ is your state value function, and $p(s'|s, a)$ is the probability of transition to state $s'$ given starting state $s$ and action $a$ (using the same notation as Sutton & Barto, 2nd edition). That is not the correct formula for the greedy action choice. Instead, in order to maximise the reward from the next action, you take into account the immediate reward plus the expected future rewards from the next state (I have added in the commonly-used discount factor $\gamma$ here):
$$\pi(s) = \text{argmax}_a [\sum_{s',r} p(s',r|s, a)(r + \gamma v(s')) ]$$
If you are more used to seeing transition matrix $P_{ss'}^a$ and expected reward matrix $R_{ss'}^a$, then the same formula using those is:
$$\pi(s) = \text{argmax}_a [\sum_{s'} P_{ss'}^a( R_{ss'}^a + \gamma v(s')) ]$$
When you use this greedy action choice, the action that transitions to the terminal state has at least equal value to the other choices.
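A minimal numeric sketch of this greedy improvement step. The toy MDP below is invented for illustration (entering the terminal state, state 2, pays reward 1, mirroring the reward structure in the question); `P` and `R` play the roles of $P_{ss'}^a$ and $R_{ss'}^a$:

```python
import numpy as np

n_states, n_actions, gamma = 3, 2, 0.9
P = np.zeros((n_actions, n_states, n_states))  # P[a, s, s'] transition probs
P[0] = [[0.0, 1.0, 0.0], [0.0, 0.0, 1.0], [0.0, 0.0, 1.0]]
P[1] = [[0.0, 0.0, 1.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]
R = np.zeros((n_actions, n_states, n_states))  # R[a, s, s'] expected rewards
R[:, :, 2] = 1.0   # reward 1 for entering the terminal state (state 2)...
R[:, 2, 2] = 0.0   # ...but not for the terminal state's self-loop
v = np.zeros(n_states)  # terminal state value fixed at 0

# Greedy choice: argmax_a sum_s' P[a,s,s'] * (R[a,s,s'] + gamma * v[s'])
q = np.einsum('asn,asn->as', P, R + gamma * v)  # shape (actions, states)
pi = q.argmax(axis=0)
```

With `v` all zero, the immediate reward term alone already makes the action that enters the terminal state the greedy pick from each non-terminal state.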
In addition, your specific problem has another issue, related to how you have set the rewards.
I am working in an environment where each transition rewards 0 except for the transitions into the terminal state, which reward 1.
Does this sort of environment just not work with state-value dynamic programming methods of reinforcement learning? I don't see how I can make this work.
Recall that state values are the value of the state only assuming a specific policy. Answers to your problem are going to depend on the type of learning algorithm you use, and whether you allow stochastic or deterministic policies. There should always be at least one state with at least some small chance of transitioning to the terminal state, in order for any state to have a value other than 0. That should be guaranteed under most learning algorithms. However, many of these algorithms could well learn convoluted policies which choose not to transition to the terminal state when you would expect/want them to (without knowing your problem definition, I could not say what the intuitive behaviour would be).
Your biggest issue is that with your reward structure, you have given the agent no incentive to end the episode. Yes it can get a reward of 1, but your reward scheme means that the agent is guaranteed to get that reward eventually whatever it does, there is no time constraint. If you applied a learning algorithm - e.g. Policy Iteration - to your MDP, you could find that all states except the terminal state have value of 1, which the agent will get eventually once it transitions to the terminal state. As long as it learns a policy where that happens eventually, then as far as the agent is concerned, it has learned an optimal policy.
If you want to have an agent that solves your MDP in minimal time, in an episodic problem, it is usual to encode some negative reward for each time step. A basic maze solver for instance, typically gives reward -1 for each time step.
An alternative might be to apply a discount factor $0 \lt \gamma \lt 1$ - which will cause the agent to have some preference for immediate rewards, and this should impact the policy so that the step to the terminal state is always taken.
31,654 | The value of a terminal state in reinforcement learning | Make the reward for the transition entering your final state 1, and the reward for the transition from the terminal state to itself 0.
31,655 | Response is an Integer. Should I use classification or regression? | I have recently used the abalone dataset for illustrating some regression methods and encountered basically the same questions. (UPDATE: link to paper "Predictive State Smoothing (PRESS): Scalable non-parametric regression for high-dimensional data with variable selection".)
Here is my take on it:
I would say regression is the most natural way to approach this problem (see the general comment at the end of this post for the domain-specific rationale). Doing a plain multi-class classification approach is IMHO downright wrong -- for the reason you point out (predicting '22' for a '3' is as good/bad as predicting a '4' -- which is obviously not true).
I think you are looking for 'ordered' or 'ordinal' classification, which takes such an ordering into account (see e.g., http://www.cs.waikato.ac.nz/~eibe/pubs/ordinal_tech_report.pdf which also contains an example on the Abalone dataset.) However, even ordinal classification has the problem that you can't predict anything else than the observed number of rings. Say, one day there is a massive abalone shell that's 20% larger than any shell we have seen before -- a classification approach will most likely put it in the largest class, which is '29'. However, that makes no sense as any biologist will tell you that that shell is most likely a rare find of a, say, 35 ring abalone shell.
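The extrapolation point can be sketched numerically. Everything below is hypothetical (a made-up linear relation between shell length and rings, not the real abalone data): a regression fit can predict ring counts it never saw in training, which no classifier over the observed labels can do.

```python
import numpy as np

rng = np.random.default_rng(0)
length = rng.uniform(0.1, 0.8, 500)                       # training shells
rings = np.rint(5 + 25 * length + rng.normal(0, 1, 500))  # integer response

slope, intercept = np.polyfit(length, rings, 1)
big_shell = 1.0                       # 25% larger than anything in training
pred = intercept + slope * big_shell  # about 30 -- outside the observed labels
```

A classifier trained on these labels would be capped at the largest observed class, whereas the regression line extends naturally.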
No, not a problem at all -- it's just part of your prediction model.
Having said all this, in the end you should ask yourself what is the domain-specific problem the abalone data is trying to help solve?!
It is predicting the age of a shell, which uses the number of rings as a proxy. A biologist is not really interested in predicting the number of rings, they want to know the age. So a prediction of, say, 6.124 is not less useful than '6' or '7' -- in fact, it's probably more useful. I "blame" this on CS/eng trying to cast everything as a precision/recall problem, so they like to emphasize this as an integer prediction/classification problem rather than regression -- not because that's actually the underlying problem, but because it fits their tools and benchmark metrics (who does not love to throw a deep net classifier on this problem and declare victory because "precision/recall or AUC is really high" ;) )
31,656 | Too large batch size | Read the following paper. It's a great read.
On Large-Batch Training for Deep Learning: Generalization Gap and Sharp Minima, Nitish Shirish Keskar et al., ICLR 2017.
There are many great discussions and empirical results on benchmark datasets comparing the effect of different batch sizes. As they conclude, large batch sizes cause over-fitting, which they explain as convergence to sharp minima.
The code is also available here.
31,657 | Too large batch size | A too-large batch size can introduce numerical instability, and Layer-wise Adaptive Learning Rates can help stabilize the training.
31,658 | Estimating the gradient of log density given samples | I added a paragraph in 2021, but this is still a 2018 answer. There's been a bunch of recent progress on score-based methods, particularly in diffusion models.
This problem is sometimes called score estimation, because $\nabla_x \log p(x)$ is the score of whatever model $p$ with respect to a hypothetical location parameter. I've seen it called the Hyvärinen score after Aapo Hyvärinen's technique called score matching for fitting (unnormalized) statistical models.
Some flavors of score estimation:
From a density estimate
Just do any flavor of density estimation, $\hat p(x)$, and differentiate its log, perhaps using automatic differentiation as in the other answers.
Kernel density estimation might be reasonable in fairly low dimensions:
$$
\hat p(x) = \frac1n \sum_{i=1}^n k(X_i, x)
\qquad
\nabla_x \log \hat p(x)
= \frac{\frac1n \sum_{i=1}^n \nabla_x k(X_i, x)}{\frac1n \sum_{i=1}^n k(X_i, x)}
.$$
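A hedged sketch of the KDE route above with a Gaussian kernel (the bandwidth and test point are arbitrary choices for illustration). For standard-normal samples the smoothed score at $x$ is about $-x/(1+h^2)$, which gives a sanity check:

```python
import numpy as np

def gaussian_kde_score(x, X, h=0.5):
    # Gaussian kernel k(X_i, x) = exp(-||x - X_i||^2 / (2 h^2)); its gradient
    # w.r.t. x is k(X_i, x) * (X_i - x) / h^2, so the ratio of summed kernel
    # gradients to summed kernel values is the score of the KDE density.
    diffs = X - x
    k = np.exp(-0.5 * np.sum(diffs ** 2, axis=1) / h ** 2)
    return (k[:, None] * diffs / h ** 2).sum(axis=0) / k.sum()

rng = np.random.default_rng(0)
X = rng.standard_normal((20000, 2))  # samples from N(0, I), true score is -x
s = gaussian_kde_score(np.array([0.5, -0.5]), X)
print(s)  # near -x / (1 + h^2) = (-0.4, 0.4), i.e. biased toward zero
```

The shrinkage by $1/(1+h^2)$ is exactly the bandwidth-induced bias mentioned in the surrounding text: you estimate the score of the smoothed density, not of $p$ itself.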
Deep density estimation models are a nice fit for this setup since they already use autodiff. One good recent model is
Papamakarios, Pavlakou, and Murray. Masked Autoregressive Flow for Density Estimation. NIPS 2018.
This type of approach is more computationally expensive than KDE but can yield excellent results, particularly if you have a fair amount of data and a GPU. Code is available from the authors.
It can also be pretty bad, though. A histogram estimator is an example of an estimator that can be quite good at density estimation (e.g. minimax optimal in some settings), but whose estimate of the derivative is terrible (zero everywhere except where it's undefined). So you also might want a loss function that encourages getting the derivative right.
From an unnormalized log-density estimate
Unnormalized log-densities are sometimes called the energy. Differentiating one of these estimates will be the score.
Kernel exponential families, which are fit via score matching, are a reasonably natural fit for score estimation. (They're hard to normalize, but luckily that doesn't matter for the score.) There's some evidence that these scale better than KDE in moderate dimensions for at least certain kinds of densities. For a recent computationally reasonable-ish approach, see
Sutherland (ahem), Strathmann, Arbel, and Gretton. Efficient and principled score estimation with Nyström kernel exponential families. AISTATS 2018.
These have some nice theoretical properties but, in my opinion as one of the authors of that paper: practical application still requires a lot of manual work in choosing the kernel and hyperparameters (something we're working on now). Code is available from the authors, but email me if you want to actually try it. :)
This is a more recent approach to directly train a network to output the energy, using a score matching variant. I'm not familiar with the practicalities here, but it seems potentially promising:
Saremi, Mehrjou, Schölkopf, and Hyvärinen. Deep Energy Estimator Networks. arXiv, May 2018.
Direct score estimators
You can also train a model to just directly output the score.
One problem with this approach is that you might or might not get something that's actually a valid gradient of some vector field; the first approach below certainly suffers from that problem, I'm not sure about the later ones.
In theory, if you train a denoising autoencoder $r(x + \sigma \varepsilon) \approx x$, where $\varepsilon \sim \mathcal N(0, I)$, then as $\sigma \to 0$ you have $\frac{r(x + \sigma \varepsilon) - x}{\sigma} \to \nabla_x \log p(x)$ under some assumptions about the strength of the autoencoder architecture and your ability to optimize it.
The estimator as written in this paper doesn't seem to work especially well (see the experimental results of the kernel exp family and deep energy estimator network papers above), but it might work well with a more complex network structure and more careful parameter tuning / etc. I wouldn't particularly recommend this strategy, but it's interesting.
Alain and Bengio. What Regularized Auto-Encoders Learn from the Data Generating Distribution. JMLR 2014.
The following paper (in Section 3.1) proposes an estimator specifically for $\nabla_x \log p(X_i)$, given only $X_i \sim p$, based on inverting Stein's identity (and also depending on a kernel choice). I'm not familiar with its practical performance, but imagine it should be okay if you choose a reasonable kernel and can get away with only evaluating the score at the $X_i$.
Li and Turner. Gradient Estimators for Implicit Models. ICLR 2018.
This paper is a sequel of sorts to the previous one, which gives a different (Stein-based) estimator that can be evaluated at out-of-sample points and provides some theoretical guarantees. I haven't read it yet, but it's high on the list:
Shi, Sun, and Zhu. A Spectral Approach to Gradient Estimation for Implicit Distributions. arXiv, June 2018.
31,659 | Estimating the gradient of log density given samples | To get things started and also give some suggestions for open-source implementations, I know that theano (which is accessed via Python) allows one to compute gradients numerically, using theano.gradient.numeric_grad (http://deeplearning.net/software/theano/library/gradient.html#module-theano.gradient).
There is also a python package called PyMC3 which allows one to implement Bayesian methods in Python and it also makes heavy use of theano. It offers Hamiltonian Monte Carlo:
https://pymc-devs.github.io/pymc3/notebooks/getting_started.html :
PyMC3, Stan (Stan Development Team, 2014), and the LaplacesDemon package for R are currently the only PP packages to offer HMC.
This might help you to get some of the suggestions implemented.
31,660 | non normality in multiple linear regression | Okay, a few things.
1) I always advise against using tests for normality. They answer a question you already know the answer to, i.e. "Is your data normal?" (The answer is no because nothing is normal) vs the question "Is the lack of normality going to be a problem?" which is the question you should be interested in.
2) The assumption of normality is not so much about the predictive performance, but rather the correctness of the inference you would perform (hypothesis tests and confidence intervals).
3) Some deviation from normality is okay, because we have asymptotics that drive test statistics to normality.
4) Your QQ-plot does not appear to be severely non-normal (although there might be some bimodality in your residuals; you may want to check whether there is an omitted variable or something). As another commenter stated, normality is the assumption that can tolerate some failure (mild to moderate deviations from it).
5) So, to answer your question:
(i) Yes, you do the log transform (or some other transformation) first.
(ii) Once you transform your variable, the non-normality may resolve. EDIT: it may also be worth looking into why the residuals seem to fall into two distinct clusters.
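As a quick illustration of point (i) -- this is a sketch of my own with a simulated "salary" variable, not the poster's data -- a log transform removes most of the skew of a right-skewed variable:

```python
import numpy as np

rng = np.random.default_rng(0)
salary = rng.lognormal(mean=10.0, sigma=0.8, size=5000)  # right-skewed, salary-like

def skewness(x):
    # standardized third moment
    z = (x - x.mean()) / x.std()
    return np.mean(z ** 3)

skew_raw = skewness(salary)          # strongly positive for the raw variable
skew_log = skewness(np.log(salary))  # roughly zero: log(salary) is normal here
```

The lognormal choice is deliberate: it is the textbook case where the log transform works perfectly, which real salary data will only approximate.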
31,661 | non normality in multiple linear regression

Note: Linear regression does not assume that the response variable is normally distributed. Instead, it assumes that the residuals are normally distributed (see the assumptions of the classical normal linear model). In addition, this assumption is the "least important one", i.e., it can be violated and the model will still work "fine".
They are different: one is about the marginal distribution and the other about the conditional distribution. A detailed example can be found here: Why linear regression has assumption on residual but generalized linear model has assumptions on response?
31,662 | non normality in multiple linear regression

I wouldn't worry about normality, at least at this stage of your analysis. Try using a log transformation on the dependent variable. Salary is a good candidate for a log transform. This removes skewness; then you'll be good to continue the analysis.
31,663 | Which deep learning model can classify categories which are not mutually exclusive

You can achieve this multi-label classification by replacing the softmax with a sigmoid activation and using binary crossentropy instead of categorical crossentropy as the loss function. Then you just need one network with as many output units/neurons as you have labels.
You need to change the loss to binary crossentropy as the categorical cross entropy only gets the loss from the prediction for the positive targets. To understand this, look at the formula for the categorical crossentropy loss for one example $i$ (class indices are $j$):
$ L_i = - \sum_j{t_{i,j} \log(p_{i,j})}$
In the normal multiclass setting, you use a softmax, so that the prediction for the correct class is directly dependent on the predictions for the other classes. If you replace the softmax by sigmoid this is no longer true, so negative examples (where $t_{i,j}=0$) are no longer used in the training!
That's why you need to change to binary crossentropy, which uses both positive and negative examples:
$L_i=-\sum_j{t_{i,j} \log(p_{i,j})} -\sum_j{(1 - t_{i,j}) \log(1 - p_{i,j})}$
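A tiny NumPy check (my own sketch, with made-up targets and sigmoid outputs) makes the difference concrete: changing a prediction for a negative label ($t_{i,j}=0$) leaves the categorical loss untouched but increases the binary loss:

```python
import numpy as np

def categorical_ce(t, p):
    # only the positive targets (t = 1) contribute
    return -np.sum(t * np.log(p))

def binary_ce(t, p):
    # positive and negative targets both contribute
    return -np.sum(t * np.log(p) + (1.0 - t) * np.log(1.0 - p))

t = np.array([1.0, 0.0, 1.0])   # two of three labels are active
p = np.array([0.8, 0.6, 0.7])   # independent sigmoid outputs, not a softmax

p_worse = np.array([0.8, 0.9, 0.7])  # more confident on the *negative* label
```

With the categorical loss, `p_worse` scores exactly the same as `p`, so training would never push that prediction down; the binary loss penalizes it.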
31,664 | What's the practical meaning of alpha in a GLM with gamma family?

The values of the alpha-parameter of the gamma describe the shape of the distribution at any given value of the IV (essentially giving an idea of how skewed the conditional distribution is -- the smaller $\alpha$ is, the more skew).
In a GLM, it doesn't change the mean ...
... so it has no impact on the fitted curves in your plots; it only describes the shape of the distribution about the mean.
The steepness of the descent of the curves in your plots is determined by the coefficient of your IV. You might find it instructive to look at your plots (of the data and the fitted model) on the log scale.
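A small NumPy sketch (my own, with arbitrary numbers) shows the point: holding the GLM-fitted mean fixed, different values of $\alpha$ change only the skewness of the conditional gamma, via the closed-form mean $\alpha\theta$ and skewness $2/\sqrt{\alpha}$:

```python
import numpy as np

mu = 10.0                      # conditional mean fixed by the linear predictor
alphas = np.array([0.5, 2.0, 8.0])
scales = mu / alphas           # theta chosen so that alpha * theta == mu for each alpha

means = alphas * scales        # identical for every shape value
skews = 2.0 / np.sqrt(alphas)  # closed-form gamma skewness; shrinks as alpha grows
```

All three parameterizations put the distribution's mean at 10, yet the shape ranges from strongly right-skewed (alpha = 0.5) to nearly symmetric (alpha = 8).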
31,665 | What is the difference between patch-wise training and fully convolutional training in FCNs?

Basically, fully convolutional training takes the whole MxM image and produces outputs for all subimages in a single ConvNet forward pass. Patchwise training explicitly crops out the subimages and produces outputs for each subimage in independent forward passes. Therefore, fully convolutional training is usually substantially faster than patchwise training.
So, for fully convolutional training, you make updates like this:
Input whole MxM image (or multiple images)
Push through ConvNet -> get an entire map of outputs (maximum size MxM per image, possibly smaller)
Make updates using the loss of all outputs
Now while this is quite fast, it restricts your training sampling process compared to patchwise training: You are forced to make a lot of updates on the same image (actually, all possible updates for all subimages) during one step of your training. That's why they write that fully convolutional training is only identical to patchwise training, if each receptive field (aka subimage) of an image is contained in a training batch of the patchwise training procedure (for patchwise training, you also could have two of ten possible subimages from image A, three of eight possible subimages from image B, etc. in one batch).
Then, they argue that by not using all outputs during fully convolutional training, you get closer to patchwise training again (since you are not making all possible updates for all subimages of an image in a single training step). However, you waste some of the computation. Also, in Section 4.4/Figure 5, they describe that making all possible updates works just fine and there is no need to ignore some outputs.
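The equivalence of the outputs (not of the training dynamics) can be sketched with a toy example, where a single linear convolution filter stands in for the ConvNet (my own simplification): one "fully convolutional" pass over the whole image yields exactly the values you get by cropping each subimage and processing it independently:

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((6, 6))    # the whole "MxM" image
kernel = rng.standard_normal((3, 3))   # one conv filter, no padding
k = kernel.shape[0]
out = image.shape[0] - k + 1

# fully convolutional: one pass over the entire image
full = np.empty((out, out))
for i in range(out):
    for j in range(out):
        full[i, j] = np.sum(image[i:i + k, j:j + k] * kernel)

# patchwise: explicitly crop each subimage, run each through the "net" alone
patchwise = np.empty((out, out))
for i in range(out):
    for j in range(out):
        patch = image[i:i + k, j:j + k].copy()   # an independent forward pass
        patchwise[i, j] = np.sum(patch * kernel)
```

The two result maps are identical; the difference in practice is only how much computation is shared and which outputs end up in a training batch.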
31,666 | Resolving heteroscedasticity in Poisson GLMM

It is difficult to assess the fit of the Poisson (or any other integer-valued GLM, for that matter) with Pearson or deviance residuals, because even a perfectly fitting Poisson GLMM will exhibit inhomogeneous deviance residuals.
This is especially so if you do GLMMs with observation-level REs, because the dispersion created by OL-REs is not considered by the Pearson residuals.
To demonstrate the issue, the following code creates overdispersed Poisson data, that is then fitted with a perfect model. The Pearson residuals look very much like your plot - hence, it may be that there is no problem at all.
This problem is solved by the DHARMa R package, which simulates from the fitted model to transform the residuals of any GL(M)M into a standardized space. Once this is done, you can visually assess / test residual problems, such as deviations from the distribution, residual dependency on a predictor, heteroskedasticity or autocorrelation in the normal way. See the package vignette for worked-through examples. You can see in the lower plot that the same model now looks fine, as it should.
If you still see heteroscedasticity after plotting with DHARMa, you will have to model dispersion as a function of something, which is not a big problem, but would likely require you to move to JAGS or another Bayesian software.
library(DHARMa)
library(lme4)
testData = createData(sampleSize = 200, overdispersion = 1, randomEffectVariance = 1, family = poisson())
fittedModel <- glmer(observedResponse ~ Environment1 + (1|group) + (1|ID), family = "poisson", data = testData, control=glmerControl(optCtrl=list(maxfun=20000) ))
# standard Pearson residuals
plot(fittedModel, resid(., type = "pearson") ~ fitted(.) , abline = 0)
# DHARMa residuals
plot(simulateResiduals(fittedModel))
31,667 | Derivation of Group Lasso

It took me some time to understand this derivation. As usual, once you get the trick, it's actually straightforward.
To solve the group LASSO via block coordinate descent, we solve for each group of variables $\beta_j = (\beta_{j1}, \ldots, \beta_{jp_j})^\top$ separately, i.e. we need to solve
$$
\arg\min_{\beta_j} \frac{1}{2} (\mathbf{Y} - \mathbf{X} \mathbf{\beta})^\top
(\mathbf{Y} - \mathbf{X} \mathbf{\beta}) + \lambda \sum_{j=1}^J d_j \lVert \beta_j \rVert , \\
\Leftrightarrow
\arg\min_{\beta_j} \frac{1}{2} (\mathbf{Y} - \mathbf{X}^{-j} \mathbf{\beta}_{-j} - \mathbf{X}^{j} \beta_j)^\top
(\mathbf{Y} - \mathbf{X}^{-j} \mathbf{\beta}_{-j} - \mathbf{X}^{j} \beta_j) + \lambda \sum_{j=1}^J d_j \lVert \beta_j \rVert , \\
\Leftrightarrow
\arg\min_{\beta_j} \frac{1}{2} (\mathbf{r}_j - \mathbf{X}^{j} \beta_j)^\top
(\mathbf{r}_j - \mathbf{X}^{j} \beta_j) + \lambda \sum_{j=1}^J d_j \lVert \beta_j \rVert , \\
$$
where $\mathbf{X}^{j}$ denotes the columns of $\mathbf{X}$ corresponding to the $j$-th group (with $(\mathbf{X}^{j})^\top \mathbf{X}^{j} = \mathbf{I}$), $\mathbf{X}^{-j}$ denotes the matrix $\mathbf{X}$ without the columns corresponding to the $j$-th group, and $\beta_{-j}$ is the
corresponding vector of coefficients.
Case I
Considering the case $\beta_j \neq \mathbf{0}$ and setting the derivative with respect to $\beta_j$ to zero, we obtain
$$
-(\mathbf{X}^{j})^\top (\mathbf{r}_j - \mathbf{X}^{j} \beta_j)
+ \lambda d_j \frac{\beta_j}{\lVert \beta_j \rVert} = 0
, \\
\Leftrightarrow
-(\mathbf{X}^{j})^\top \mathbf{r}_j + (\mathbf{X}^{j})^\top \mathbf{X}^{j} \beta_j
+ \lambda d_j \frac{\beta_j}{\lVert \beta_j \rVert} = 0
, \\
\Leftrightarrow
-(\mathbf{X}^{j})^\top \mathbf{r}_j + \mathbf{I} \beta_j
+ \frac{\lambda d_j}{\lVert \beta_j \rVert}\beta_j = 0
, \\
\Leftrightarrow
\left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right) \beta_j =
(\mathbf{X}^{j})^\top \mathbf{r}_j
, \\
\Leftrightarrow
\beta_j = \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-1}
(\mathbf{X}^{j})^\top \mathbf{r}_j .
$$
We still have $\beta_j$ on both sides, namely in the form of $\lVert \beta_j \rVert$, which we would like to eliminate.
Let $\mathbf{s}_j = (\mathbf{X}^{j})^\top \mathbf{r}_j$, we take the last equation to express $\lVert \beta_j \rVert$ as
$$
\lVert \beta_j \rVert =
\left\Vert \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-1}
\mathbf{s}_j \right\Vert \\
= \sqrt{ \sum_{i=1}^{p_j} \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-2} s_{ji}^2 } \\
= \sqrt{ \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-2}
\sum_{i=1}^{p_j} s_{ji}^2 } \\
= \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-1}
\lVert \mathbf{s}_j \rVert .
$$
Now, solving for $\lVert \beta_j \rVert$, we get
$$
\lVert \beta_j \rVert \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)
= \lVert \mathbf{s}_j \rVert, \\
\Leftrightarrow
\lVert \beta_j \rVert + \lambda d_j = \lVert \mathbf{s}_j \rVert, \\
\Leftrightarrow
\lVert \beta_j \rVert = \lVert \mathbf{s}_j \rVert - \lambda d_j .
$$
Thus, we can use this formulation of $\lVert \beta_j \rVert$ and substitute it in the equation above:
$$
\beta_j = \left( 1
+ \frac{\lambda d_j}{\lVert \beta_j \rVert} \right)^{-1}
\mathbf{s}_j \\
= \left( 1
+ \frac{\lambda d_j}{\lVert \mathbf{s}_j \rVert - \lambda d_j} \right)^{-1}
\mathbf{s}_j \\
= \left( \frac{\lVert \mathbf{s}_j \rVert}{\lVert \mathbf{s}_j \rVert - \lambda d_j} \right)^{-1} \mathbf{s}_j \\
= \left(1 - \frac{\lambda d_j}{\lVert \mathbf{s}_j \rVert} \right) \mathbf{s}_j .
$$
Case II
The vector norm is non-differentiable at $\mathbf{0}$, therefore we need to follow a different path if $\beta_j = \mathbf{0}$. We consider the subdifferential of $f(\beta_j) = \lVert \beta_j \rVert$ at $\beta_j = \mathbf{0}$, which has the form
$$
\partial f(\beta_j) = \{ \mathbf{v} \in \mathbb{R}^{p_j} \mid
f(\beta_j^\prime) \geq f(\beta_j) + \mathbf{v}^\top (\beta_j^\prime - \beta_j), \forall \beta_j^\prime \in \mathbb{R}^{p_j} \} \\
= \{ \mathbf{v} \in \mathbb{R}^{p_j} \mid
\lVert \beta_j^\prime \rVert \geq \mathbf{v}^\top \beta_j^\prime, \forall \beta_j^\prime \in \mathbb{R}^{p_j} \} .
$$
Thus, a subgradient vector $\mathbf{v}$ of $\lVert \beta_j \rVert$ at $\beta_j = \mathbf{0}$ needs to satisfy $\lVert \mathbf{v} \rVert \leq 1$.
The KKT conditions require $\mathbf{0} \in -(\mathbf{X}^{j})^\top (\mathbf{r}_j - \mathbf{X}^{j} \beta_j)
+ \lambda d_j \mathbf{v}$, so we obtain
$$
-(\mathbf{X}^{j})^\top (\mathbf{r}_j - \mathbf{X}^{j} \beta_j)
+ \lambda d_j \mathbf{v}
= -(\mathbf{X}^{j})^\top \mathbf{r}_j + \lambda d_j \mathbf{v}
= \mathbf{0} , \\
\Leftrightarrow
\lambda d_j \mathbf{v} = \mathbf{s}_j , \\
\Leftrightarrow
\mathbf{v} = \frac{1}{\lambda d_j} \mathbf{s}_j .
$$
Because, $\lVert \mathbf{v} \rVert \leq 1$,
$\lVert \frac{1}{\lambda d_j} \mathbf{s}_j \rVert \leq 1
\Leftrightarrow
\lVert \mathbf{s}_j \rVert \leq \lambda d_j$.
Therefore, $\beta_j$ becomes zero if $\lVert \mathbf{s}_j \rVert \leq \lambda d_j$.
Combining both cases
It is straightforward to combine both cases into a single equation:
$$
\beta_j = \left(1 - \frac{\lambda d_j}{\lVert \mathbf{s}_j \rVert} \right)_+ \mathbf{s}_j ,
$$
where $(\cdot)_+$ denotes the positive part of its argument.
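The combined update can be sanity-checked numerically. This sketch (mine, not from the source of the derivation) uses the fact that with orthonormal $\mathbf{X}^j$ the per-group subproblem reduces, up to a constant, to $\min_\beta \frac{1}{2}\lVert \mathbf{s}_j - \beta\rVert^2 + \lambda d_j \lVert \beta \rVert$, and checks that the block soft-thresholding formula beats random perturbations:

```python
import numpy as np

def objective(beta, s, lam_d):
    # the per-group subproblem after reduction (orthonormal design)
    return 0.5 * np.sum((s - beta) ** 2) + lam_d * np.linalg.norm(beta)

def block_soft_threshold(s, lam_d):
    # beta_j = (1 - lambda * d_j / ||s_j||)_+ * s_j
    norm_s = np.linalg.norm(s)
    if norm_s <= lam_d:
        return np.zeros_like(s)  # Case II: the whole group is zeroed out
    return (1.0 - lam_d / norm_s) * s

rng = np.random.default_rng(0)
s = rng.standard_normal(4)
lam_d = 0.5
beta_hat = block_soft_threshold(s, lam_d)

# the objective is strictly convex, so no perturbation should do better
no_perturbation_wins = all(
    objective(beta_hat, s, lam_d)
    <= objective(beta_hat + 0.1 * rng.standard_normal(4), s, lam_d) + 1e-12
    for _ in range(200)
)
```

This is only a spot check, not a proof, but it covers both cases: a group with $\lVert \mathbf{s}_j \rVert > \lambda d_j$ is shrunk, and one with $\lVert \mathbf{s}_j \rVert \leq \lambda d_j$ is set to exactly zero.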
31,668 | Bayesian vs Frequentist: practical difference w.r.t. machine learning

Once you've fitted the model, it will be what it will be, so I think the difference is prior to that. That is, the models / parameters are fitted differently between the Bayesian and Frequentist approaches. More specifically, the fitted Bayesian parameters will incorporate additional information outside of what is in the data. If you know something about what the parameters are likely to be (and you aren't wrong), that could boost the model's performance. Even if you use an 'uninformative' prior, you will typically find the fitted Bayesian parameters will be shrunk to some degree towards $0$ relative to the fitted Frequentist parameters.
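The shrinkage claim is easy to see in the simplest case, where a zero-mean Gaussian prior on the coefficients makes the Bayesian MAP estimate coincide with ridge regression (a sketch with simulated data; the numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 50, 3
X = rng.standard_normal((n, p))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true + 0.5 * rng.standard_normal(n)

# frequentist fit: ordinary least squares
beta_ols = np.linalg.solve(X.T @ X, X.T @ y)

# Bayesian MAP fit with a zero-mean Gaussian prior on each coefficient == ridge
lam = 5.0
beta_map = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

For any positive prior precision `lam`, the MAP coefficient vector has a strictly smaller norm than the OLS one, which is the shrinkage described above.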
31,669 | How to obtain the angle of rotation produced by a PCA on a 2D dataset?

In a 2D case, the rotation matrix is $2\times 2$ and contains two eigenvectors as its columns. The first eigenvector $(x,y)$ is given by the first column. Its angle to the horizontal axis (abscissa) is given by $$\alpha = \operatorname{arctan} (y/x).$$ As all eigenvectors are scaled to have unit length, this is equal to $$\alpha = \operatorname{arccos} (x/1)=\operatorname{arccos} (x).$$
You might call it a "counterclockwise" angle "from the East direction". If you want a "clockwise" angle "from the North direction", then you need $$\beta = 90^\circ - \alpha=90^\circ - \operatorname{arccos} (x)=
\operatorname{arcsin} (x).$$
In R, it should be:
beta = asin(pc$rotation[1,1])*180/pi
which is more or less your formula too.
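A NumPy sketch (simulated data, my own construction) recovers a known rotation angle this way; using arctan2 of both components sidesteps the quadrant ambiguity of arccos/arcsin:

```python
import numpy as np

rng = np.random.default_rng(0)
theta = np.deg2rad(30.0)                       # true rotation angle
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
cloud = rng.standard_normal((500, 2)) * np.array([5.0, 1.0])  # elongated cloud
data = cloud @ R.T                             # rotate it by theta

# PCA via the eigendecomposition of the covariance matrix
cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
first = vecs[:, np.argmax(vals)]               # eigenvector with the largest variance
if first[0] < 0:                               # eigenvectors have a sign ambiguity
    first = -first

alpha = np.degrees(np.arctan2(first[1], first[0]))  # angle to the horizontal axis
```

Note the sign flip: PCA determines each eigenvector only up to sign, so the recovered angle is really defined modulo 180 degrees.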
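To see the arithmetic in action outside R, here is a small NumPy sketch (the data are simulated for illustration); it checks that the arctan and arccos routes give the same angle once the unit-length first eigenvector is oriented into the first quadrant.

```python
import numpy as np

rng = np.random.default_rng(1)
# 2D data with positive correlation, so the first PC points "north-east"
z = rng.normal(size=500)
data = np.column_stack([z + 0.3 * rng.normal(size=500),
                        z + 0.3 * rng.normal(size=500)])

# Eigendecomposition of the covariance matrix; `vecs` plays the role of
# pc$rotation in R (unit-length eigenvectors in its columns)
vals, vecs = np.linalg.eigh(np.cov(data, rowvar=False))
v1 = vecs[:, np.argmax(vals)]   # first (largest-variance) eigenvector
if v1[0] < 0:                   # eigenvector sign is arbitrary: fix it
    v1 = -v1
x, y = v1

alpha = np.degrees(np.arctan2(y, x))       # counterclockwise angle from "East"
alpha_via_acos = np.degrees(np.arccos(x))  # same angle, since |v1| = 1 and y > 0
beta = 90.0 - alpha                        # clockwise angle from "North"
```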
Why do we need the regularization term for NMF but not for SVD?

NMF does not always include regularization -- for example, see the first two cost functions here. But, regularized NMF can be useful:
If you're willing to add extra constraints beyond nonnegativity, you can produce highly interpretable structures, including the K-means centroids.
If you want to fit an NMF so that $X = LR$ where $L$ is wider than $X$, the model is useless (trivial, ill-posed, non-identifiable) because you could just set $L$ equal to a zero-padded version of $X$ and set $R$ to a zero-padded identity matrix. In this case, an L1 penalty on each factor matrix will help out by encouraging both factors to contribute. For a low-dimensional example, a pair of 3's incurs a lower L1 penalty than a 9 and a 1.
Another possible answer is that SVD is a rigid term and NMF is not. If you added regularization to an SVD, it would no longer be called an SVD, but the term NMF is less prescriptive.
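The trivial-solution argument can be checked numerically. Below is a small NumPy sketch (the matrices are made up to mirror the "a pair of 3's vs. a 9 and a 1" example): both factorizations reconstruct $X$ exactly, yet the L1 penalty is much smaller for the balanced one.

```python
import numpy as np

X = np.array([[9.0, 3.0],
              [3.0, 1.0]])   # rank 1: X = u u^T with u = (3, 1)
k = 3                        # inner dimension wider than X's columns

# Trivial, useless solution: zero-pad X into L, a padded identity into R
L_triv = np.hstack([X, np.zeros((2, k - 2))])
R_triv = np.vstack([np.eye(2), np.zeros((k - 2, 2))])

# A balanced rank-1 solution with the same inner dimension
u = np.array([[3.0], [1.0]])
L_bal = np.hstack([u, np.zeros((2, k - 1))])
R_bal = np.hstack([u, np.zeros((2, k - 1))]).T

l1 = lambda A, B: np.abs(A).sum() + np.abs(B).sum()
exact = np.allclose(L_triv @ R_triv, X) and np.allclose(L_bal @ R_bal, X)
# exact is True, yet l1(L_triv, R_triv) = 18 while l1(L_bal, R_bal) = 8,
# so an L1 penalty steers the fit away from the trivial solution
```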
Is Greedy Layer-Wise Training of Deep Networks necessary for successfully training or is stochastic gradient descent enough?

Pre-training is no longer necessary. Its purpose was to find a good initialization for the network weights in order to facilitate convergence when a high number of layers were employed. Nowadays, we have ReLU, dropout and batch normalization, all of which help to solve the problem of training deep neural networks. Quoting from the above linked reddit post (by the Galaxy Zoo Kaggle challenge winner):
I would say that the “pre-training era”, which started around 2006, ended in the early ’10s when people started using rectified linear units (ReLUs), and later dropout, and discovered that pre-training was no longer beneficial for this type of networks.
From the ReLU paper (linked above):
deep rectifier networks can reach their best performance without requiring any unsupervised pre-training
With that said, it is no longer necessary, but it may still improve performance in some cases where there are many unlabeled samples, as seen in this paper.
Do Cohen's d and Hedges' g apply to the Welch t-test?

Depends on what is meant by "effect size" here. Welch's t-test is used to test the null hypothesis $\mu_1 = \mu_2$ when we cannot or don't want to assume that the variances are homoscedastic within the two groups.
So what is a good effect size measure to go along with this test? Obviously, it should express how different the means are in the sample. So, we could just compute the mean difference (which is an effect size measure) and there are no difficulties in computing it in this case. It's just $$y = \bar{x}_1 - \bar{x}_2,$$ whose variance can be estimated with $$\mbox{Var}[y] = \frac{SD^2_1}{n_1} + \frac{SD^2_2}{n_2}.$$
However, with "effect size", people often mean some kind of standardized measure (like Cohen's d) and the title makes it clear that this is what the asker is after. There are at least two possibilities here.
The first is to standardize the mean difference by the standard deviation from one of the two groups (e.g., if one group is a control and the other a treatment group, then we could standardize using the control group SD). So, we compute $$y = \frac{\bar{x}_1 - \bar{x}_2}{SD_1},$$ whose variance can be estimated with $$\mbox{Var}[y] = \frac{1}{n_1} + \frac{SD^2_2/SD^2_1}{n_2} + \frac{y^2}{2n_1}.$$
Alternatively, if it does not make sense to choose one of the two SDs for the standardization, then we could proceed as suggested by Bonett (2008) and standardize based on the average SD. This is computed by averaging the two variances and then taking the square-root, that is, let $$\overline{SD} = \sqrt{\frac{SD^2_1 + SD^2_2}{2}}.$$ Then we compute $$y = \frac{\bar{x}_1 - \bar{x}_2}{\overline{SD}},$$ whose variance can be estimated with $$\mbox{Var}[y] = \frac{y^2}{8(\overline{SD})^4} \left(\frac{SD_1^4}{n_1-1} + \frac{SD_2^4}{n_2-1}\right) + \frac{SD_1^2 / \overline{SD}^2}{n_1-1} + \frac{SD_2^2 / \overline{SD}^2}{n_2-1}.$$
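These formulas translate directly into code. Here is a minimal Python sketch (the function name and the summary statistics are made up for illustration):

```python
import numpy as np

def bonett_smd(m1, sd1, n1, m2, sd2, n2):
    """Mean difference standardized by the average SD (Bonett, 2008),
    with the sampling-variance estimate given in the formulas above."""
    sd_avg = np.sqrt((sd1**2 + sd2**2) / 2)
    y = (m1 - m2) / sd_avg
    var_y = (y**2 / (8 * sd_avg**4)) * (sd1**4 / (n1 - 1) + sd2**4 / (n2 - 1)) \
            + (sd1**2 / sd_avg**2) / (n1 - 1) \
            + (sd2**2 / sd_avg**2) / (n2 - 1)
    return y, var_y

# hypothetical summary statistics for two heteroscedastic groups
y, var_y = bonett_smd(m1=10.0, sd1=2.0, n1=30, m2=9.0, sd2=4.0, n2=40)
```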
Do Cohen's d and Hedges' g apply to the Welch t-test?

You don't. You calculate the effect size using the data, irrespective of the kind of t-test you used. One package in R is effsize.
library(effsize)
d <- cohen.d(y ~ factor(x), hedges.correction = TRUE)
You can also subtract mu2 from mu1 and divide that difference by the average of the standard deviations. This will result in Cohen's d.
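For completeness, a hedged Python sketch of that "by hand" route, computing the standardized mean difference directly from the data (the average of the SDs is implemented here as the root of the average variance; classical Cohen's d uses the pooled SD instead, and the data are simulated):

```python
import numpy as np

def cohens_d_avg_sd(x1, x2):
    """Mean difference divided by the root-mean-square average of the
    two group SDs, the manual recipe sketched in the answer above."""
    s1, s2 = np.std(x1, ddof=1), np.std(x2, ddof=1)
    return (np.mean(x1) - np.mean(x2)) / np.sqrt((s1**2 + s2**2) / 2)

rng = np.random.default_rng(2)
g1 = rng.normal(loc=1.0, scale=1.0, size=200)
g2 = rng.normal(loc=0.0, scale=2.0, size=200)
d = cohens_d_avg_sd(g1, g2)   # population value: 1 / sqrt((1 + 4) / 2)
```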
How do I know when a Q-learning algorithm converges?

In practice, a reinforcement learning algorithm is considered to converge when the learning curve gets flat and no longer increases.
However, other elements should be taken into account since it depends on your use case and your setup. In theory, Q-Learning has been proven to converge towards the optimal solution. However, in this section of (Sutton and Barto, 1998), since the exploration parameter $\varepsilon$ is not gradually decreased, Q-Learning converges in a premature fashion (before reaching the optimal policy).
In my experience, it is not always obvious how to make $\varepsilon$ and the learning rate $\alpha$ decrease in a way that ensures convergence, and most of the time some tuning is involved here (as you move these parameters, your Q-Learning curve will stabilize at different levels).
Finally, don't forget that Q-Learning was proposed by Watkins in 1989, so it is a little bit dated. It is well suited for learning about reinforcement learning, but not so much for implementing real learning agents. I would recommend exploring more state-of-the-art techniques.
How do I know when a Q-learning algorithm converges?

Do a fixed number of episodes/iterations. This is the simplest approach and will give you a near-optimal solution.
Evaluate on N episodes and take an average. For example, roll out 5 episodes, take the average return $G$ and compare it with the best possible $G_{max}$ (if that information is available) or with the 2-3 previous results, using something like RMSE.
UPD. This one was incorrect: due to the randomness involved in the algorithm you cannot do that; it will only work for value iteration. Track the Q-function updates. Once the largest update becomes smaller than some small number e, you can stop running episodes/iterations.
e = 0.001  # some small convergence threshold
while True:  # improving our Q
    delta = 0  # largest update seen this episode
    for s, a, r, s_next in episode_steps():  # hypothetical transition generator
        old_Q = Q[s, a]
        Q[s, a] = old_Q + alpha * (r + gamma * Q[s_next].max() - old_Q)
        delta = max(delta, abs(Q[s, a] - old_Q))
    if delta < e:
        # Assume Q has converged: no update during the
        # episode exceeded the small threshold `e`
        break
^^ apply the code only for value iteration.
Also check this question for more information.
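To make the stopping rule concrete, here is a self-contained sketch on a made-up deterministic 4-state chain (the MDP, step function, and constants are all invented for illustration): Q-learning runs epsilon-greedy episodes and stops once no update within an episode exceeds a small tolerance.

```python
import numpy as np

# Made-up chain MDP: states 0..3, state 3 terminal; action 0 moves
# right, action 1 stays put; reward 1 on the step that reaches state 3.
def step(s, a):
    s_next = min(s + 1, 3) if a == 0 else s
    return s_next, 1.0 if (s_next == 3 and s != 3) else 0.0

rng = np.random.default_rng(0)
Q = np.zeros((4, 2))
alpha, gamma, eps, tol = 0.5, 0.9, 0.2, 1e-6

while True:
    delta, s = 0.0, 0
    while s != 3:  # run one episode with epsilon-greedy exploration
        a = int(rng.integers(2)) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        target = r + gamma * Q[s_next].max() * (s_next != 3)
        old_q = Q[s, a]
        Q[s, a] += alpha * (target - old_q)
        delta = max(delta, abs(Q[s, a] - old_q))
        s = s_next
    if delta < tol:  # "learning curve" flat: no visible update this episode
        break

# After stopping, the greedy policy moves right everywhere and
# Q(2, right) is close to the true value of 1.
```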
What is an intuitive definition/explanation of an intercept in SEM?

The intercept or mean of a latent variable is arbitrary, like the variance, and is usually fixed to zero if you have a single group model (or a single time point model). The intercept of the measured variable is the expected value when the predictor (the latent variable) is equal to zero.
You anchor the mean of the latent variable to the intercept of the measured variables, and that means that you can compare them over time. But if the intercepts of the measured variables drift apart, you can't anchor the means to them any more, because you don't know where they are anchored.
Enough analogies, let's have a concrete example.
Let's say you want to compare depression symptoms in men and women.
So you ask three questions:
How many days in the past week have you:
Felt lonely
Felt sad
Cried
I create a latent variable based on this, and error and loadings look good. Now I want to compare the means of the latent variables, so I fix the male latent mean to zero. I constrain the intercepts of the three measured variables to be equal across groups.
Women and men do not differ on how much they have felt lonely, how much they have felt sad, but then we find that women say that they have cried more than men.
Does that mean that the women have 'more' depression than the men? If we anchor to crying - yes. If we anchor to the other two variables - no. We don't have intercept invariance, and because of that, we can't compare the means of the latent variables.
Another (only slightly different) way to think about it: the intercept of the measured variable is the expected value of the variable if the mean of the factor is equal to zero. The predicted values for the measured variables should be the same between men and women when the values of the factors are equal (that is, when the value of the factors is zero). But the predicted values of the measured variables are not equal when the factors are equal. Some are equal (in our example, 1 and 2), one is not (3).
Intuition behind strong vs weak laws of large numbers (with an R simulation)

It might be clearer to state the weak law as $$\overline{Y}_n\ \xrightarrow{P}\ \mu \,\textrm{ when }\ n \to \infty , \text{ i.e. } \forall \varepsilon \gt 0: \lim_{n\to\infty}\Pr\!\left(\,|\overline{Y}_n-\mu| \lt \varepsilon\,\right) = 1$$
and the strong law as
$$\overline{Y}_n\ \xrightarrow{a.s.}\ \mu \,\textrm{ when }\ n \to \infty , \text{ i.e. } \Pr\!\left( \lim_{n\to\infty}\overline{Y}_n = \mu \right) = 1$$
You might think of the weak law as saying that the sample average is usually close to the mean when the sample size is big, and the strong law as saying the sample average almost certainly converges to the mean as the sample size grows.
The difference happens when failures of the sample average to be close to the mean are big enough to prevent convergence.
As an illustration using R, take Wikipedia's first example, with $X$ being an exponentially distributed random variable with rate parameter $1$ and $Y= \dfrac{\sin(X) e^X}{X}$, so $E[Y]=\frac{\pi}{2}$. Let's consider $100$ cases where the sample size is $10000$:
set.seed(1)
cases <- 100
samplesize <- 10000
Xmat <- matrix(rexp(samplesize*cases, rate=1), ncol=samplesize)
Ymat <- sin(Xmat) * exp(Xmat) / Xmat
plot(samplemeans <- rowMeans(Ymat),
main="most sample averages close to expectation")
abline(h=pi/2, col="red")
but now look at the failure of the running sample average over the same $1$ million observations to get to the mean and stay there
plot(cumsum(Ymat)/(1:(samplesize*cases)),
main="running sample average not always converging to expectation")
abline(h=pi/2, col="red")
Are tree estimators ALWAYS biased?

A decision tree model is no more "always biased" than any other learning model.
To illustrate, let's look at two examples. Let $X$ be a random uniform variable on $[0, 1]$. Here are possible statistical processes
Truth 1: $Y$ given $X$ is an indicator function of $X$, plus noise:
$$ Y \mid X \sim I_{< .5}(X) + N(0, 1) $$
Truth 2: $Y$ given $X$ is a linear function of $X$, plus noise:
$$ Y \mid X \sim X + N(0, 1) $$
If we fit a decision tree in both situations, the model is un-biased in the first situation, but is biased in the second. This is because a one-split binary tree can recover the true underlying data model in the first situation. In the second, the best a tree can do is approximate the linear function by stair-stepping at ever finer intervals - a tree of finite depth can only get so close.
If we fit a linear regression in both situations, the model is biased in the first situation, but is un-biased in the second.
So, to know whether a model is biased, you need to know what the true underlying data mechanism is. In real life situations, you just never know this, so you can never really say whether a model in real life is biased or not. Sometimes, we think we are totally right for a long time, but then the bias emerges with deeper understanding (Newtonian Gravity to Einstein Gravity is at least an apocryphal example).
In some sense, we expect most real world processes (with some exceptions) to be so unknowable that a reasonable enough approximation of the truth is that all our models are biased. I somehow doubt the question is asking for a deep philosophical discussion about the essential futility of modeling complex statistical processes, but it is fun to think about.
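The two "truths" can be checked with a tiny experiment. This sketch is invented for illustration and uses a hand-rolled one-split regression tree rather than a library: fitted to noiseless versions of both processes, the stump represents the indicator function exactly but is systematically off for the linear one.

```python
import numpy as np

def stump_fit(x, y):
    """One-split regression tree: pick the threshold minimizing SSE and
    predict the mean of y on each side of it."""
    best = None
    for t in np.unique(x)[1:]:
        left, right = y[x < t], y[x >= t]
        sse = ((left - left.mean())**2).sum() + ((right - right.mean())**2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, m_left, m_right = best
    return lambda q: np.where(q < t, m_left, m_right)

x = np.linspace(0.001, 0.999, 200)

# Truth 1: indicator function - the stump represents it exactly (no bias)
y_step = (x < 0.5).astype(float)
err_step = np.abs(stump_fit(x, y_step)(x) - y_step).max()

# Truth 2: linear function - the best stump is systematically off (bias)
y_lin = x.copy()
err_lin = np.abs(stump_fit(x, y_lin)(x) - y_lin).max()
```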
Are tree estimators ALWAYS biased?

The fact that some points in your data are still not being predicted could be due to something called irreducible error. The theory is that in machine learning there is reducible and irreducible error. The idea of irreducible error is that no matter how good your model is, it won't ever be perfect. This is due to a few reasons. One, no matter how robust your training features are, there will always be some hidden feature affecting the output that your training data doesn't include. Another reason is that in almost all data, there are bound to be some outliers. You can always try to make your models as robust to outliers as possible, but no matter how hard you try, outliers will always exist. (This doesn't mean that you shouldn't think about outliers when creating your models.) And one final detail is that you don't actually want your model to overfit (you may have already known this).
31,680 | ARIMAX vs VAR comparison | From a theoretical perspective, VAR does not include moving-average (MA) terms and approximates any existing MA patterns with extra autoregressive lags, which is a less parsimonious solution than directly including MA terms as in an ARIMAX model. On the other hand, VAR can be estimated using OLS or GLS, which are generally fast, while ARIMAX requires maximum likelihood estimation, which is generally slow.
Whether VAR or ARIMAX provides a better representation of the underlying process in your application is an empirical question. You could try fitting both and doing some validation. E.g. construct a rolling window within your sample, fit a VAR and an ARIMAX model in it, and predict one step ahead. Roll the window all the way, collect the one-step-ahead forecasts from the two models and compare their accuracy. The model generating the higher accuracy is to be preferred.
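The rolling-window exercise described above can be sketched in a toy form. This is not a VAR-vs-ARIMAX comparison (those require proper estimation routines); it only illustrates the mechanics with two made-up competing forecasters on simulated AR(1) data: roll a window through the sample, produce one-step-ahead forecasts from each model, and compare their accuracy.

```python
import random

random.seed(42)

# Simulate an AR(1) series: y_t = 0.8 * y_{t-1} + noise
y = [0.0]
for _ in range(300):
    y.append(0.8 * y[-1] + random.gauss(0, 1))

def ar1_forecast(window):
    """Fit y_t = phi * y_{t-1} by least squares (no intercept) and predict one step ahead."""
    x, z = window[:-1], window[1:]
    phi = sum(a * b for a, b in zip(x, z)) / sum(a * a for a in x)
    return phi * window[-1]

def mean_forecast(window):
    """Naive benchmark model: predict the window mean."""
    return sum(window) / len(window)

w = 50  # rolling-window length
errs_ar1, errs_mean = [], []
for t in range(w, len(y)):
    window = y[t - w:t]          # data available at time t
    actual = y[t]                # the value being forecast
    errs_ar1.append(abs(ar1_forecast(window) - actual))
    errs_mean.append(abs(mean_forecast(window) - actual))

mae_ar1 = sum(errs_ar1) / len(errs_ar1)
mae_mean = sum(errs_mean) / len(errs_mean)
print(f"AR(1) MAE: {mae_ar1:.3f}, mean-forecast MAE: {mae_mean:.3f}")
```

The model with the lower out-of-sample MAE (here the AR(1) forecaster, since the data really are AR(1)) is the one to prefer; the same loop works with any pair of fitted models.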
Rob J. Hyndman has a brief note on ARIMAX and related models on his blog, "The ARIMAX model muddle"; perhaps it will be of help.
31,681 | What is the difference between covariance matrix and the variance-covariance matrix? | Covariance matrix = Variance-covariance matrix
31,682 | What is the difference between covariance matrix and the variance-covariance matrix? | In such matrices, you find variances (on the main diagonal) and covariances (off the diagonal). So "variance-covariance matrix" is completely fine, but a bit redundant, as a variance is a special kind of covariance ($Var(X) = Cov(X, X)$). So "covariance matrix" is also correct, while being shorter.
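A quick numeric illustration of the point above, using nothing beyond the definitions: building the matrix from pairwise covariances puts the variances on the main diagonal, since $Cov(X, X) = Var(X)$.

```python
import random
import statistics

random.seed(1)
n = 1000
x = [random.gauss(0, 1) for _ in range(n)]
y = [0.5 * xi + random.gauss(0, 1) for xi in x]

def cov(a, b):
    """Sample covariance with the n - 1 denominator."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    return sum((p - ma) * (q - mb) for p, q in zip(a, b)) / (len(a) - 1)

# The 2x2 (variance-)covariance matrix of (x, y)
C = [[cov(x, x), cov(x, y)],
     [cov(y, x), cov(y, y)]]

# The main diagonal holds the variances: Cov(X, X) = Var(X)
print(C[0][0], statistics.variance(x))  # essentially the same number
```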
31,683 | Proving similarities of two time series | Here's what I understand the situation to be. You have one model, which you call your simulation, that you are confident generates a set of data that accurately represents what will actually happen in the epidemic. For some reason (presumably because it's expensive or slow to build and run, or there's theoretical interest in a simple equation that generates similar results to the complex model), you have an alternative model (the one you call a model) which can also generate a set of data, and you want to check whether the version generated by this model is close to the version generated by the known-good model.
I'm also presuming that each time either of the models generates data, it produces a similar and fairly regular trend from run to run. Otherwise (for example, if there's a random "take off" moment where the series suddenly breaks) there's another big complication.
First, the method of comparing parameters from an auto-fit ARIMA is a bad one (I'm guessing the reason that answer you linked to survived is that it is on Stack Overflow rather than Cross Validated, where the statistical problems would probably have been picked up). The reason is that the same time series can get good fits with quite different combinations of auto-regressive and moving average values. There's no obvious way to look at the "similarity" of two different ARIMAs - ones that look very different may in fact be similar. As @IrishStat says in his answer to the second question you linked to, you could construct an F-test of a common set of parameters for both models, but that would require something quite a bit more complex than auto.arima(). And even then you might find that they don't have common parameters, but deliver similar predictions of the trend, which is what you are actually interested in, rather than the details of the ARMA process that is generating some of the random noise around the trend.
So what would I recommend instead? It sounds like you aren't worried about the small fluctuations but only the overall trend. I would compare a smoothed version of the trend of each dataset, and start by making a visual comparison. In the case you've got, this shows that they're definitely not the same time series; one of them hovers around 1478, the other around zero, and that's good enough for me. But if there were some ambiguity, I would probably sum the squares or absolute values of the difference between the two smoothed series and determine if that was close enough, for some arbitrarily chosen meaning of "close enough" which in the end will have to depend on your domain, and the costs of being wrong. Definitely I'd start with the graphic.
If you want a more objective benchmark, I would try running both simulations multiple times and seeing how much difference (sum of squares or absolute differences) there is between different instances of the same simulation, and comparing that to the inter-simulation differences. If they're the same, that shows that you can't tell which model produced the simulation. If they're different, you still have to make a judgement call about how different is too much, but you'll have some numbers to help you.
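A minimal sketch of that benchmark idea (in Python rather than R, with a crude trailing moving average standing in for a proper smoother; the AR parameters and the 1478 offset are made up to mimic the example):

```python
import random

def simulate(phi, level, n=500, seed=None):
    """One run of a toy AR(1)-plus-level 'simulation'."""
    rng = random.Random(seed)
    y, prev = [], 0.0
    for _ in range(n):
        prev = phi * prev + rng.gauss(0, 1)
        y.append(level + prev)
    return y

def smooth(y, w=50):
    """Crude trailing moving-average smoother (stand-in for a loess-type smoother)."""
    return [sum(y[max(0, i - w):i + 1]) / (i + 1 - max(0, i - w))
            for i in range(len(y))]

def distance(a, b):
    """Sum of squared differences between two smoothed trends."""
    sa, sb = smooth(a), smooth(b)
    return sum((u - v) ** 2 for u, v in zip(sa, sb))

# Two runs of the same model vs. one run of each model
good1 = simulate(0.8, 0.0, seed=1)
good2 = simulate(0.8, 0.0, seed=2)
test1 = simulate(0.8, 1478.0, seed=3)

within = distance(good1, good2)   # same model, different runs
between = distance(good1, test1)  # different models

print(f"within-model distance: {within:.1f}, between-model distance: {between:.1f}")
```

If the between-model distance sits well outside the spread of within-model distances (as it does dramatically here), you can tell which model produced a given simulation; if the two distributions overlap, the models are effectively interchangeable for trend purposes.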
While fitting ARIMA models is a bad idea for identifying similarity in trends, it's a good way to let me generate some data, so pasted below is how I did that. I'm guessing something's wrong with the data - maybe you fit the ARIMA model to a transformed or differenced version of the data, in which case you might want to go the next step of quantifying the difference between the two trends.
library(forecast)
library(ggplot2)
library(tidyr)
library(dplyr)
# generate some data from two AR(2) models
good_model <- arima.sim(model = list(ar = c(1.4848, -0.5619)), n = 1000)
test_model <- arima.sim(model = list(ar = c(1.5170, -0.7996)), n = 1000) + 1478

# combine into long format for plotting
combined <- data.frame(good = good_model, test = test_model, time = 1:1000) %>%
  gather(variable, value, -time) %>%
  mutate(value = as.numeric(value))

# plot both series, with smoothed trend lines on top
ggplot(combined, aes(x = time, colour = variable, y = value)) +
  geom_line(alpha = 0.5) +
  geom_smooth(se = FALSE, size = 2) +
  theme_minimal()
Edit
I blogged about this at http://ellisp.github.io/blog/2015/09/20/timeseries-differences , basically just exploring how you might use simulation brute force to determine if two models are similar. However, I reach the conclusion that you still need a (probably) subjective decision on a cost function - obviously your two methods will be different, but how much difference are you prepared to put up with?
31,684 | Is there a multiple testing problem when performing t-tests for multiple coefficients in linear regression? | For the multiple testing problem it might be good to take a look at Family-wise error boundary: Does re-using data sets on different studies of independent questions lead to multiple testing problems?.
In your example above, if you estimate a regression on one sample, then you can, with a t-test only decide on the significance of an individual coefficient, so, yes, there is a multiple testing problem if you draw conclusions for multiple coefficients, based on multiple t-tests.
Let us call the coefficients $\beta_i, i = 1, 2, \dots, 5$; then you can test $H_0^{(1)}: \beta_1 = 0$ versus $H_1^{(1)}: \beta_1 \ne 0$ with a t-test and conclude that $\beta_1$ is significant. Note that if you cannot reject $H_0^{(1)}$, you cannot conclude that $\beta_1$ is zero (see What follows if we fail to reject the null hypothesis?).
So if you want to find 'statistical evidence' for $\beta_1$ not being zero, then your $H_1^{(1)}$ must be the expression that you want to 'prove', i.e. $H_1^{(1)}: \beta_1 \ne 0$ and then $H_0^{(1)}$ is the opposite, i.e. $\beta_1=0$. As you assume $H_0^{(1)}$ to be true (to derive a statistical contradiction) you have a fixed value for the parameter $\beta_1=0$ and therefrom it follows that you know the distribution of the estimator $\hat{\beta}_1$ (see theory on linear regression) and you can compute p-values.
Let us now take the case where you want to show that $(\beta_1 \ne 0 \text{ and } \beta_2 \ne 0)$; then this must be your $H_1^{(1,2)}$, and the opposite $H_0^{(1,2)}$ is that either $(\beta_1 = 0 \text{ or } \beta_2 = 0)$. As there is an 'or' in there, you cannot fix all the parameters of the combined distribution of $(\hat{\beta}_1, \hat{\beta}_2)$!
Can you apply multiple testing procedures? Most of them assume that the individual p-values are independent; in this example $\hat{\beta}_1$ and $\hat{\beta}_2$ cannot be shown to be independent!
But in an advanced book on econometrics (e.g. W. H. Greene, "Econometric Analysis") you will find applicable tests for J (simultaneous) linear restrictions ($\beta_i=0, i=1,2,3,4,5$ is a special type of 5 linear restrictions) that avoid the multiple testing problem.
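To see the size of the problem, here is a toy simulation under the (strong, and as noted above often false) assumption that the five test statistics are independent: testing each of 5 null coefficients at the 5% level inflates the familywise error rate towards $1 - 0.95^5 \approx 0.23$, while a Bonferroni-style cut-off at $\alpha/5$ per test restores it to about 5%.

```python
import random

random.seed(0)
k = 5           # number of coefficients tested
z_crit = 1.96   # per-test 5% two-sided critical value (normal approximation)
z_bonf = 2.576  # critical value for alpha/k = 0.01 per test

n_sims = 20000
fwer_naive = fwer_bonf = 0
for _ in range(n_sims):
    # Under H0, all five (assumed independent) test statistics are N(0, 1)
    zs = [random.gauss(0, 1) for _ in range(k)]
    if any(abs(z) > z_crit for z in zs):   # at least one false rejection
        fwer_naive += 1
    if any(abs(z) > z_bonf for z in zs):   # with the Bonferroni cut-off
        fwer_bonf += 1

print(f"naive FWER ~ {fwer_naive / n_sims:.3f}")       # roughly 0.23
print(f"Bonferroni FWER ~ {fwer_bonf / n_sims:.3f}")   # roughly 0.05
```

With correlated coefficient estimates the naive inflation would differ, which is exactly why the joint F-test of the J linear restrictions is the cleaner tool.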
31,685 | Is there a multiple testing problem when performing t-tests for multiple coefficients in linear regression? | There may be a few additional aspects worth considering (which are a little too long for a comment).
Whether or not there is a multiple testing problem in a given application quite strongly depends on which coefficients a researcher looks at. In many applications, one is only interested in 1-2 key variables, and the others only act as "controls". Say, in a fixed-effects panel data model we may feel that we need individual-specific intercepts to control for unobserved heterogeneity, but we are typically not really interested in these $N$ fixed effects per se. On the other hand, in, for example, growth econometrics, we sift through all possible determinants of growth, and as such we are willing to look at all significant variables. In the latter case, we do have a multiple testing problem, but not necessarily in the former.
I would argue that there are indeed several high-powered (at least, higher-powered than Bonferroni) alternatives for performing such a model selection exercise. These include Bayesian model averaging, extreme-bounds analysis, general-to-specific modelling, penalized methods (the lasso and related methods), and also methods deriving directly from the multiple testing literature. The latter group includes classical ones based on the Benjamini-Hochberg method, but also more recent bootstrap-based methods. To do some shameless self-promotion, these are compared and applied in a paper of mine.
31,686 | Multilevel multivariate meta-regression | As you note, the model that adds random effects for each study and random effects for each outcome is a model that accounts for hierarchical dependence. This model allows the true outcomes/effects within a study to be correlated. This is the Konstantopoulos (2011) example you link to.
But this model still assumes that the sampling errors of the observed outcomes/effects within a study are independent, which is definitely not the case when those outcomes are assessed within the same individuals. So, as in the Berkey et al. (1998) example you link to, ideally you need to construct the whole variance-covariance matrix of the sampling errors (with the sampling variances along the diagonal). The chapter by Gleser and Olkin (2009) from the Handbook of research synthesis and meta-analysis describes how the covariances can be computed for various outcomes measures (including standardized mean differences). The analyses/methods from that chapter are replicated here (you are dealing with the multiple-endpoint case).
And as you note, doing this requires knowing how the actual measurements within studies are correlated. Using your example, you would need to know for study 1 how strong the correlation was between the two measurements for "Phonological loop" (more accurately, there are two correlations, one for the first and one for the second group, but we typically assume that the correlation is the same for the two groups), and how strongly those measurements were correlated with the "Central Executive" measurements. So, three correlations in total.
Obtaining/extracting these correlations is often difficult, if not impossible (as they are often not reported). If you really cannot obtain them (even after contacting study authors in an attempt to obtain the missing information), there are several options:
One can still often make a rough/educated guess how large the correlations are. Then we use those 'guestimates' and conduct sensitivity analyses to ensure that conclusions remain unchanged when the values are varied within a reasonable range.
One could use robust methods -- in essence, we then consider the assumed variance-covariance matrix of the sampling errors to be misspecified (i.e., we assume it is diagonal, when in fact we know it isn't) and then estimate the variance-covariance matrix of the fixed effects (which are typically of primary interest) using consistent methods even under such a model misspecification. This is in essence the approach described by Hedges, Tipton, and Johnson (2010) that you mentioned.
Resampling methods (i.e., bootstrapping and permutation testing) may also work.
There are also some alternative models that try to circumvent the problem by means of some simplification of the model. Specifically, in the model/approach by Riley and colleagues (see, for example: Riley, Abrams, Lambert, Sutton, & Thompson, 2007, Statistics in Medicine, 26, 78-97), we assume that the correlation among the sampling errors is identical to the correlation among the underlying true effects, and then we just estimate that one correlation. This can work, but whether it does depends on how well that simplification matches up with reality.
There is always another option: Avoid any kind of statistical dependence via data reduction (e.g., selecting only one estimate, conducting separate analyses for different outcomes). This is still the most commonly used approach for 'handling' the problem, because it allows practitioners to stick to (relatively simple) models/methods/software they are already familiar with. But this approach can be wasteful and limits inference (e.g., if we conduct two separate meta-analyses for outcomes A and B, we cannot test whether the estimated effect is different for A and B unless we can again properly account for their covariance).
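As a concrete sketch of the 'guestimate' option above: for the multiple-endpoint case, the covariance between the sampling errors of two outcomes from the same study is approximately $r \sqrt{v_1 v_2}$, where $r$ is the (guessed) within-study correlation and $v_1, v_2$ are the observed sampling variances. All numbers below are hypothetical:

```python
import math

# Hypothetical sampling variances of two outcomes from the same study
v1, v2 = 0.05, 0.08

# Guestimated correlation between the two measurements within a study
# (a made-up value -- the point is to vary it in a sensitivity analysis)
r = 0.6

# Approximate covariance of the two sampling errors, and the study's
# 2x2 block of the variance-covariance matrix of the sampling errors
cov12 = r * math.sqrt(v1 * v2)
V = [[v1, cov12],
     [cov12, v2]]

# Sensitivity analysis: recompute the covariance over a plausible range of r
for r_try in (0.3, 0.6, 0.9):
    print(r_try, round(r_try * math.sqrt(v1 * v2), 4))
```

Stacking one such block per study along the diagonal gives the full variance-covariance matrix; refitting the model at each value of $r$ then shows how sensitive the conclusions are to the guess.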
Note: The same issue was discussed on the R-sig-mixed-models mailing list and in essence I am repeating what I already posted there. See here.
For the robust method, you could try the robumeta package. If you want to stick to metafor, you will find these, blog, posts by James Pustejovsky of interest. He is also working on another package, called clubSandwich which adds some additional small-sample corrections. You can also try the development version of metafor (see here) -- it includes a new function called robust() which you can use after you have fitted your model to obtain cluster robust tests and confidence intervals. And you can find some code to get you started with bootstrapping here. | Multilevel multivariate meta-regression | As you note, the model that adds random effects for each study and random effects for each outcome is a model that accounts for hierarchical dependence. This model allows the true outcomes/effects wit | Multilevel multivariate meta-regression
As you note, the model that adds random effects for each study and random effects for each outcome is a model that accounts for hierarchical dependence. This model allows the true outcomes/effects within a study to be correlated. This is the Konstantopoulos (2011) example you link to.
But this model still assumes that the sampling errors of the observed outcomes/effects within a study are independent, which is definitely not the case when those outcomes are assessed within the same individuals. So, as in the Berkey et al. (1998) example you link to, ideally you need to construct the whole variance-covariance matrix of the sampling errors (with the sampling variances along the diagonal). The chapter by Gleser and Olkin (2009) from the Handbook of research synthesis and meta-analysis describes how the covariances can be computed for various outcomes measures (including standardized mean differences). The analyses/methods from that chapter are replicated here (you are dealing with the multiple-endpoint case).
And as you note, doing this requires knowing how the actual measurements within studies are correlated. Using your example, you would need to know for study 1 how strong the correlation was between the two measurements for "Phonological loop" (more accurately, there are two correlations, one for the first and one for the second group, but we typically assume that the correlation is the same for the two groups), and how strongly those measurements were correlated with the "Central Executive" measurements. So, three correlations in total.
Obtaining/extracting these correlations is often difficult, if not impossible (as they are often not reported). If you really cannot obtain them (even after contacting study authors in an attempt to obtain the missing information), there are several options:
One can still often make a rough/educated guess how large the correlations are. Then we use those 'guestimates' and conduct sensitivity analyses to ensure that conclusions remain unchanged when the values are varied within a reasonable range.
One could use robust methods -- in essence, we then consider the assumed variance-covariance matrix of the sampling errors to be misspecified (i.e., we assume it is diagonal, when in fact we know it isn't) and then estimate the variance-covariance matrix of the fixed effects (which are typically of primary interest) using consistent methods even under such a model misspecification. This is in essence the approach described by Hedges, Tipton, and Johnson (2010) that you mentioned.
Resampling methods (i.e., bootstrapping and permutation testing) may also work.
There are also some alternative models that try to circumvent the problem by means of some simplification of the model. Specifically, in the model/approach by Riley and colleagues (see, for example: Riley, Abrams, Lambert, Sutton, & Thompson, 2007, Statistics in Medicine, 26, 78-97), we assume that the correlation among the sampling errors is identical to the correlation among the underlying true effects, and then we just estimate that one correlation. This can work, but whether it does depends on how well that simplification matches up with reality.
There is always another option: Avoid any kind of statistical dependence via data reduction (e.g., selecting only one estimate, conducting separate analyses for different outcomes). This is still the most commonly used approach for 'handling' the problem, because it allows practitioners to stick to (relatively simple) models/methods/software they are already familiar with. But this approach can be wasteful and limits inference (e.g., if we conduct two separate meta-analyses for outcomes A and B, we cannot test whether the estimated effect is different for A and B unless we can again properly account for their covariance).
Note: The same issue was discussed on the R-sig-mixed-models mailing list and in essence I am repeating what I already posted there. See here.
For the robust method, you could try the robumeta package. If you want to stick to metafor, you will find these, blog, posts by James Pustejovsky of interest. He is also working on another package, called clubSandwich which adds some additional small-sample corrections. You can also try the development version of metafor (see here) -- it includes a new function called robust() which you can use after you have fitted your model to obtain cluster robust tests and confidence intervals. And you can find some code to get you started with bootstrapping here. | Multilevel multivariate meta-regression
31,687 | Systematic/measurement error on a linear regression | We can model the experiment as
$$x_i=x_i^*+\tilde u_i$$
$$y_i=y_i^*+\tilde v_i$$
$$\tilde u_i=\bar u + u_i$$
$$\tilde v_i=\bar v + v_i$$
where $x_i^*, y_i^*$ denote the true values, $\tilde u_i,\tilde v_i$ are measurement errors, $\bar u,\bar v$ are their "fixed" components, independent of the observation (these could arise from wrong calibration of the sensors), and $u_i,v_i$ vary from observation to observation, corresponding to the many possible factors which we treat as random.
Simple linear regression is
$$y_i^*=\alpha+\beta x_i^*+e_i$$
and the OLS estimate of the slope is
$$\hat\beta=\frac{Cov(x^*,y^*)}{Var(x^*)}$$
What we obtain, however, is $$\tilde\beta=\frac{Cov(x,y)}{Var(x)}=\frac{Cov(x^* + u,y^*+ v)}{Var(x^* + u)}=\frac{Cov(x^*,y^*)+Cov(x^*,v)+Cov(y^*,u)+Cov(u,v)}{Var(x^*) + Var(u) + 2Cov(x^*,u)}$$
Now let's assume that $v,u$ are uncorrelated with $x^*,y^*$ and with each other (a rather strong assumption that can be relaxed if we have more information about the nature of the errors). Then our estimate is
$$\tilde\beta=\beta\frac{\sigma^2_{x^*}}{\sigma^2_{x^*}+\sigma^2_{u}}\approx\beta\frac{\hat\sigma^2_x-\hat\sigma^2_u}{\hat\sigma^2_x}=\beta\hat\lambda$$
We can estimate $\hat\sigma^2_x$ as the sample variance of the $x_i$. We also need to estimate $\sigma^2_u$. If we have an experiment where we can observe each $x^*_i$ multiple times, then one simple approach is to estimate $\sigma^2_u = E[\hat\sigma^2_x \mid x^*_i]$, i.e., the average of the within-replicate sample variances.
Now we can take our $\hat\sigma^2_{\tilde\beta}$, calculated with, for example, the bootstrap method, and since $\hat\beta =\tilde\beta /\hat\lambda$, correct it so that $$\hat\sigma^2_{\hat\beta}=\frac{\hat\sigma^2_{\tilde\beta}}{\hat\lambda^2}$$
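To make the attenuation and the $\hat\lambda$ correction concrete, here is a small simulation sketch (written in Python for illustration; all parameter values and names are made up, and $\sigma^2_u$ is treated as known rather than estimated from replicates):

```python
import random
import statistics

random.seed(1)
n = 20000
beta, sigma_x, sigma_u = 2.0, 1.0, 0.5   # true slope, sd of x*, sd of the noise u

x_star = [random.gauss(0.0, sigma_x) for _ in range(n)]          # true predictor x*
y = [1.0 + beta * xs + random.gauss(0.0, 0.3) for xs in x_star]  # alpha = 1, small e_i
x_obs = [xs + random.gauss(0.0, sigma_u) for xs in x_star]       # observed, noisy x

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

beta_tilde = ols_slope(x_obs, y)       # attenuated: about beta * sigma_x^2 / (sigma_x^2 + sigma_u^2)
var_x = statistics.variance(x_obs)     # sample variance of the observed x
lam = (var_x - sigma_u ** 2) / var_x   # lambda-hat (sigma_u^2 taken as known here)
beta_hat = beta_tilde / lam            # corrected estimate, close to the true beta
```

With these settings the naive slope should come out near $\beta\,\sigma^2_{x^*}/(\sigma^2_{x^*}+\sigma^2_u)=1.6$, while the corrected estimate should sit close to the true $\beta=2$.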
31,688 | Systematic/measurement error on a linear regression | I think the answer given by @yshilov is definitely awesome: by folding the measurement error into the error term, it deduces the result
$$\tilde \beta = \beta \frac{\sigma_x^2}{\sigma_x^2 + \sigma_u^2}$$
To elaborate, this estimator has the special property of being biased, but biased towards 0. Specifically, for linear regression, $E(\hat \beta_1)=\beta_1 \cdot\Big[\frac{\sigma_x^2+\sigma_{x\delta}}{\sigma_x^2+2\sigma_{x\delta}+\sigma_{\delta}^2}\Big]$
The proof is as follows:
in simple linear regression, recall
$$\hat \beta_1 = \frac{\sum_{i=1}^n(x_i-\bar x)y_i}{\sum_{i=1}^n(x_i-\bar x)^2}$$
In the case of measurement error, we have $x_i^O=x_i^A+\delta_i$, $y_i^O=y_i^A+\epsilon_i$ (superscript $O$ for observed and $A$ for actual values), and $y_i^A=\beta_0 +\beta_1 x_i^A$, so we get
$$y_i^O=\beta_0+\beta_1(x_i^O-\delta_i)+\epsilon_i=\beta_0+\beta_1x_i^O+(\epsilon_i-\beta_1 \delta_i)$$
Assuming that $E(\epsilon_i)=E(\delta_i)=0$, $var(\epsilon_i)=\sigma_{\epsilon}^2$, $var(\delta_i)=\sigma_{\delta}^2 = \frac{1}{n}\sum_{i=1}^n(\delta_i-\bar \delta)^2$, the variance of the true predictor values $\sigma_{x}^2=\frac{\sum(x_i^A-\bar {x^A})^2}{n}$, and the covariance of the true predictor and the error $\sigma_{x \delta}=cov(x^A,\delta)= \frac{1}{n}\sum_{i=1}^n(x_i^A-\bar {x_i^A})(\delta_i- \bar \delta)$, then
$$cov(x_i^O,\delta)=E(x_i^O\delta)-E(x_i^O)\cdot E(\delta)=E(x_i^O\delta)=E[(x_i^A+\delta)\delta]=E(x_i^A \delta)+E(\delta^2)$$
$$=\big[E(x_i^A \delta)-E(x_i^A)\cdot E(\delta)\big]+\big[var(\delta)+[E(\delta)]^2\big]=cov(x_i^A,\delta)+\sigma_{\delta}^2=\sigma_{x\delta}+\sigma_{\delta}^2$$
Then, using $\bar x = E(x_i)$ and the bilinearity of covariance, the expectation of $\hat \beta_1$ is
$$E(\hat \beta_1)=E\Big[\frac{\sum_{i=1}^n(x_i^O-\bar x^O)y_i^O}{\sum_{i=1}^n(x_i^O-\bar x^O)^2}\Big]=\frac{E(\sum_{i=1}^nx^O_iy_i^O)-E(\sum_{i=1}^n \bar x^Oy_i^O)}{\sum_{i=1}^n E\big[(x_i^O-E(x_i^O))^2\big]}=\frac{E(\sum_{i=1}^nx_i^Oy_i^O)-E(x_i^O)\cdot E(\sum_{i=1}^n y_i^O)}{\sum_{i=1}^nvar(x_i^O)}$$
$$=\frac{\sum_{i=1}^ncov(y_i^O,x_i^O)}{\sum_{i=1}^nvar(x_i^O)}=\frac{\sum_{i=1}^ncov(\beta_0+\beta_1x_i^O+\epsilon_i-\beta_1\delta_i,~x_i^O)}{\sum_{i=1}^nvar(x_i^O)}=\frac{\beta_1\cdot \sum_{i=1}^nvar(x_i^O)-\beta_1\cdot \sum_{i=1}^ncov(x_i^O, \delta_i)}{\sum_{i=1}^nvar(x_i^O)}$$
$$=\beta_1 \cdot \Big[ 1-\frac{{\sum_{i=1}^ncov(x_i^O,\delta_i)}/{n}}{\sum_{i=1}^nvar(x_i^A+\delta_i)/n}\Big]=\beta_1 \cdot\Big[1-\frac{\sigma_{x\delta}+\sigma_{\delta}^2}{\sigma_x^2+2cov(x_i^A,\delta_i)+\sigma_{\delta}^2}\Big] =\beta_1 \cdot\Big[\frac{\sigma_x^2+\sigma_{x\delta}}{\sigma_x^2+2\sigma_{x\delta}+\sigma_{\delta}^2}\Big]$$
as desired.
Hence, the result $E(\hat \beta_1)=\beta_1 \cdot\Big[\frac{\sigma_x^2+\sigma_{x\delta}}{\sigma_x^2+2\sigma_{x\delta}+\sigma_{\delta}^2}\Big]$ is well-established.
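This attenuation formula can be checked numerically. The following is a small simulation sketch (Python, with made-up parameter values; not part of the original answer) in which $\delta_i = c\,x_i^A + e_i$, so that $\sigma_{x\delta}=c\,\sigma_x^2 \ne 0$:

```python
import random

random.seed(2)
n = 50000
beta0, beta1 = 1.0, 2.0
sigma_x, c, sigma_e = 1.0, 0.3, 0.4   # delta_i = c * x_i^A + e_i, so sigma_xd = c * sigma_x^2

x_a = [random.gauss(0.0, sigma_x) for _ in range(n)]         # true predictor x^A
delta = [c * xa + random.gauss(0.0, sigma_e) for xa in x_a]  # error correlated with x^A
x_o = [xa + d for xa, d in zip(x_a, delta)]                  # observed predictor x^O
y_o = [beta0 + beta1 * xa + random.gauss(0.0, 0.2) for xa in x_a]

def ols_slope(x, y):
    mx, my = sum(x) / len(x), sum(y) / len(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / sum((a - mx) ** 2 for a in x)

s_x2 = sigma_x ** 2
s_xd = c * s_x2                       # sigma_{x delta}
s_d2 = c ** 2 * s_x2 + sigma_e ** 2   # sigma_delta^2
predicted = beta1 * (s_x2 + s_xd) / (s_x2 + 2 * s_xd + s_d2)  # the formula above
observed = ols_slope(x_o, y_o)        # simulated value of E(beta_1-hat)
```

With a large $n$ the simulated slope should match the predicted attenuated value closely.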
31,689 | Systematic/measurement error on a linear regression | I have a similar problem - posted here - and still no definitive answer. What I did for the moment was simply gather a set of very similar Xs and check whether there is a big variation in Y within those lines. Another kind of approach could be a simulation: you use a single X from your dataset, but replicate the lines following the predictor's systematic error (something like rnorm(...,0,0.3)). The confidence interval for the slope may then be comparable to the span of the systematic error.
31,690 | Systematic/measurement error on a linear regression | I would recommend a parametric bootstrap on the data. That means generating new datasets that are similar to the real dataset, but are different to the extent implied by your uncertainty in each observation.
Here's some R code for that. Notice I'm using vector inputs to rnorm, as is normal in the R language. Also I'm assuming that what you are calling $\Delta$ are standard errors.
B <- 1000                    # number of parametric-bootstrap replicates
r <- numeric(B)
for (b in 1:B) {
  x_PB <- rnorm(length(x), mean = x, sd = x_se)  # perturb each x by its standard error
  y_PB <- rnorm(length(y), mean = y, sd = y_se)
  r[b] <- cor(x_PB, y_PB)
}
Then look at the distribution of the values in r.
31,691 | Books - Deep learning and recurrent neural networks | There's a work-in-progress book on Deep Learning by Ian Goodfellow, Yoshua Bengio and Aaron Courville. It's not finished yet, but you can view the draft online; it has a chapter on recurrent networks.
31,692 | Books - Deep learning and recurrent neural networks | There are more resources available. I have listed a few below (no affiliation with any of them).
For a collection of information on Recurrent Neural Networks look here.
For a collection of information on deep learning look here
Check the deep learning section of the H2O website.
You can also look at the Journal of Machine Learning Research for relevant articles, or look at arXiv. Even SSRN has some articles about this.
31,693 | Books - Deep learning and recurrent neural networks | I don't think there's any book better than http://neuralnetworksanddeeplearning.com/
I think the book is the best because:
It's easy to read, with not much mathematics
There's GitHub source code accompanying the book
The book talks about advanced concepts such as dropout and vanishing gradients
31,694 | Unbiased estimators of skewness and kurtosis | See pp. 8-9 of http://modelingwithdata.org/pdfs/moments.pdf. Also look at http://www.amstat.org/publications/jse/v19n2/doane.pdf for some useful perspectives to get your thinking in the right frame of mind.
Note that what you are probably calling the unbiased standard deviation is actually a biased estimator of the standard deviation (see: Why is sample standard deviation a biased estimator of $\sigma$?), although before taking the square root it is an unbiased estimator of the variance.
A nonlinear function of an unbiased estimator is not necessarily going to be unbiased ("almost surely" it won't be). The direction of the bias can be determined by Jensen's Inequality (https://en.wikipedia.org/wiki/Jensen%27s_inequality) if the function is convex or concave.
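A quick simulation sketch of that last point (pure Python; the setup is my own, not from the original answer): the sample variance with the $n-1$ divisor is unbiased, but its square root underestimates $\sigma$ on average, exactly as Jensen's inequality predicts for the concave square-root function:

```python
import random
import statistics

random.seed(3)
sigma, n, reps = 1.0, 5, 20000   # true sd, small sample size, many replications

var_hats, sd_hats = [], []
for _ in range(reps):
    sample = [random.gauss(0.0, sigma) for _ in range(n)]
    v = statistics.variance(sample)   # n-1 divisor: unbiased for sigma^2
    var_hats.append(v)
    sd_hats.append(v ** 0.5)          # square root of the unbiased variance

mean_var = sum(var_hats) / reps   # close to sigma^2 = 1
mean_sd = sum(sd_hats) / reps     # noticeably below sigma = 1 (downward bias)
```

For normal samples of size 5 the average sample standard deviation comes out around $0.94\sigma$, matching the known $c_4$ correction factor.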
31,695 | Confidence Versus Prediction Intervals using Quantile Regression / Quantile Loss Function | Definitely a prediction interval, see for example here.
Quantile regression for the $5^\textrm{th}$ and $95^\textrm{th}$ quantiles attempts to find bounds $y_0({\bf x})$ and $y_1({\bf x})$ on the response variable $y$, given predictor variables ${\bf x}$, such that
$$
\mathbb{P}\left(Y\le y_0({\bf X})\right)=0.05 \\
\mathbb{P}\left(Y\le y_1({\bf X})\right)=0.95
$$
so
$$
\mathbb{P}\left(\,y_0({\bf X})\le Y\le y_1({\bf X})\,\right)\ =\ 0.90
$$
which is by definition a $90\%$ prediction interval.
A $90\%$ prediction interval should contain (as-yet-unseen) new data $90\%$ of the time. In contrast, a $90\%$ confidence interval for some parameter (e.g. the mean) should contain the true mean unless we were unlucky to the tune of 1-in-10 in the data used to construct the interval.
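To connect this with the quantile-loss machinery (a minimal illustrative sketch, not part of the original answer): the quantile (pinball) loss at level $\tau$, minimized over a constant prediction, is minimized at the empirical $\tau$-quantile, which is why fitting with this loss at $\tau=0.05$ and $\tau=0.95$ yields the bounds of a $90\%$ prediction interval:

```python
def pinball(tau, ys, pred):
    # average quantile (pinball) loss of a constant prediction `pred`
    return sum(tau * (yi - pred) if yi >= pred else (tau - 1) * (yi - pred)
               for yi in ys) / len(ys)

ys = list(range(1, 101))  # toy data: 1, 2, ..., 100

# brute-force the constant prediction minimizing the loss at each level
best_95 = min(ys, key=lambda c: pinball(0.95, ys, c))
best_05 = min(ys, key=lambda c: pinball(0.05, ys, c))
# best_05 and best_95 land at the empirical 5th and 95th quantiles,
# and about 90% of the data falls between them
```

A quantile regression model does the same thing, except that the minimizing "constant" is allowed to depend on ${\bf x}$.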
31,696 | Confidence Versus Prediction Intervals using Quantile Regression / Quantile Loss Function | After posting this I realized it is most accurately called a confidence interval, regardless of the terminology used by scikit-learn.
http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3885826/
"However, it must be kept in mind that the resulting confidence intervals are a model approximation rather than true statistics".
31,697 | Very different results of principal component analysis in SPSS and Stata after rotation | You are correct. Stata is weird about this. Stata gives different results from SAS, R and SPSS, and it is difficult (in my opinion) to understand why without delving quite deep into the world of factor analysis and PCA.
Here's how you know that something weird is happening. The sum of the squared loadings for a component is equal to the eigenvalue for that component.
Pre- and post-rotation, the individual eigenvalues change, but their total does not. Add up the sums of the squared loadings from your output (this is why I asked you to remove the blanks in my comment). With Stata's default, the squared loadings for each component will sum to 1.00 (within rounding error). With SPSS (and R, and SAS, and every other factor analysis program I've looked at) they will sum to the eigenvalue for that component. In SPSS the total of the sums of squared loadings is equal to the sum of the eigenvalues (i.e. 3.8723 + 1.40682), both pre- and post-rotation.
In Stata, the sum of the squared loadings for each factor is equal to 1.00, and so Stata has rescaled the loadings.
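The two normalizations are easy to see on a toy example (my own sketch, not Stata or SPSS output): for a $2\times 2$ correlation matrix the eigendecomposition is available in closed form; the eigenvector columns have unit sums of squares, while the loadings (eigenvectors scaled by the square roots of the eigenvalues) have sums of squares equal to the eigenvalues:

```python
import math

r = 0.6  # made-up correlation between two standardized variables
# eigendecomposition of [[1, r], [r, 1]] in closed form
eigvals = [1 + r, 1 - r]
root2 = math.sqrt(2)
eigvecs = [[1 / root2, 1 / root2],
           [1 / root2, -1 / root2]]   # each row is one eigenvector

# Stata-style normalization: each eigenvector has unit sum of squares
unit_norms = [sum(v * v for v in vec) for vec in eigvecs]

# SPSS/R-style loadings: eigenvector entries scaled by sqrt(eigenvalue);
# the sum of squared loadings per component equals that component's eigenvalue
loadings = [[v * math.sqrt(lam) for v in vec] for lam, vec in zip(eigvals, eigvecs)]
ssl = [sum(l * l for l in row) for row in loadings]  # [1 + r, 1 - r]
```

Summing `ssl` recovers the total variance (here 2), whereas the Stata-style columns always sum to 1 regardless of how much variance the component explains.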
The only mention of this (that I have found) in the Stata documentation is in the estat loadings section of the help, where it says:
cnorm(unit | eigen | inveigen), an option used with estat loadings,
selects the normalization of the eigenvectors, the columns of the
principal-component loading matrix. The following normalizations are
available
However, this appears to apply only to the unrotated component matrix, not to the rotated component matrix. I can't get the unnormalized rotated matrix after PCA.
The people at Stata seem to know what they are doing, and usually have a good reason for doing things the way that they do. This one is beyond me though.
(For future reference, it would have made my life easier if you'd used a dataset that I could access, and if you'd included all output, without blanks).
Edit: My usual go-to site for information about how to get the same results for different programs is the UCLA IDRE. They don't cover PCA in Stata: http://www.ats.ucla.edu/stat/AnnotatedOutput/ I have to wonder if that's because they couldn't get the same result. :)
31,698 | Very different results of principal component analysis in SPSS and Stata after rotation | The differences between the Stata PCA methods and the conventional methods used in R or SPSS are:
1. Scaling eigenvectors/components
Stata rotates the eigenvectors themselves, whereas R or SPSS PCA-rotation methods normally rotate after scaling the eigenvectors by the square roots of the eigenvalues, producing the component loadings more typical in factor analysis.
2. Convergence stopping criteria (tolerance)
The other difference is that in Stata the rotation convergence stopping criterion (tolerance) is 1e-6 for varimax rotation (according to Stata's rotatemat function manual), while in R the default is 1e-5, and in SPSS the default is 1e-14, if I am not mistaken. After addressing these differences, I was able to get identical loadings and an identical rotation matrix in R.
The R and SPSS varimax rotation functions perform Kaiser normalization by default.
However, varimax rotations on eigenvectors directly seem to be unconventional. See the discussion titled Is PCA followed by a rotation (such as varimax) still PCA?.
Addendum
Each PC is orthogonal to the others (uncorrelated). However, once the PCs are rotated, we cannot have both orthogonal loadings and uncorrelated components. This is what has happened in the Stata output. To obtain uncorrelated components a special scaling is needed, such as dividing the components by the square root of the corresponding eigenvalue (eigenvalues are the variances of the corresponding PCs), which is the default scaling method in R and SPSS.
References:
Jolliffe, Ian T. Principal Component Analysis. Springer, 2002. Chapter 11, pages 269-274. (Thanks amoeba for recommending)
https://www.personality-project.org/r/html/principal.html.
https://www.stata.com/manuals13/mvrotatemat.pdf
1. Scaling eigenvectors/components
Stata rotates eigenvectors. Whereas, R or SPSS PCA-rotation methods | Very different results of principal component analysis in SPSS and Stata after rotation
The differences between the Stata PCA methods and the conventional methods used in R or SPSS are:
1. Scaling eigenvectors/components
Stata rotates eigenvectors. Whereas, R or SPSS PCA-rotation methods normally rotates after scaling eigenvectors by the sqrt of the eigenvalues to produce the component loadings more typical in factor analysis.
2. Convergence stopping criteria (tolerance)
The other difference is that Stata, rotations convergence stopping criteria (tolerance) is 1e-6 for varimax rotation (according to Stat's rotatemat function manual), while in R the default is 1e-5, and in SPSS the default is 1e-14, if I am not mistaken. After addressing these differences, I was able to get identical loadings and an identical rotation matrix in R.
R and SPSS varimax rotation function by default perform Kaiser normalization.
However, varimax rotations on eigenvectors directly seem to be unconventional. See the discussion titled Is PCA followed by a rotation (such as varimax) still PCA?.
Addendum
Each PC is orthogonal (uncorrelated). However, rotating PCs has neither orthogonal loadings nor uncorrelated components once the rotation is done. This is what has happened in the Stata codes. To obtain uncorrelated components a special scaling is needed, such as dividing the components by the square root of the corresponding eigenvalue (eigenvalues are the variances of the corresponding PCs), which is the default scaling method in R and SPSS.
References:
Jolliffe, Ian T. Principal Component Analysis. Springer, 2002. Chapter 11, pages 269-274. (Thanks amoeba for recommending)
https://www.personality-project.org/r/html/principal.html.
https://www.stata.com/manuals13/mvrotatemat.pdf | Very different results of principal component analysis in SPSS and Stata after rotation
The differences between the Stata PCA methods and the conventional methods used in R or SPSS are:
1. Scaling eigenvectors/components
Stata rotates eigenvectors. Whereas, R or SPSS PCA-rotation methods |
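The scaling points of the preceding answer — loadings as eigenvectors scaled by the square roots of the eigenvalues, and uncorrelated unit-variance components obtained by dividing raw component scores by those same square roots — can be illustrated numerically. A minimal NumPy sketch, assuming PCA on a correlation matrix; the random data and all variable names are illustrative, not from the original post:

```python
import numpy as np

# Illustrative data; PCA on the correlation matrix
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 5)) @ rng.standard_normal((5, 5))
Xs = (X - X.mean(axis=0)) / X.std(axis=0)     # standardized variables
R = np.corrcoef(Xs, rowvar=False)

evals, evecs = np.linalg.eigh(R)
evals, evecs = evals[::-1], evecs[:, ::-1]    # sort eigenvalues descending

# Loadings = eigenvectors scaled by sqrt(eigenvalues): what R/SPSS rotate
loadings = evecs * np.sqrt(evals)

# Raw component scores have variance equal to the eigenvalues;
# dividing by sqrt(eigenvalue) yields uncorrelated, unit-variance components
scores = Xs @ evecs
std_scores = scores / np.sqrt(evals)
```

With all components retained, the row sums of squared loadings (communalities) equal 1, matching the "explained plus unique variance sums to 1" check done per item in the MatMate script below.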
31,699 | Very different results of principal component analysis in SPSS and Stata after rotation | Remark: this is more a comment than an answer.
Here is a script which tries to reproduce how the Stata solution is calculated.
Indeed, it appears that Stata — unlike SPSS — applies the transformation to the eigenvectors (more specifically, to their Kaiser-normalized rows, over the first 3 eigenvectors) rather than to the PCA components, as SPSS does.
Here is my calculation using my matrix-calculator software MatMate:
;****** MatMate Version 0.1410 Beta *****************************
// first part of posted data (via clipboard) Eigenvectors (your first protocol)
// of Stata- computations
clp = csvdatei("clip")
eig_lad = clp[*,1..3] // the first 3 columns: these are obviously eigenvector-values
| 0,2700 0,3901 -0,1477 |
| 0,3298 0,2303 -0,4027 |
| -0,3046 0,3149 0,1773 |
| 0,3489 0,1910 0,0700 |
| 0,3342 0,2067 0,2720 |
| -0,2001 0,4561 -0,1587 |
| 0,3057 0,3128 0,1531 |
| -0,3611 0,2180 0,2913 |
| 0,2352 -0,2211 0,3662 |
| -0,1556 0,3894 0,4578 |
| 0,3239 0,0525 0,0754 |
| 0,2091 -0,2445 0,4720 |
uniq = clp[*,4] // "not-explained" = unique (unexplained by 3 eigenvectors) variances
// (= squared values, not loadings)
| 0,4779 |
| 0,3129 |
| 0,4642 |
| 0,4715 |
| 0,4202 |
| 0,5227 |
| 0,4728 |
| 0,3280 |
| 0,5588 |
| 0,4457 |
| 0,5832 |
| 0,4839 |
// second part of posted data (via clipboard) : first three eigenvalues
pca_ssl = csvdatei("clip") // "ssl" means "sum of squares of loadings"
| 3,8723 1,4068 1,1791 |
// check whether the "unique" variance is really the non-explained
// variance by the first 3 eigenvectors/PCA-components:
chk = sumzl( eig_lad ^# 2 *# pca_ssl ) + uniq
// the squared pca-loadings (eigenvectors^2 scaled by eigenvalues)
// plus the unique variance should sum up to 1-variance for each row (=item)
| 1,0000 |
| 0,9999 |
| 1,0000 |
| 1,0000 |
| 1,0000 |
| 1,0001 |
| 1,0000 |
| 0,9998 |
| 0,9999 |
| 0,9999 |
| 1,0000 |
| 1,0000 |
// get the rotation matrix that brings the "Kaiser"-normalized loadings to varimax
// note that SPSS computes this based on the PCA loadings, not on eigenvector values
t = gettrans(normzl(eig_lad ),"varimax") // "normzl(<loadings>)"
// provides Kaiser-normalization per row
// (of course here in Stata based on eigenvectors)
// rotation-/transformation-matrix "t" from pca to varimax coordinates
| 0,7942 -0,5573 0,2421 |
| 0,5724 0,5523 -0,6062 |
| 0,2041 0,6200 0,7576 |
vmx_lad = eig_lad * t // this computes the Stata - varimax-coordinates
| 0,4580 -0,0065 -0,1926 |
| 0,4012 -0,2970 -0,2735 |
| -0,0305 0,4428 -0,1625 |
| 0,3898 -0,0107 0,1051 |
| 0,3884 0,1409 0,2402 |
| 0,1409 0,2473 -0,4385 |
| 0,4351 0,1348 0,0853 |
| -0,1363 0,4913 -0,0528 |
| 0,0370 0,0087 0,4867 |
| 0,1310 0,6026 0,0716 |
| 0,2815 -0,0738 0,1693 |
| 0,0018 0,0790 0,5657 |
vmx_ssl = sqsumsp((pca_ssl *# sqrt(pca_ssl#)) *t)
// sums of squares of the varimax-rotated princ. components (not eigenvectors!)
| 2,9522 2,0849 1,4207 |
// =============== The SPSS-solution =========================
eig_lad = clp[*,1..3] // the first 3 columns: these are obviously eigenvector-values
spss_pca_lad = eig_lad *# sqrt(pca_ssl#) // compute pca-loadings from eigenvectors
spss_t = gettrans(normzl(spss_pca_lad ),"varimax") // "normzl(<pca loadings>)"
spss_vmx_lad = spss_pca_lad * spss_t // this computes the SPSS - varimax-coordinates
| 0,7047 -0,0983 -0,1260 |
| 0,6731 -0,4479 -0,1827 |
| -0,2184 0,6266 -0,3091 |
| 0,6710 -0,1473 0,2377 |
| 0,6606 0,0244 0,3780 |
| 0,0474 0,3785 -0,5761 |
| 0,6989 0,0358 0,1935 |
| -0,3776 0,6975 -0,2066 |
| 0,1846 -0,1019 0,6298 |
| 0,0624 0,7418 -0,0018 |
| 0,5276 -0,2131 0,3050 |
| 0,1273 -0,0160 0,7068 |
spss_vmx_ssl = sqsumsp(spss_vmx_lad) // the SPSS-"variances" of the vmx-factors
| 2,8498 1,8626 1,7455 |
Since all coordinates and also the transformation/rotation matrix are reproduced correctly, this seems to be indeed the internal computation of both Stata and SPSS.
The difference, reduced to its essential syntax/concept, would be:
// Stata
eig_lad = clp[*,1..3] // the first 3 columns: these are obviously eigenvector-values
t = gettrans(normzl(eig_lad ),"varimax") // "normzl(<on eigenvectors>)"
vmx_lad = eig_lad * t // this computes the Stata - varimax-coordinates
// SPSS
eig_lad = clp[*,1..3] // the first 3 columns: these are obviously eigenvector-values
spss_pca_lad = eig_lad *# sqrt(pca_ssl#) // compute pca-loadings from eigenvectors
spss_t = gettrans(normzl(spss_pca_lad ),"varimax") // "normzl(<on pca loadings>)"
spss_vmx_lad = spss_pca_lad * spss_t // this computes the SPSS - varimax-coordinates
The rotation criterion for the varimax concept thus appears to differ between the two software packages. While Stata computes the rotation angles based on the unit-variance-normalized ("Kaiser-normalized") rows of the eigenvectors, SPSS computes them based on the unit-variance-normalized ("Kaiser-normalized") rows of the PCA components, which are scalings of the eigenvectors by the square roots of the associated eigenvalues. This should, in most cases, result in different solutions.
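The Stata-versus-SPSS difference described above can also be reproduced outside MatMate. Below is a NumPy sketch, assuming the standard SVD-based varimax algorithm (without the additional Kaiser row normalization that both packages apply); the random data and all variable names are illustrative:

```python
import numpy as np

def varimax(A, tol=1e-6, max_iter=100):
    """Standard varimax rotation; returns the rotated matrix and the rotation T."""
    p, k = A.shape
    T = np.eye(k)
    crit = 0.0
    for _ in range(max_iter):
        Lam = A @ T
        # SVD-based update of the rotation (gamma = 1, i.e. varimax)
        u, s, vt = np.linalg.svd(
            A.T @ (Lam**3 - Lam @ np.diag((Lam**2).sum(axis=0)) / p))
        T = u @ vt
        crit_old, crit = crit, s.sum()
        if crit_old != 0 and crit / crit_old < 1 + tol:
            break
    return A @ T, T

# PCA on the correlation matrix of illustrative random data
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 6)) @ rng.standard_normal((6, 6))
R = np.corrcoef(X, rowvar=False)
evals, evecs = np.linalg.eigh(R)
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

k = 3
V = evecs[:, :k]                 # raw eigenvectors -> what Stata rotates
L = V * np.sqrt(evals[:k])       # loadings         -> what SPSS/R rotate

stata_like, T_stata = varimax(V)
spss_like, T_spss = varimax(L)   # generally a different rotation matrix
```

Because the varimax criterion is not invariant to rescaling the columns before rotation, `T_stata` and `T_spss` generally differ, which is exactly the discrepancy analyzed in the MatMate script.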
31,700 | Very different results of principal component analysis in SPSS and Stata after rotation | Here is a section from my notes that might help. All normalisations of the eigenvectors (or loadings) are correct!