40,001
meaning of 'Monte Carlo' in this sentence
What Ng and Russell seem to be saying is that for each policy $\pi$ they simulate $m$ "possible" outcomes for processes starting at point $s_0$. By "trajectories" they seem to mean the possible developments in time of the simulated processes, i.e. different possible scenarios created by simulation. So you were correct: "Monte Carlo" here stands for "simulation" (see also this thread).
40,002
meaning of 'Monte Carlo' in this sentence
Monte Carlo here simply means using sampling to estimate the values. Practically, this means collecting a sequence of (state, action) pairs, i.e. a trajectory, under some arbitrary policy; from this you can compute the relevant quantities such as $V$.
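As a minimal sketch of the idea (using a toy deterministic chain MDP invented here, not the environment from the paper), a Monte Carlo estimate of $V(s_0)$ just averages the discounted returns over $m$ sampled trajectories:

```python
import random

random.seed(0)
GAMMA = 0.9
CHAIN_LENGTH = 3   # s0 -> s1 -> s2 -> terminal, deterministic transitions
REWARD_PROB = 0.8  # each step pays reward 1 with this probability

def sample_trajectory():
    """Roll out one trajectory from s0 and return its discounted return."""
    g, discount = 0.0, 1.0
    for _ in range(CHAIN_LENGTH):
        reward = 1.0 if random.random() < REWARD_PROB else 0.0
        g += discount * reward
        discount *= GAMMA
    return g

# Monte Carlo estimate of V(s0): average return over m sampled trajectories
m = 5000
v_hat = sum(sample_trajectory() for _ in range(m)) / m

# For this toy chain the analytic value is E[G] = p * (1 + gamma + gamma^2),
# so we can check that the sample average converges to it
v_true = REWARD_PROB * (1 + GAMMA + GAMMA**2)
```

With more trajectories the estimate concentrates around the true expectation, which is all "Monte Carlo" means in this context.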
40,003
How can I assess performance of a semi-supervised learning method?
We have addressed this problem in Assessing binary classifiers using only positive and unlabeled data. Specifically, we show how to compute strict bounds on any metric based on contingency tables (accuracy, precision, ROC/PR curves, ...). Our work was accepted by all reviewers at this year's NIPS conference, but then rejected by the editor for lack of significance (go figure). We will submit it to the upcoming KDD.
Our approach is based on the reasonable assumption that known positives are sampled completely at random from all positives. If you can't rely on this assumption, any form of performance evaluation is infeasible. Additionally, we require an estimate of the fraction of positives in the unlabeled set, which you can often acquire via domain knowledge or by explicitly obtaining labels for a small, random subset of the unlabeled set.
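A rough sketch of the core trick (not the paper's exact bounds; the classifier, its hit rates, and all numbers below are invented for illustration): under the selected-completely-at-random assumption, recall is estimable from the labeled positives alone, and combined with an estimate of the positive fraction it yields precision:

```python
import random

random.seed(1)
N = 20000
PI = 0.3  # assumed fraction of positives in the unlabeled set (domain knowledge)
C = 0.2   # probability that a true positive got labeled (SCAR assumption)

truth = [1 if random.random() < PI else 0 for _ in range(N)]
labeled = [t == 1 and random.random() < C for t in truth]

# Hypothetical classifier: detects positives w.p. 0.8, false alarms w.p. 0.1
pred = [1 if random.random() < (0.8 if t else 0.1) else 0 for t in truth]

# Recall is estimable from the labeled positives alone, because under SCAR
# they are a random sample of all positives
lab_idx = [i for i in range(N) if labeled[i]]
recall_hat = sum(pred[i] for i in lab_idx) / len(lab_idx)

# Precision then follows from recall and the class prior:
# precision = TP / (# predicted positive) ~= recall * pi * N / (# predicted positive)
n_pred_pos = sum(pred)
precision_hat = recall_hat * PI * N / n_pred_pos

# Ground truth (unavailable in practice, computed here only to check the estimate)
tp = sum(1 for i in range(N) if truth[i] and pred[i])
precision_true = tp / n_pred_pos
```

From estimated recall and precision one can fill in the whole contingency table, which is why metrics built on it become computable in this setting.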
40,004
How can I assess performance of a semi-supervised learning method?
Here's a sideways-thinking idea: you have some positive labels, and you can estimate the natural grouping of the data using unsupervised learning. Try to measure the overlap between the known information and the way the data groups together, and use that overlap as a ground-truth measure.
So, perform unsupervised learning and see how the labeled data corresponds to the clusters. If you're in luck, the labels will correlate with only one of the clusters or with outliers (which might turn out to be clusters given more data).
Outcome A - disjoint groups of data
Let's say that you have 10 labels from 100 unlabeled examples and after clustering it turns out that the 10 labels belong to a cluster with 20 data points. This is the happy case and you can now label all 20 with 1 and everything else as 0. Problem solved, just use AUC.
Outcome B - more than 2 groups, fuzzy clusters
What if this is not the case? What about the other groups?
Let's say you have 9 labels in the cluster with 20 points and 1 in one of the other clusters (hopefully the only other one). Repeat the clustering multiple times and count how many times each label 'lands' in a given group. Then compute the mutual information between the labeled data (positive examples) $X$ and the groups $Y$ over the multiple clusterings:
$$
I(X;Y) = \sum_{y \in Y} \sum_{x \in X} p(x,y) \log\left(\frac{p(x,y)}{p(x)\,p(y)}\right)
$$
So, with $K=3$ clusters you will finally have $I_k(X;Y)$ for each group.
Assume that these values are the ground truth (target values) when you evaluate your final model.
This is based on the assumption that your prediction will also have the positive labels (now, more of them) distributed in a certain way in the unsupervised grouping of data.
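A small sketch of the mutual information computation from a label-by-cluster contingency table (the counts below are hypothetical, roughly matching the 9-vs-1 example above):

```python
from math import log

# Hypothetical contingency table over one clustering of 100 points:
# rows = labeled-positive vs unlabeled, columns = clusters
counts = {("pos", "c0"): 9, ("pos", "c1"): 1,
          ("unl", "c0"): 11, ("unl", "c1"): 79}
n = sum(counts.values())

# Marginal probabilities p(x) and p(y)
px, py = {}, {}
for (x, y), c in counts.items():
    px[x] = px.get(x, 0) + c / n
    py[y] = py.get(y, 0) + c / n

# I(X;Y) = sum_{x,y} p(x,y) * log( p(x,y) / (p(x) p(y)) ), in nats
mi = sum((c / n) * log((c / n) / (px[x] * py[y]))
         for (x, y), c in counts.items() if c > 0)
```

A larger value means the labels concentrate in particular clusters rather than spreading uniformly, which is exactly the overlap the answer proposes to use as a proxy ground truth.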
40,005
Difference in difference with interaction
A triple difference-in-differences is the correct specification for this problem. I'll present a conceptual explanation and then a mathematical one.
Conceptually, the standard (double) difference-in-differences can also be thought of as estimating a heterogeneous treatment effect. In this perspective, time is the "treatment", and we want to estimate how time affects the outcome differentially across the two groups. (Of course, time itself doesn't cause anything; it's just a stand-in for the real treatment that happens between the two time periods.)
Thus, we can extend the standard D-in-D into a triple D-in-D if we want to add another layer of heterogeneous treatment effect (i.e. the heterogeneity across big vs. small firms in your case).
Mathematically, the specification would be as follows:
\begin{equation}
Y = \alpha + \beta_1 T + \beta_2 G + \beta_3 B + \gamma_1 TG + \gamma_2 GB + \gamma_3 TB + \delta_1 TGB + \epsilon
\end{equation}
with
\begin{align}
T &= \text{treatment time} \\
G &= \text{treatment group} \\
B &= \text{big firms}
\end{align}
The DD estimate of the treatment effect for small firms is $\gamma_1$ (exactly the same as the standard DD).
The DD estimate of the treatment effect for big firms is $\gamma_1 + \delta_1$.
Thus the treatment effects for big and small firms differ by $(\gamma_1 + \delta_1) - \gamma_1 = \delta_1$, which is the coefficient on the triple interaction term, i.e. the DDD estimate.
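A simulation sketch of this specification (all coefficient values and sample sizes are made up): generate data from the saturated model above and verify that OLS recovers $\delta_1$ as the coefficient on the triple interaction.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000  # observations per cell

# True coefficients, invented for the simulation
alpha, b1, b2, b3 = 1.0, 0.5, 0.3, 0.2
g1, g2, g3, d1 = 0.8, 0.1, 0.4, 1.5

blocks = []
for T in (0, 1):
    for G in (0, 1):
        for B in (0, 1):
            mu = (alpha + b1*T + b2*G + b3*B
                  + g1*T*G + g2*G*B + g3*T*B + d1*T*G*B)
            y = mu + rng.normal(0, 1, n)  # add idiosyncratic noise
            X = np.column_stack([np.ones(n),
                                 np.full(n, T), np.full(n, G), np.full(n, B),
                                 np.full(n, T*G), np.full(n, G*B),
                                 np.full(n, T*B), np.full(n, T*G*B)])
            blocks.append((X, y))

X = np.vstack([b[0] for b in blocks])
y = np.concatenate([b[1] for b in blocks])

# OLS via least squares; column 7 is the TGB triple interaction
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
delta1_hat = coef[7]  # the DDD estimate
```

Because the design is saturated, `delta1_hat` equals the triple difference of the eight cell means, matching the conceptual derivation above.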
|
40,006
Difference in difference with interaction
I think this (exploring the heterogeneous treatment effects of DD across different groups) could easily be confused with the DDD method.
However, they share the same specification; I'd just run the following:
$y_{it} = \alpha + \beta_1 d_t + \beta_2 Treat_i + \beta_3 d_t \times Treat_i + \\ \beta_4 big_i + \beta_5 big_i \times Treat_i + \beta_6 big_i \times d_t + \\ \delta_0 d_t \times Treat_i \times big_i + \epsilon_{it}$
where $\delta_0$ is what you want.
$\delta_0 = [(\bar{y}_{Treat,2}-\bar{y}_{Treat,1})-(\bar{y}_{Control,2}-\bar{y}_{Control,1})]_{big}-[(\bar{y}_{Treat,2}-\bar{y}_{Treat,1})-(\bar{y}_{Control,2}-\bar{y}_{Control,1})]_{small}$
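The closed-form expression for $\delta_0$ can be checked directly on cell means (the eight mean values below are hypothetical):

```python
# Hypothetical cell means ybar[(group, firm size, time)], numbers invented
ybar = {
    ("Treat",   "big",   1): 10.0, ("Treat",   "big",   2): 15.0,
    ("Control", "big",   1):  8.0, ("Control", "big",   2):  9.0,
    ("Treat",   "small", 1):  5.0, ("Treat",   "small", 2):  7.0,
    ("Control", "small", 1):  4.0, ("Control", "small", 2):  6.0,
}

def dd(size):
    """Standard difference-in-differences within one firm-size stratum."""
    return ((ybar[("Treat", size, 2)] - ybar[("Treat", size, 1)])
            - (ybar[("Control", size, 2)] - ybar[("Control", size, 1)]))

# The DDD estimate: DD among big firms minus DD among small firms
delta0 = dd("big") - dd("small")
```

With these numbers the big-firm DD is 4 and the small-firm DD is 0, so $\delta_0 = 4$: the treatment helped big firms by 4 units more than small firms.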
40,007
Analyzing repeated measures experiment with multiple treatment groups and multiple measures
I think one could write a whole book dealing exclusively with your question (and I am definitely not qualified to write it). So without any attempt at providing a comprehensive answer, here are some points that can hopefully be helpful.
Confirmatory vs. exploratory approach to analysis
As you note yourself, you have a very rich dataset and you can test a lot of things. We can quickly compute the number of meaningful tests: you have $12$ measures, each measured $3$ times in $3$ groups. So if we count all pairwise tests, that is $3$ tests per group and $3$ tests per measurement time, i.e. $18$ tests per measure, i.e. $216$ tests in total. You are obviously aware of the lurking multiple comparisons problem (remember the green jelly beans comic?), but if you are normally happy to use $\alpha=0.05$ and were to use e.g. a Bonferroni adjustment, then you would have to use $\alpha = 0.05/216\approx 0.0002$ and would risk not finding any significant effects because you do not have enough power.
This is of course not a unique situation; in fact, it is a very common one.
Broadly speaking, you can adopt one of the two approaches.
Confirmatory approach insists on strict adherence to the rules of significance testing. You should formulate your one or several (but as few as possible) research hypotheses in advance and carefully plan which statistical tests you are going to carry out. To mitigate the multiple comparisons / low power problem, you should try to design your tests such that you use as few tests as possible while having maximal power to detect what you really want to detect. For example, you might want to combine your measures into some composite or pooled measures that are likely to be most affected by Treatment 1 or 2. Or you can pool over measurement times. Etc. In any case, you try to boil down all your data to a couple of crucial comparisons, and then you do only those, applying Bonferroni (or similar) adjustment. It's important that all of that is planned before you have ever looked at the data (because after looking at the data you will be tempted to change your tests).
Alas, in practice, this is often hardly possible.
Exploratory approach, in contrast, is like biting the bullet: you have a lot of rich data, so why not explore all sorts of relationships that are present in there. You will do lots of comparisons and lots of tests, and you will adjust your analysis strategy depending on what you see in the data, but whatever -- this is all exploratory. You cannot do that if you are running a clinical trial, but in more basic research this is often the only way to go. All $p$-values that you get out of this approach should be taken with a (big) grain of salt, though. In fact, some would say that you should not run or report any significance tests at all, but usually tests are still done. There is a good argument not to use multiple comparisons adjustments (such as Bonferroni) at all, and rather to treat all the $p$-values as indicating strength of evidence in the Fisherian sense (as opposed to leading to a yes/no decision in the Neyman-Pearson sense).
Statistical tests if you are willing to assume normality
Let's for the moment ignore the issue of normality (see below) and assume that everything is normal. You have the following battery of tests:
1. For each measure, a within-group pairwise comparison between two measurement times is a paired t-test. It tests whether the measurements differ between these two times.
2. For each measure, a between-group pairwise comparison at one measurement time is an unpaired t-test. It tests whether these two groups differ on this specific measurement.
3. For each measure, a within-group comparison across all three measurement times is a repeated measures ANOVA. It tests whether measurement time has any effect at all.
4. For each measure, a between-group comparison at one fixed measurement time is a one-way ANOVA. It tests whether the groups differ in any way from each other.
5. For each measure, a comparison across all groups and all times is a two-way repeated measures ANOVA. It tests whether there is a significant effect of group, a significant effect of time, and a significant interaction between them.
6. For all measures together, a comparison across all groups and all times is a two-way repeated measures MANOVA. It tests whether there is a significant effect of group, of time, or of their interaction on all measures taken together.
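The simpler tests in this battery can be sketched with `scipy.stats` on made-up data for a single measure (the group means, sample size, and effect sizes below are invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 30  # subjects per group

# Fake data for one measure: three groups with a built-in group effect
g1 = rng.normal(0.0, 1.0, n)
g2 = rng.normal(0.5, 1.0, n)
g3 = rng.normal(2.0, 1.0, n)

# #1: within-group comparison of two measurement times -> paired t-test
time1 = g1
time2 = g1 + rng.normal(1.0, 0.5, n)  # same subjects at a later time
t_paired, p_paired = stats.ttest_rel(time1, time2)

# #2: between-group comparison at one time -> unpaired t-test
t_unpaired, p_unpaired = stats.ttest_ind(g1, g3)

# #4: all three groups at one time -> one-way ANOVA
f_stat, p_anova = stats.f_oneway(g1, g2, g3)

# #3, #5, #6 (repeated measures ANOVA / MANOVA) are not in scipy.stats;
# packages such as statsmodels or pingouin cover those designs.
```

In the real analysis each of these would be run per measure, which is exactly where the multiplicity counting from the beginning of this answer comes from.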
Note that #1 and #2 can be seen as post-hocs to #3 and #4 respectively, #3 and #4 can be seen as post-hocs to #5, and #5 can be seen as a post-hoc to #6.
[With the additional complication that when these tests are done as post-hocs, they use some of the pooled estimates of the "parent" test in order to be more consistent with it; I am not sure, though, whether such procedures exist at the higher levels of the hierarchy.]
So you have a layered structure, and you can proceed in a top-down manner from the most general level (#6) down to the most specific tests (#1 and #2), running each next level only if you find a significant omnibus effect on the level above (apologies for the potential confusion: "higher" levels have higher numbers in my list and hence sit at the bottom of it; "top-down" means starting with the MANOVA in #6 and going down to the t-tests in #1 and #2). This should protect you from false positives on the lower levels, so you arguably (!) don't need multiple comparison adjustments there (though, as far as I understand, opinions on that differ).
You can also start directly at some middle layer and e.g. run 12 times #5 without doing #6, or 36 times #3 and 36 times #4 without doing #5. In confirmatory framework, you must then apply some multiple comparison correction (such as Bonferroni or rather Holm-Bonferroni). In exploratory framework this is not necessary, see above (example: maybe without adjustment you get $p=0.01$ effect in many different measures and it is very consistent; you are probably looking at a real effect then, but if you make Bonferroni adjustment then everything will stop being significant -- too bad. Instead, in exploratory framework you should rather keep $p=0.01$ as is and use your own expert judgment, but of course at your own risk).
By the way, if your Treatments work at all, you should expect significant effect of interaction in #6 and #5, so these are (hopefully!) almost guaranteed, and the interesting stuff begins at layers #3 and #4. If there is a real danger that both Treatments are as bad as placebo then perhaps you should really start with #6.
Another remark: a more "modern" approach would be to use a linear mixed model (with subjects being a random effect) instead of repeated measures ANOVA, but that's a whole other topic that I am not very familiar with. It would be great if somebody posted an answer here written from a mixed models perspective.
Statistical tests if you are not willing to assume normality
There are ranked analogues of most of these tests, but not all of them. The analogues are as follows:
1. Wilcoxon signed-rank test
2. Mann-Whitney-Wilcoxon test
3. Friedman test
4. Kruskal-Wallis test
5. ?? (probably does not exist)
6. ???? (most probably does not exist, but see here)
An additional complication is that post-hocs become tricky. The proper post-hoc to Kruskal-Wallis is not Mann-Whitney-Wilcoxon but Dunn's test [it takes into account the issue I mentioned in the square brackets above]. Similarly, the proper post-hoc to Friedman is not Wilcoxon; I am not sure whether one exists, but if it does, it is even more obscure than Dunn's.
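The four nonparametric analogues that do exist are all available in `scipy.stats`; a sketch on made-up data (sample sizes and effect sizes invented):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n = 30

# Fake data: same subjects at three times (within), three groups (between)
t1 = rng.normal(0.0, 1.0, n)
t2 = t1 + rng.normal(1.0, 0.5, n)
t3 = t1 + rng.normal(2.0, 0.5, n)
g1, g2, g3 = rng.normal(0, 1, n), rng.normal(0, 1, n), rng.normal(2, 1, n)

# 1. Wilcoxon signed-rank (paired, two times)
_, p_wilcoxon = stats.wilcoxon(t1, t2)
# 2. Mann-Whitney-Wilcoxon (unpaired, two groups)
_, p_mwu = stats.mannwhitneyu(g1, g3)
# 3. Friedman (paired, three or more times)
_, p_friedman = stats.friedmanchisquare(t1, t2, t3)
# 4. Kruskal-Wallis (unpaired, three or more groups)
_, p_kruskal = stats.kruskal(g1, g2, g3)
```

The proper post-hocs (Dunn's test after Kruskal-Wallis) are not in scipy itself; they live in add-on packages.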
Normality testing
It is in general a very bad idea to test for normality in order to decide whether you should use parametric or nonparametric tests. It will affect your $p$-values in an unpredictable way. At least in the confirmatory paradigm, you should decide on the test prior to looking at the data; if you have doubts about normality approximation, then rather don't use it. See here for more discussion: Choosing a statistical test based on the outcome of another (e.g. normality).
In your case, this means that you should use only parametric tests or only nonparametric tests for all measures (unless you have a priori grounds to suspect substantial deviations from normality in only a specific subset of measures; this does not seem to be the case).
In simple cases people often suggest to use ranked tests because they are powerful, simple, and you don't need to worry about the assumptions. But in your case, nonparametric tests will be a mess so you have a good argument in favour of classical ANOVAs. By the way, the histograms that you posted look "normal enough" to me that with your sample size you should not worry too much about them not being normal.
Data presentation
I would strongly advise relying on visualization, as opposed to only listing hundreds of $p$-values in the text or a table. With data like this, the first thing I would do (note: this is very exploratory!) would be to make a giant figure with 12 subplots, where each subplot corresponds to one measure and shows time on the x-axis (three measurements) and the groups as lines of different colors (with error bars).
Then just stare at this figure for a really long time and try to see whether it makes sense. Hopefully the effects will be consistent across measures, across time points, etc. I would make this the main figure of the paper.
If you like, you can then pepper this figure with the results of your statistical tests (mark significant differences with stars).
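A matplotlib skeleton for such a figure (all means and standard errors below are randomly generated placeholders; in a real analysis you would plug in the per-group, per-time summaries of each measure):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(0)
times = [1, 2, 3]
groups = ["Control", "Treatment 1", "Treatment 2"]

# 3 x 4 grid = 12 subplots, one per measure
fig, axes = plt.subplots(3, 4, figsize=(16, 10), sharex=True)
for k, ax in enumerate(axes.flat):
    for g in groups:
        means = rng.normal(0, 1, 3).cumsum()  # placeholder group means over time
        sems = np.full(3, 0.3)                # placeholder standard errors
        ax.errorbar(times, means, yerr=sems, marker="o", label=g)
    ax.set_title(f"Measure {k + 1}")
    ax.set_xticks(times)
axes[0, 0].legend(fontsize=8)
fig.tight_layout()
fig.savefig("all_measures.png", dpi=100)
```

Significance stars from the tests above can then be drawn onto the relevant subplots with `ax.annotate`.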
Brief answers to your specific questions
1. Yes (almost -- see the caveat about Wilcoxon as a post-hoc).
2. Yes.
3. Yes.
4. Use figures as much as you can.
Word of caution
We would like to know if Treatment 2 (Dietary Supplement 2) has the same effect (or even better) on body composition than Treatment 1, while not having those adverse effects on blood profiles.
To show that Treatment 2 does not have as many adverse effects as Treatment 1, it is not enough to show that there is a significant difference between T1 and Controls but no significant difference between T2 and Controls. This is a common mistake. You actually need to show a significant difference between T2 and T1.
Further reading:
Multiple Comparisons with Repeated Measures -- tutorial focused on SPSS but with a really good discussion.
|
Analyzing repeated measures experiment with multiple treatment groups and multiple measures
|
I think one could write a whole book dealing exclusively with your question (and I am definitely not qualified to write it). So without any attempt at providing a comprehensive answer, here are some p
|
Analyzing repeated measures experiment with multiple treatment groups and multiple measures
I think one could write a whole book dealing exclusively with your question (and I am definitely not qualified to write it). So without any attempt at providing a comprehensive answer, here are some points that can hopefully be helpful.
Confirmatory vs. exploratory approach to analysis
As you note yourself, you have a very rich dataset and you can test a lot of things. We can quickly compute the number of meaningful tests: you have $12$ measures; each was measured $3$ times in $3$ groups. So if we count all pairwise tests, it will be $3$ tests per group and $3$ tests per measurement time, i.e. $18$ tests per measure, i.e. $216$ tests. You are obviously aware of the lurking multiple comparisons problem (remember the green beans comic?), but if you are normally happy to use $\alpha=0.05$ and were to use e.g. Bonferroni adjustment then you would have to use $\alpha = 0.05/216\approx 0.002$ and to risk not finding any significant effects because you do not have enough power.
This is of course not a unique, but in fact a very common situation.
Broadly speaking, you can adopt one of the two approaches.
Confirmatory approach insists on strict adherence to the rules of significance testing. You should formulate your one or several (but as few as possible) research hypotheses in advance and carefully plan which statistical tests you are going to carry out. To mitigate the multiple comparisons / low power problem, you should try to design your tests such that you use as few tests as possible while having maximal power to detect what you really want to detect. For example, you might want to combine your measures into some composite or pooled measures that are likely to be most affected by Treatment 1 or 2. Or you can pool over measurement times. Etc. In any case, you try to boil down all your data to a couple of crucial comparisons, and then you do only those, applying Bonferroni (or similar) adjustment. It's important that all of that is planned before you have ever looked at the data (because after looking at the data you will be tempted to change your tests).
Alas, in practice, this is often hardly possible.
Exploratory approach, in contrast, is like biting the bullet: you have a lot of rich data, so why not explore all sorts of relationships that are present in there. You will do lots of comparisons and lots of tests, you will adjust your analysis strategy depending on what you see in the data, but whatever -- this is all exploratory. You cannot do that if you are doing a clinical trial, but in more basic research this if often the only way to go. All $p$-values that you get out of this approach should be taken with a (big) grain of salt, though. In fact, some would say that you should not run or report any significance tests at all, but usually tests are still done. There is a good argument not to use multiple comparisons adjustments (such as Bonferroni) at all, and rather treat all the $p$-values as indicating strength of evidence in the Fisherian sence (as opposed to leading to a yes/no decision in the Neyman-Pearson sence).
Statistical tests if you are willing to assume normality
Let's for the moment ignore the issue of normality (see below) and assume that everything is normal. You have the following battery of tests:
For each measure, within-group pairwise comparison between two measurement times is a paired t-test. It will test if the measurements differ between these two times.
For each measure, between-group pairwise comparison for one measurement time is an unpaired t-test. It will test if these two groups differ on this specific measurement.
For each measure, within-group comparison between all three different measurement times is a repeated measures ANOVA. It will test if measurement time has any effect at all.
For each measure, between-group comparison between for one fixed measurement time, is a one-way ANOVA. It will test if groups differ in any way between each other.
For each measure, comparison between all groups and all times is a two-way repeated measures ANOVA. It will test if there is a significant effect of group, significant effect of time, and significant interaction between them.
For all measures, comparison between all groups and all times is a two-way repeated measures MANOVA. It will test if there is a significant effect of group, significant effect of time, or significant interaction between them on all measures taken together.
Note that #1 and #2 can be seen as a post-hocs to #3 and #4 respectively, #3 and #4 can be seen as post-hocs to #5, and #5 can be seen as post-hoc to #6.
[With an additional complication then when these tests are done as post-hocs they use some of the pooled estimates of the "parent" test in order to be more consistent with it; I am not sure though if these procedures exist on the higher levels of the hierarchy.]
So you have a layered structure and you can proceed in the top-down manner from the most general (#6) level down to most specific (#1 and #2) tests and run each next level only if you have significant omnibus effect on the higher level (apologies for the potential confusion; "higher" levels have higher numbers in my list and hence are located on the bottom of it... "top-down" means starting with MANOVA in #6 and going until t-tests in #1 and #2). This should protect you from false positives on the lower level, and so you arguably (!) don't need to do multiple comparison adjustments on the lower level (but as far as I understand, opinions on that differ).
You can also start directly at some middle layer and e.g. run 12 times #5 without doing #6, or 36 times #3 and 36 times #4 without doing #5. In confirmatory framework, you must then apply some multiple comparison correction (such as Bonferroni or rather Holm-Bonferroni). In exploratory framework this is not necessary, see above (example: maybe without adjustment you get $p=0.01$ effect in many different measures and it is very consistent; you are probably looking at a real effect then, but if you make Bonferroni adjustment then everything will stop being significant -- too bad. Instead, in exploratory framework you should rather keep $p=0.01$ as is and use your own expert judgment, but of course at your own risk).
By the way, if your Treatments work at all, you should expect significant effect of interaction in #6 and #5, so these are (hopefully!) almost guaranteed, and the interesting stuff begins at layers #3 and #4. If there is a real danger that both Treatments are as bad as placebo then perhaps you should really start with #6.
Another remark: a more "modern" approach would be to use a linear mixed model (with subjects being a random effect) instead of repeated measures ANOVA, but that's a whole other topic that I am not very familiar with. It would be great if somebody posted an answer here written from a mixed models perspective.
Statistical tests if you are not willing to assume normality
There are ranked analogues of most of these tests, but not of all of them. The analogues are as follows:
#1. Wilcoxon test
#2. Mann-Whitney-Wilcoxon test
#3. Friedman test
#4. Kruskal-Wallis test
#5. ?? (probably does not exist)
#6. ???? (most probably does not exist, but see here)
Additional complication is that post-hocs become tricky. Proper post-hoc to Kruskal-Wallis is not Mann-Whitney-Wilcoxon but the Dunn's test [it takes into account the issue that I mentioned in the square brackets above]. Similarly, proper post-hoc to Friedman is not Wilcoxon; not sure if it exists but if it does it is even more obscure than Dunn's.
Normality testing
It is in general a very bad idea to test for normality in order to decide whether you should use parametric or nonparametric tests. It will affect your $p$-values in an unpredictable way. At least in the confirmatory paradigm, you should decide on the test prior to looking at the data; if you have doubts about normality approximation, then rather don't use it. See here for more discussion: Choosing a statistical test based on the outcome of another (e.g. normality).
In your case, this means that you should use only parametric tests or only nonparametric tests for all measures (unless you have a priori grounds to suspect substantial deviations from normality in only a specific subset of measures; this does not seem to be the case).
In simple cases people often suggest using ranked tests because they are powerful and simple, and you don't need to worry about the assumptions. But in your case nonparametric tests would be a mess, so you have a good argument in favour of classical ANOVAs. By the way, the histograms that you posted look "normal enough" to me that, with your sample size, you should not worry too much about them not being normal.
Data presentation
I would strongly advise relying on visualization as opposed to only listing hundreds of $p$-values in a text or a table. With data like that, the first thing I would do (note: this is very exploratory!) would be to make a giant figure with 12 subplots, where each subplot corresponds to one measure and shows time on the x-axis (three measurements) and groups as lines of different color (with error bars).
Then just stare at this figure for really long and try to see if it makes sense. Hopefully the effects will be consistent across measures, across time points, etc. I would make this figure the main figure of the paper.
If you like, you can then pepper this figure with the results of your statistical tests (mark significant differences with stars).
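A sketch of the kind of figure described above, with matplotlib and hypothetical data (all measure names, group sizes, and values below are invented for illustration):

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
measures = [f"Measure {i + 1}" for i in range(12)]  # placeholder names
groups = ["Control", "Treatment 1", "Treatment 2"]
times = [0, 1, 2]  # baseline and two follow-ups

# Hypothetical data: for each measure and group, a (subjects x times) array
data = {m: {g: rng.normal(size=(20, 3)) for g in groups} for m in measures}

fig, axes = plt.subplots(3, 4, figsize=(16, 10), sharex=True)
for ax, m in zip(axes.ravel(), measures):
    for g in groups:
        y = data[m][g]
        mean = y.mean(axis=0)
        sem = y.std(axis=0, ddof=1) / np.sqrt(y.shape[0])  # standard error bars
        ax.errorbar(times, mean, yerr=sem, label=g, capsize=3)
    ax.set_title(m)
axes[0, 0].legend()
fig.tight_layout()
fig.savefig("all_measures.png")
```

With real data you would replace the simulated arrays by the observed per-subject measurements.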
Brief answers to your specific questions
1. Yes (almost -- see the caveat about Wilcoxon as post-hoc)
2. Yes
3. Yes
4. Use figures as much as you can.
Word of caution
We would like to know if Treatment 2 (Dietary Supplement 2) has the same effect (or even better) on body composition than Treatment 1, while not having those adverse effects on blood profiles.
To show that Treatment 2 does not have as many adverse effects as Treatment 1, it is not enough to show that there is a significant difference between T1 and Controls but no significant difference between T2 and Controls. This is a common mistake. You actually need to show a significant difference between T2 and T1.
Further reading:
Multiple Comparisons with Repeated Measures -- tutorial focused on SPSS but with a really good discussion.
|
40,008
|
Analyzing repeated measures experiment with multiple treatment groups and multiple measures
|
This is a multi-layered methodological onion to be peeled. I will only be able to deal with the top layers, both because of lack of time and lack of knowledge. I will base this answer on the very clear statement of the goals of the analysis, in bold in the OP:
We would like to know if Treatment 2 (Dietary Supplement 2) has the
same effect (or even better) on body composition than Treatment 1,
while not having those adverse effects on blood profiles.
1) There is no need for the control group: you want to compare two groups, Treatment 1 and Treatment 2. This is good because you can (at least in principle) use two-group tests without multiple comparisons, instead of multiple-group tests plus post-hoc tests.
2) Let us assume that you have a single measure of body composition, say B. You want to show that T2 (treatment 2) is at least as good as T1 on the B measure.
A big problem here. All the tests you mentioned are tests to show that one group of measures is different from another, not that it is at least as good. Yes, you can use a standard two-group test (say a t-test; forget about Gaussian vs. non-Gaussian data for a while) and show that the B measures for T2 are significantly different from (and better than) those of T1. If you are lucky and get a significant difference, then you can show that T2 is better than T1 and thus at least as good. But if you are not lucky, what did you get? The fact that the p-value is high does not tell you that the two sets of measures are the same (and thus that T2 is at least as good as T1); it tells you that you don't have enough data to show that there is a difference!
So what you need for the B measure is a non-inferiority test (or an equivalence test). I will not get into it; there are many answers on CV about equivalence tests. But my point 1 above is important, because the non-inferiority tests I know (TOST, for example) only work with two groups!
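To make the TOST idea concrete, here is a rough sketch using a large-sample normal approximation (a real analysis would use the t distribution and a pre-registered equivalence margin; the data and margin below are simulated/invented):

```python
import math
import numpy as np

def tost_normal(x, y, margin):
    """TOST equivalence test for two group means (large-sample normal approximation).

    `margin` is the equivalence bound chosen a priori; the returned p-value is
    the larger of the two one-sided p-values, so a small value supports
    equivalence within +/- margin.
    """
    diff = np.mean(x) - np.mean(y)
    se = math.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    z_lower = (diff + margin) / se  # tests H0: diff <= -margin
    z_upper = (diff - margin) / se  # tests H0: diff >= +margin
    p_lower = 0.5 * math.erfc(z_lower / math.sqrt(2))   # P(Z >= z_lower)
    p_upper = 0.5 * math.erfc(-z_upper / math.sqrt(2))  # P(Z <= z_upper)
    return max(p_lower, p_upper)

rng = np.random.default_rng(1)
t1 = rng.normal(0.0, 1.0, 60)  # hypothetical T1 body-composition changes
t2 = rng.normal(0.1, 1.0, 60)  # hypothetical T2 changes, nearly the same
print(tost_normal(t1, t2, margin=0.5))  # small p-value supports equivalence
```

The hardest part in practice is not the computation but justifying the margin on substantive (clinical) grounds.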
3) Let us assume you have only one blood measure (C). You want to show that T2 is better than T1 on the C measure, and here standard tests (the ones that show a difference) are the appropriate tool. You show that the C measures for T1 and T2 are significantly different (and that T2 is better), and thus that T2 is significantly better on the blood profile.
4) Another problem is that you don't have a single measurement of B (and C) for each subject; you have 3 measurements at 3 different times.
I don't really know what to do with the 3 measurements per subject. Notice that this is not a within-subject factor that matters to the research question: the 3 measurements are on the same subject, but we are comparing the sets of subjects in T1 and T2, and the subjects in T1 and T2 are not paired or the same.
I guess that I would treat the 3 timed measurements as 3 independent measurements to get a better estimate of the true value of B (and C) for each subject. Thus I would just average the three timed measurements into a single one. (I understand one would lose the information on variability by averaging the data, but it is unclear to me how this information on the variability of the B measurements would be useful for the research question.)
5) The next problem is that there is no single B measure for body composition; there are many different measures, such as Body Weight, Body Mass Index, and Body Fat Mass, which are probably correlated. Let us call them Ba, Bb, Bc, and so on. (Notice that these are not the 3 measurements in time for each subject discussed above; they are different measures. I used "measurements" in the item above, and "measures" here.)
You can run the procedure described so far (up to item 3 above) for each body measure (average the 3 measurements per subject, then perform a non-inferiority test on the two sets of data) on each of the Ba, Bb, Bc measures, and report the results. The same for all the blood measures Ca, Cb, and so on. But then you are making a lot of comparisons and tests: in this example there would be 5 test results (Ba, Bb, Bc, Ca, and Cb). Therefore you should also use a multiple-comparison procedure to adjust the p-values! (This is very uncommon: people usually do not adjust p-values across different tests, only within a single multiple-group test, but they should.)
On the other hand, the measures Ba, Bb, and Bc are highly correlated, and thus the results of the tests are not independent; I don't know the best way to do the p-value adjustment here. (Notice that the Bonferroni correction remains valid under dependence but becomes very conservative when the tests are highly correlated, as they are here.)
I will stop the answer at this point. Hopefully more knowledgeable CV contributors will be able to provide better answers, especially to the last points above, which are at the limit of my knowledge.
|
40,009
|
Show that a scale mixtures of normals is a power exponential
|
The marginal distribution of $\beta$ associated with
$$\beta|\tau\sim\mathcal{N}(0,\tau)\quad\tau\sim\mathcal{E}(\lambda^2/2)$$
[with the convention that $\tau$ is the variance] has density
\begin{align*}\mathfrak{t}(\beta) &=\int_0^\infty \tau^{-1/2}\varphi(\beta/\sqrt{\tau})\frac{\lambda^2}{2}\exp\{-\lambda^2\tau/2\}\text{d}\tau\\
&=\frac{\lambda^2 e^{-\lambda|\beta|}}{2\sqrt{2\pi}}\int_0^\infty \tau^{-1/2}\exp\left\{-\frac{1}{2}\left(\frac{|\beta|}{\sqrt{\tau}}-\lambda\sqrt{\tau}\right)^2\right\}\text{d}\tau\\
\end{align*}
[where the $\lambda|\beta|$ appears by creating a perfect square in the exponential]. This suggests the change of variable $\nu=\sqrt{\tau}$ and leads to
$$\mathfrak{t}(\beta) = \frac{\lambda^2 e^{-\lambda|\beta|}}{2\sqrt{2\pi}}\int_0^\infty \exp\left\{-\frac{1}{2}\left(\frac{|\beta|}{\nu}-\lambda\nu\right)^2\right\}\text{d}\nu$$
[since $\tau^{-1/2}\text{d}\tau=2\text{d}\nu$]. This further suggests the change of variable $$\zeta=\frac{|\beta|}{\nu}-\lambda\nu$$ with its inverse
$$\nu=\left\{-\zeta+\sqrt{\zeta^2+4\lambda|\beta|} \right\}\big/2\lambda$$
[obtained by solving a second degree polynomial equation] and the Jacobian
$$\frac{\text{d}\nu}{\text{d}\zeta}=\left\{-1+\frac{\zeta}{\sqrt{\zeta^2+4\lambda|\beta|}} \right\}\big/2\lambda$$
which is always negative. Hence
\begin{align*}\mathfrak{t}(\beta)&=\frac{\lambda^2 e^{-\lambda|\beta|}}{4\lambda\sqrt{2\pi}}\int_{-\infty}^\infty \exp\left\{-\frac{\zeta^2}{2}\right\}\left\{1-\frac{\zeta}{\sqrt{\zeta^2+4\lambda|\beta|}} \right\}\text{d}\zeta\\
&=\frac{\lambda e^{-\lambda|\beta|}}{2}\int_{-\infty}^\infty \left\{1-\frac{\zeta}{\sqrt{\zeta^2+4\lambda|\beta|}} \right\}\varphi(\zeta)\text{d}\zeta\\
&=\frac{\lambda e^{-\lambda|\beta|}}{2}\left\{1-\int_{-\infty}^\infty\frac{\zeta\varphi(\zeta)}{\sqrt{\zeta^2+4\lambda|\beta|}}\text{d}\zeta\right\}=\frac{\lambda e^{-\lambda|\beta|}}{2}\end{align*}
[since the integrand is an odd function of $\zeta$ in the last integral]. This establishes [without complex calculus] that the marginal distribution of $\beta$ is indeed a Laplace or double-exponential distribution.
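This result is easy to check numerically; the following sketch (purely illustrative, not part of the proof) simulates the scale mixture and compares the first two moments with those of the Laplace density $\frac{\lambda}{2}e^{-\lambda|\beta|}$:

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
n = 200_000
tau = rng.exponential(scale=2.0 / lam**2, size=n)  # numpy uses scale = 1/rate
beta = rng.normal(0.0, np.sqrt(tau))               # beta | tau ~ N(0, tau)

# For Laplace(0, 1/lam): E|beta| = 1/lam and Var(beta) = 2/lam^2
print(np.mean(np.abs(beta)))  # should be close to 1/lam = 0.5
print(np.var(beta))           # should be close to 2/lam^2 = 0.5
```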
|
40,010
|
Show that a scale mixtures of normals is a power exponential
|
I've always found the direct integration in this case to be a complicated integral. The Moment Generating Function (MGF) approach works too.
The conditional MGF: with $\beta \mid \tau \sim N(0, \tau)$, we have
$M_{\beta|\tau}(t)=e^{\frac{\tau t^2}{2}}.$ Now, to get the MGF of $\beta$ marginally, take the expectation with respect to $\tau$.
$$\mathbb{E}(M_{\beta|\tau}(t)) = \int_0^\infty e^{\frac{\tau t^2}{2}} \frac{\lambda^2}{2}e^{-\tau \frac{\lambda^2}{2}}d\tau
=\int_0^\infty \frac{\lambda^2}{2} e^{-\tau \left(-\frac{t^2}{2} +\frac{\lambda^2}{2}\right)} d\tau
=\frac{\lambda^2/2}{\lambda^2/2 - t^2/2}
=\frac{1}{1 - \frac{t^2}{\lambda^2}},
$$
valid for $|t| < \lambda$ (for larger $|t|$ the integral diverges). Now you can recognize this last function as the MGF of a Laplace (double exponential) distribution with mean $0$ and scale $1/\lambda$.
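A quick Monte Carlo sanity check of this MGF (purely illustrative; the mixing rate $\lambda^2/2$ follows the setup above):

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 3.0
n = 500_000
tau = rng.exponential(scale=2.0 / lam**2, size=n)  # rate lam^2/2 <=> scale 2/lam^2
beta = rng.normal(0.0, np.sqrt(tau))               # beta | tau ~ N(0, tau)

for t in (0.5, 1.0):  # the MGF only exists for |t| < lam
    print(t, np.mean(np.exp(t * beta)), 1.0 / (1.0 - t**2 / lam**2))
```

The empirical averages should match the closed form $1/(1 - t^2/\lambda^2)$ up to Monte Carlo noise.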
|
40,011
|
Does a lower pvalue mean that test has higher power?
|
In general, the answer is NO. Suppose you have two different hypothesis tests $T$ and $T'$ for the same hypothesis testing problem $H_0$ versus $H_1$ on the same data. Supposedly, $T$ and $T'$ use different aspects of the data, for example original data versus ranks. To make a meaningful comparison, we must suppose that the two tests have the same significance level $\alpha$ (say 0.05). Or at least, that is the usual approach.
But often only p-values are reported, without any prior choice of significance level, and the p-value is interpreted as some measure of "strength of evidence". Whether that is valid, and what a good measure of strength of evidence would be (important: NOT strength of association or effect size!), is of course debated. If going that way, power is not a natural concept, since it depends on the (not chosen!) significance level. The idea, somehow, is that a p-value close to zero is strong evidence against the null hypothesis. That, at least, was Fisher's argument.
How can we now compare the hypothesis tests without the concept of power? We can look at the distribution of $P$ (the p-value). Under the null, for both tests, $P$ is uniformly distributed. We want a test that, under the alternative, tends to give small values of $P$. So now the two tests can be compared on the basis of the distribution of $P$ under the alternative hypothesis: we want the test which gives a $P$ that is "stochastically smaller" in some sense.
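The point about the distribution of $P$ can be illustrated by simulation; the sketch below uses a large-sample two-sample z-test (an illustrative choice, with invented sample sizes and effect size):

```python
import math
import numpy as np

def z_test_p(x, y):
    """Two-sided two-sample z-test p-value (large-sample normal approximation)."""
    se = math.sqrt(np.var(x, ddof=1) / len(x) + np.var(y, ddof=1) / len(y))
    z = (np.mean(x) - np.mean(y)) / se
    return math.erfc(abs(z) / math.sqrt(2))  # equals 2 * (1 - Phi(|z|))

rng = np.random.default_rng(0)
# 2000 replications of the experiment under H0 (no shift) and under H1 (shift 0.5)
p_null = [z_test_p(rng.normal(size=50), rng.normal(size=50)) for _ in range(2000)]
p_alt = [z_test_p(rng.normal(0.5, size=50), rng.normal(size=50)) for _ in range(2000)]

print(np.mean(p_null))  # roughly 0.5: P is approximately uniform under H0
print(np.mean(p_alt))   # much smaller: P is stochastically smaller under H1
```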
For (much more) about this approach, see https://www.bookdepository.com/Confidence-Likelihood-Probability-Tore-Schweder/9780521861601
|
40,012
|
Does a lower pvalue mean that test has higher power?
|
The burden, SKAT, and SKAT-O tests represent 3 ways to pool information from low-frequency genetic variants so that relations of genomic loci to a biologic characteristic (phenotype) can be assessed. Burden tests assume that all low-frequency variants at a locus have the same relation to phenotype (unidirectional), so all variants are pooled to obtain a single regression coefficient for the locus. The SKAT test instead treats variants as random effects, assuming a zero net effect among the variants and evaluating the magnitude of the variance of the phenotypic effects among genetic variants.
The SKAT-O is effectively a weighted combination of burden and SKAT tests, with the appropriate weight between burden (unidirectional) and SKAT (mean-zero) models determined from the data. It thus would be expected to perform better than burden tests or SKAT tests if there is a tendency toward one direction of phenotypic effect. In the linked paper describing SKAT-O, the authors did empirical power testing based on simulations and then examined a published data set with all these methods. They estimated the relative performance on the published data set by comparing p-values, presumably a part of the basis for this question.
In the context of that paper, that use of p-values to evaluate some closely related tests on the same data set makes sense. In general, however, general statements about relations of p-values to power can be misleading, as @kjetil b halvorsen notes in another answer here.
If you are considering analysis of your own data with these methods, consider your knowledge of the genomic loci first. Do not run all 3 tests and simply choose the one that provides the lowest p-value. If you don't have prior knowledge about the nature or effects of genomic variants at your loci of interest, the SKAT-O test would seem to be preferable as it will choose the best weight between the burden and SKAT models from your data. That will use up one extra degree of freedom (maybe 2) for statistical tests, but with a large number of variants that should not make much practical difference in terms of power.
|
40,013
|
Am I understanding differences between Bayesian and frequentist inference correctly?
|
This question is too broad, but I thought I would respond to a few points where your statements aren't accurate.
Bayesians (typically) believe there is a fixed value for the parameters, but use a probability distribution to represent their uncertainty about what the true value is.
A Bayesian is typically interested in the full posterior rather than a point or interval estimate of a particular parameter (although for simplicity in reporting results point or interval estimates are typically provided).
A frequentist would not use a normal approximation for hypothesis testing with a point null in a binomial experiment.
Even if a frequentist "rejects a null hypothesis" that does not mean they choose the alternative.
Bayesians will choose between hypotheses if forced to, but typically we would prefer model averaging.
In a regression problem many frequentists use penalized likelihood methods, e.g. lasso, ridge regression, elastic net, etc. and therefore would not be using the MLE or OLS estimators.
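As an aside on the last point, ridge regression has a simple closed form, which the following sketch implements (simulated data and an invented penalty value, for illustration only):

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimator (X'X + lam*I)^{-1} X'y; lam = 0 recovers OLS."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
beta_true = np.array([2.0, 0.0, -1.0, 0.0, 0.5])
y = X @ beta_true + rng.normal(size=100)

ols = ridge(X, y, 0.0)      # least-squares fit
shrunk = ridge(X, y, 50.0)  # penalized fit: coefficients pulled toward zero
print(np.linalg.norm(ols), np.linalg.norm(shrunk))
```

The penalty deliberately biases the estimates toward zero in exchange for lower variance, which is exactly why the result is no longer the MLE/OLS estimator.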
|
Am I understanding differences between Bayesian and frequentist inference correctly?
|
This questions is too broad, but I thought I would respond to a few points where your statements aren't accurate.
Bayesians (typically) believe there is a fixed value for the parameters, but use a p
|
Am I understanding differences between Bayesian and frequentist inference correctly?
This questions is too broad, but I thought I would respond to a few points where your statements aren't accurate.
Bayesians (typically) believe there is a fixed value for the parameters, but use a probability distribution to represent their uncertainty about what the true value is.
A Bayesian is typically interested in the full posterior rather than a point or interval estimate of a particular parameter (although for simplicity in reporting results point or interval estimates are typically provided).
A frequentist would not use a normal approximation for hypothesis testing with a point null in a binomial experiment.
Even if a frequentist "rejects a null hypothesis" that does not mean they choose the alternative.
Bayesians will choose between hypotheses if forced to, but typically we would prefer model averaging.
In a regression problem many frequentists use penalized likelihood methods, e.g. lasso, ridge regression, elastic net, etc. and therefore would not be using the MLE or OLS estimators.
|
Am I understanding differences between Bayesian and frequentist inference correctly?
This question is too broad, but I thought I would respond to a few points where your statements aren't accurate.
Bayesians (typically) believe there is a fixed value for the parameters, but use a p
|
40,014
|
Am I understanding differences between Bayesian and frequentist inference correctly?
|
A Bayesian would consider the results of the experiments fixed and the population parameters as random variables. This is in contrast to frequentists, who see the data as "just another sample in an endless stream of samples" and who see the population parameters as fixed (but unknown).
The logical Bayesian order would be:
1. define the prior distribution
2. collect data
3. use that data to update your prior distribution. After updating it is called the posterior distribution.
Mind you that a confidence interval is really different from a credible interval. A confidence interval relates to the sampling procedure. If you would take many samples and calculate a 95% confidence interval for each sample, you'd find that 95% of those intervals contain the population mean.
This is useful to, for instance, industrial quality departments. Those guys take many samples, and now they have the confidence that most of their estimates will be pretty close to reality. They know that 95% of their estimates are close, but they can't say that about one specific estimate.
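The coverage property described here is easy to check by simulation. Below is a minimal Python sketch (not part of the original answer; the true mean, sd, and sample size are arbitrary illustrative choices, and a normal critical value is used for the interval):

```python
import random
import statistics

random.seed(0)
mu, sigma, n, reps = 10.0, 2.0, 30, 2000
z = 1.96  # normal critical value for a 95% interval

covered = 0
for _ in range(reps):
    sample = [random.gauss(mu, sigma) for _ in range(n)]
    xbar = statistics.mean(sample)
    se = statistics.stdev(sample) / n ** 0.5
    # does this particular interval contain the true mean?
    if xbar - z * se <= mu <= xbar + z * se:
        covered += 1

print(covered / reps)  # roughly 0.95: about 95% of the intervals contain mu
```

Each individual interval either contains mu or it doesn't; only the long-run fraction is about 95%.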
Compare this to rolling dice: if you roll 600 (fair) dice, your best guess is that 1/6 of them, that is 100 dice, will show a six. But if someone has rolled 1 die and asks you:
- "What is the probability that this throw was a 6 ?",
- the answer "Well, that is 1/6 or 16.6%" is wrong.
The die shows either a 6, or some other figure. So the probability is 1, or 0.
When asked before the throw what the probability of throwing a 6 is, a Bayesian would say "1/6" (based on prior information: everybody knows that a die has 6 sides), but a frequentist would say "no idea", because frequentism is based solely on the data, not on priors.
Likewise, if you have only 1 sample (thus 1 confidence interval), you have no way to say how likely it is that the population mean is in that interval. It is either in it, or not. The probability is either 1, or 0.
If a frequentist rejects H0, this means that P(data|H0) is smaller than some threshold. He says "It is very unlikely to find these sort of data if H0 were true, therefore I assume that H0 is not true, thus H1 must be true". Therefore, in this framework, H0 and H1 must be mutually exclusive and cover all possibilities.
As far as I understand, some frequentist say that if H0 is rejected, this does not imply that H1 is formally accepted; others say that rejecting the one equals accepting the other.
Hypothesis testing in a Bayesian framework is slightly different. The method is to see how well the data are predicted by hypothesis A, or B, or C (no need to limit this to 2 hypotheses). The researcher could say: "Hypothesis A explains the data 3 times better than hypothesis B and 50 times better than hypothesis C".
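That "explains the data N times better" style of comparison can be sketched with likelihood ratios. The following Python example uses made-up binomial data and three hypothetical fixed-probability hypotheses (none of these numbers come from the original answer):

```python
from math import comb

k, n = 7, 10  # observed: 7 successes in 10 trials (made-up data)

def likelihood(p):
    # probability of the observed data under "success probability is p"
    return comb(n, k) * p**k * (1 - p)**(n - k)

L_A, L_B, L_C = likelihood(0.7), likelihood(0.5), likelihood(0.3)
print(L_A / L_B)  # ~2.3: hypothesis A explains the data about 2.3x better than B
print(L_A / L_C)  # ~29.6: and roughly 30x better than C
```

With priors on the hypotheses these ratios would become posterior odds, but the relative-explanatory-power reading is the same.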
|
Am I understanding differences between Bayesian and frequentist inference correctly?
|
A Bayesian would consider the results of the experiments fixed and the population parameters as random variables. This is in contrast to frequentists, who see the data as "just another sample in an endless
|
Am I understanding differences between Bayesian and frequentist inference correctly?
A Bayesian would consider the results of the experiments fixed and the population parameters as random variables. This is in contrast to frequentists, who see the data as "just another sample in an endless stream of samples" and who see the population parameters as fixed (but unknown).
The logical Bayesian order would be:
1. define the prior distribution
2. collect data
3. use that data to update your prior distribution. After updating it is called the posterior distribution.
Mind you that a confidence interval is really different from a credible interval. A confidence interval relates to the sampling procedure. If you would take many samples and calculate a 95% confidence interval for each sample, you'd find that 95% of those intervals contain the population mean.
This is useful to, for instance, industrial quality departments. Those guys take many samples, and now they have the confidence that most of their estimates will be pretty close to reality. They know that 95% of their estimates are close, but they can't say that about one specific estimate.
Compare this to rolling dice: if you roll 600 (fair) dice, your best guess is that 1/6 of them, that is 100 dice, will show a six. But if someone has rolled 1 die and asks you:
- "What is the probability that this throw was a 6 ?",
- the answer "Well, that is 1/6 or 16.6%" is wrong.
The die shows either a 6, or some other figure. So the probability is 1, or 0.
When asked before the throw what the probability of throwing a 6 is, a Bayesian would say "1/6" (based on prior information: everybody knows that a die has 6 sides), but a frequentist would say "no idea", because frequentism is based solely on the data, not on priors.
Likewise, if you have only 1 sample (thus 1 confidence interval), you have no way to say how likely it is that the population mean is in that interval. It is either in it, or not. The probability is either 1, or 0.
If a frequentist rejects H0, this means that P(data|H0) is smaller than some threshold. He says "It is very unlikely to find these sort of data if H0 were true, therefore I assume that H0 is not true, thus H1 must be true". Therefore, in this framework, H0 and H1 must be mutually exclusive and cover all possibilities.
As far as I understand, some frequentist say that if H0 is rejected, this does not imply that H1 is formally accepted; others say that rejecting the one equals accepting the other.
Hypothesis testing in a Bayesian framework is slightly different. The method is to see how well the data are predicted by hypothesis A, or B, or C (no need to limit this to 2 hypotheses). The researcher could say: "Hypothesis A explains the data 3 times better than hypothesis B and 50 times better than hypothesis C".
|
Am I understanding differences between Bayesian and frequentist inference correctly?
A Bayesian would consider the results of the experiments fixed and the population parameters as random variables. This is in contrast to frequentists, who see the data as "just another sample in an endless
|
40,015
|
Variance and covariance in the context of deterministic variables
|
All five questions have "yes" answers--but we have to be careful about what they mean.
"Variance of a deterministic variable."
Let's understand a "deterministic variable" to be a univariate dataset. It's just a bunch of values $X=x_1, x_2, \ldots, x_n$, with no probability model. By definition its variance is
$$\text{Var}(X) = \frac{1}{n}\sum_{i=1}^n \left(x_i - \bar X\right)^2$$
where $$\bar X = \frac{1}{n}\sum_{i=1}^n x_i$$ is its mean. There is no justification whatsoever to use $n-1$ instead of $n$ in any of these fractions--and this is never legitimately done--because no estimates are being made.
We may always think of $X$ as defining a "population." This is the definition of a population variance.
"Covariance between a deterministic variable and a stochastic variable."
One way to understand this is to assume it refers to a sequence of the form $(x_1, Y_1), (x_2,Y_2), \ldots, (x_n,Y_n)$ where the $x_i$ are numbers and the $Y_i$ are random variables. Then we may define the random variable $$\bar Y = \frac{1}{n}\sum_{i=1}^n Y_i,$$ via which the covariance of $x$ and $Y$ is defined as
$$\text{Cov}(x,Y) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar x)(Y_i - \bar Y).$$
It is a linear combination of the $Y_i$ and consequently is itself a random variable. This notation is frequently used as a shorthand in linear regression calculations.
"Covariance between two deterministic variables."
"Two deterministic variables" can be considered a dataset of ordered pairs $(x_1, y_1), (x_2,y_2), \ldots, (x_n,y_n)$. The covariance can be defined exactly as in (2) and interpreted similarly. In fact, this is a direct consequence of (1): after all, covariances are variances.
"Are these concepts well defined in samples?"
Because they are well-defined for any dataset, they are well-defined for a sample. Note that similar expressions with $n-1$ in the (outer) denominator are estimators: they are not the sample variance or sample covariance.
"Are these concepts well defined in populations?"
Because they are well-defined for any dataset, and a population can be considered a dataset (when fully enumerated), they are well-defined for a population.
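As a quick numerical check of definitions (1)-(3), here is a small Python sketch (illustrative data, not from the original answer) computing the dataset variance and covariance with $n$ — not $n-1$ — in the denominator, and confirming that a covariance of a variable with itself is its variance:

```python
def mean(xs):
    return sum(xs) / len(xs)

def var(xs):
    # dataset/population variance: divide by n, not n - 1 (nothing is estimated)
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def cov(xs, ys):
    mx, my = mean(xs), mean(ys)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)

x = [1.0, 2.0, 3.0, 4.0]
y = [2.0, 4.0, 6.0, 8.0]
print(var(x))     # 1.25
print(cov(x, y))  # 2.5, and cov(x, x) == var(x)
```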
|
Variance and covariance in the context of deterministic variables
|
All five questions have "yes" answers--but we have to be careful about what they mean.
"Variance of a deterministic variable."
Let's understand a "deterministic variable" to be a univariate dataset.
|
Variance and covariance in the context of deterministic variables
All five questions have "yes" answers--but we have to be careful about what they mean.
"Variance of a deterministic variable."
Let's understand a "deterministic variable" to be a univariate dataset. It's just a bunch of values $X=x_1, x_2, \ldots, x_n$, with no probability model. By definition its variance is
$$\text{Var}(X) = \frac{1}{n}\sum_{i=1}^n \left(x_i - \bar X\right)^2$$
where $$\bar X = \frac{1}{n}\sum_{i=1}^n x_i$$ is its mean. There is no justification whatsoever to use $n-1$ instead of $n$ in any of these fractions--and this is never legitimately done--because no estimates are being made.
We may always think of $X$ as defining a "population." This is the definition of a population variance.
"Covariance between a deterministic variable and a stochastic variable."
One way to understand this is to assume it refers to a sequence of the form $(x_1, Y_1), (x_2,Y_2), \ldots, (x_n,Y_n)$ where the $x_i$ are numbers and the $Y_i$ are random variables. Then we may define the random variable $$\bar Y = \frac{1}{n}\sum_{i=1}^n Y_i,$$ via which the covariance of $x$ and $Y$ is defined as
$$\text{Cov}(x,Y) = \frac{1}{n}\sum_{i=1}^n (x_i - \bar x)(Y_i - \bar Y).$$
It is a linear combination of the $Y_i$ and consequently is itself a random variable. This notation is frequently used as a shorthand in linear regression calculations.
"Covariance between two deterministic variables."
"Two deterministic variables" can be considered a dataset of ordered pairs $(x_1, y_1), (x_2,y_2), \ldots, (x_n,y_n)$. The covariance can be defined exactly as in (2) and interpreted similarly. In fact, this is a direct consequence of (1): after all, covariances are variances.
"Are these concepts well defined in samples?"
Because they are well-defined for any dataset, they are well-defined for a sample. Note that similar expressions with $n-1$ in the (outer) denominator are estimators: they are not the sample variance or sample covariance.
"Are these concepts well defined in populations?"
Because they are well-defined for any dataset, and a population can be considered a dataset (when fully enumerated), they are well-defined for a population.
|
Variance and covariance in the context of deterministic variables
All five questions have "yes" answers--but we have to be careful about what they mean.
"Variance of a deterministic variable."
Let's understand a "deterministic variable" to be a univariate dataset.
|
40,016
|
Variance and covariance in the context of deterministic variables
|
The simple answer to your first three questions is no: it makes no sense in general to talk about variance or covariance involving a deterministic variable.
However, if we begin with a deterministic variable but then use some method of imposing a probability distribution on it, it then becomes a random variable, and the concept of variance then makes sense. For example, any deterministic variable can become a random variable simply by imposing a degenerate (i.e., constant) distribution on it; in this case, the variance (along with its covariance with any other random variable) becomes 0.
A more interesting way of imposing a distribution on a deterministic variable is to use the empirical distribution, based on an observed sample. That is, if you observe $x_1,\dots,x_n$ in a sample, then we can define a discrete probability distribution on $x$ by $$P(x=x_0)=\frac1n \cdot\text{the number of $i$ such that $x_i=x_0$}$$
for all $x_0$. For example, in the case where $x_1,\dots,x_n$ are all distinct, we get $P(x=x_i)=\frac1n$ for each $i=1,\dots,n$. If we use this probability distribution on $x$, then the mean of $x$ becomes simply the sample mean $\overline x=\frac1n\sum_{i=1}^n x_i$, and the variance of $x$ becomes $\hat\sigma^2=\frac1n\sum_{i=1}^n (x_i-\overline x)^2$.
This same idea can be applied in the situation where we have observed vectors $(x_1,y_1),\dots,(x_n,y_n)$. We can define a joint probability distribution on $(x,y)$ again by using the empirical distribution, and the covariance between $x$ and $y$ then becomes $$\text{Cov}(x,y)=\frac1n\sum_{i=1}^n (x_i-\overline x)(y_i-\overline y)$$
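A small Python sketch (with illustrative data, not from the original answer) confirming that the mean and variance under the empirical distribution reduce to the $\frac1n$ formulas above, including the case of repeated values:

```python
from collections import Counter

x = [2.0, 2.0, 5.0, 7.0]
n = len(x)

# empirical distribution: P(x = v) = count(v) / n
pmf = {v: c / n for v, c in Counter(x).items()}

mean_emp = sum(v * p for v, p in pmf.items())
var_emp = sum((v - mean_emp) ** 2 * p for v, p in pmf.items())

xbar = sum(x) / n
print(mean_emp == xbar)  # True: empirical mean is the sample mean
print(var_emp)           # 4.5, matching (1/n) * sum((x_i - xbar)^2)
```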
|
Variance and covariance in the context of deterministic variables
|
The simple answer to your first three questions is no: it makes no sense in general to talk about variance or covariance involving a deterministic variable.
However, if we begin with a deterministic v
|
Variance and covariance in the context of deterministic variables
The simple answer to your first three questions is no: it makes no sense in general to talk about variance or covariance involving a deterministic variable.
However, if we begin with a deterministic variable but then use some method of imposing a probability distribution on it, it then becomes a random variable, and the concept of variance then makes sense. For example, any deterministic variable can become a random variable simply by imposing a degenerate (i.e., constant) distribution on it; in this case, the variance (along with its covariance with any other random variable) becomes 0.
A more interesting way of imposing a distribution on a deterministic variable is to use the empirical distribution, based on an observed sample. That is, if you observe $x_1,\dots,x_n$ in a sample, then we can define a discrete probability distribution on $x$ by $$P(x=x_0)=\frac1n \cdot\text{the number of $i$ such that $x_i=x_0$}$$
for all $x_0$. For example, in the case where $x_1,\dots,x_n$ are all distinct, we get $P(x=x_i)=\frac1n$ for each $i=1,\dots,n$. If we use this probability distribution on $x$, then the mean of $x$ becomes simply the sample mean $\overline x=\frac1n\sum_{i=1}^n x_i$, and the variance of $x$ becomes $\hat\sigma^2=\frac1n\sum_{i=1}^n (x_i-\overline x)^2$.
This same idea can be applied in the situation where we have observed vectors $(x_1,y_1),\dots,(x_n,y_n)$. We can define a joint probability distribution on $(x,y)$ again by using the empirical distribution, and the covariance between $x$ and $y$ then becomes $$\text{Cov}(x,y)=\frac1n\sum_{i=1}^n (x_i-\overline x)(y_i-\overline y)$$
|
Variance and covariance in the context of deterministic variables
The simple answer to your first three questions is no: it makes no sense in general to talk about variance or covariance involving a deterministic variable.
However, if we begin with a deterministic v
|
40,017
|
Understanding 'average slope' regression
|
Assuming:
$x_i$ and $x_j$ must always be distinct (for simplicity; otherwise the calculations require us to keep conditioning on the distinct pairs and I'd rather leave that out for now)
$y=\beta_0+\beta_1 x+\varepsilon$, with $E(\varepsilon)=0$
$x$'s fixed not random
then the pairwise slopes have expectation $\beta_1$, since:
$E(y_j-y_i) = E[\beta_0+\beta_1 x_j+\varepsilon_j - (\beta_0+\beta_1 x_i+\varepsilon_i)]$
$\qquad = \beta_1(x_j-x_i)$
And consequently so will an average of these individual pairwise estimates. So it's unbiased.
This isn't surprising, since OLS is actually a weighted average of those pairwise slopes.
e.g. see Sanford Weisberg's Applied Linear Regression, 4E sec 2.11.2
Also see Gelman's blog here.
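That weighted-average identity can be verified numerically. The Python sketch below (simulated data; the true slope of 3 and noise level are arbitrary illustrative choices) shows that weighting the pairwise slopes by squared x-distance reproduces the OLS slope exactly, while the unweighted mean generally differs:

```python
from itertools import combinations
import random

random.seed(1)
x = [1.0, 2.0, 4.0, 7.0, 11.0]
y = [2 + 3 * xi + random.gauss(0, 1) for xi in x]  # true slope 3, for illustration

pairs = list(combinations(range(len(x)), 2))
slopes = [(y[j] - y[i]) / (x[j] - x[i]) for i, j in pairs]

# unweighted mean of pairwise slopes (the estimator discussed in the question)
b_mean = sum(slopes) / len(slopes)

# OLS = pairwise slopes weighted by squared x-distance
w = [(x[j] - x[i]) ** 2 for i, j in pairs]
b_weighted = sum(wi * bi for wi, bi in zip(w, slopes)) / sum(w)

# ordinary OLS slope for comparison
xbar, ybar = sum(x) / len(x), sum(y) / len(y)
b_ols = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
        sum((xi - xbar) ** 2 for xi in x)

print(abs(b_weighted - b_ols))  # zero up to floating-point error
```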
We could look at variance (there's several ways to approach this); I'll outline a simple-minded approach but I haven't time to carry it all through right now.
For this we further assume the $\varepsilon$'s are independent with constant variance $\sigma^2$.
$\text{Var}(y_j-y_i) =\text{Var}(\varepsilon_j)+\text{Var}(\varepsilon_i)=2\sigma^2$, so
$\text{Var}b_{ij}=\text{Var}\frac{y_j-y_i}{x_j-x_i} =2\frac{\sigma^2}{(x_j-x_i)^2}$
OLS weights each slope $b_{ij}$ by $\frac{(x_j-x_i)^2}{2nS_{xx}}$ -- which is to say, observations that are further apart get more weight (as they should, since the variance of their slope estimate is smaller).
Since your estimator doesn't weight its average, the variance will be larger, but given that the point was to arrive at a conceptually "simple" estimator, we shouldn't quibble too much about efficiency.
The next step would be to try to compute the variance of the overall estimator of slope. Here we could just rely on basic properties of variance.
$\text{Var}(\sum_{ij} b_{ij}) = \sum_{ij} \text{Var}(b_{ij}) + \sum_{(i,j)\neq (k,l)}\text{Cov}(b_{ij},b_{kl})$
However, only those terms where $i$ or $j$ occurs twice will have nonzero covariance. I believe there are three cases to worry about: $k=i, k=j, l=j$ (other coincident pairings being covered by doubling in the usual fashion)
As a more concrete aside, consider a case with 4 points $(A,B,C,D)$ -- there are 15 (i.e. $ \binom{\binom {4} {2}}{2}$) pairs of slopes (themselves indexed as pairs) of which these pairs count for covariance:
AB AC
AB AD
AC AD
BC BD
AB BC
AB BD
AC CD
BC CD
AC BC
AD BD
AD CD
BD CD
and these don't count for covariance:
AB CD
AC BD
AD BC
(That's what I'm attempting to outline the general case of...)
$\qquad = 2\sigma^2\sum_{ij} \frac{1}{(x_j-x_i)^2} + 2\sum_{i<j<l}\text{Cov}(b_{ij},b_{il})+2\sum_{i<j<l}\text{Cov}(b_{ij},b_{jl})$
$\qquad\qquad+2\sum_{i<k<j}\text{Cov}(b_{ij},b_{kj})$
(I hope I have that right!)
from there it's just a matter of plugging through the usual basic linearity properties for covariances (making use of the fact that the x's are constant, and the epsilons are independent except when they're the same, when things reduce to a variance). Nothing is especially onerous.
I may come back and try to finish that when I get a chance, but it may well be easier to see if one can write this in the form $\hat{\beta}=Ay$ and derive the variance that way (I was attempting to avoid matrix calculations to remain in the spirit of 'derp regression' but the matrix approach might save effort). actually since we only have one beta coefficient, $A$ would be a vector of partials, $A=a'=\frac{\partial \hat{\beta_1}}{\partial y_k}$, which would be convenient to write in the form $q1'W$ where $q$ is a scaling factor, $1$ is a vector of ones and $W$ is a matrix of "weight" coefficients of the form $W_{ij}=\frac{1}{x_j-x_i}$ (for $i\neq j$, and $0$ otherwise).
The efficiency of this relative to OLS depends on the pattern of $x$'s. If the $x$'s have very small "kurtosis" (all the x's are at the very ends of the range of $x$ so all or almost all non-zero x-differences are large) then it will be highly efficient. If there's an abundance of small x-differences, then it will be much less efficient.
It occurs to me that papers on Theil regression already explore some of those notions (with similar conclusions relevant to efficiency).
There's also a connection to Theil regression here. For its slope estimate, it uses the median of pairwise slopes across all pairs (with distinct x). It also corresponds to making Kendall's tau between residuals and the x-variable equal to zero.
Here's some very simple-minded code for producing a median of pairwise slopes in R:
theilb = median(outer(y,y,"-")/outer(x,x,"-"), na.rm=TRUE)
of course substantially more efficient calculations can be arranged (this takes twice as long as even a sensible $O(n^2)$ calculation, and $O(n \log n)$ calculations are possible, though I can't say I've ever attempted to write them -- the papers on efficient calculations of this seem pretty heavy going).
Here's a comparison of the two on a small data set (your slope in red, Theil in green). The main difference in this particular example is in the estimate of intercept (I just used mean and median residual respectively).
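For readers without R, here is a rough Python equivalent of the Theil slope (illustrative data with a deliberate outlier — none of it from the original answer), contrasting the median of pairwise slopes with their unweighted mean:

```python
from itertools import combinations
import statistics

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [2.1, 4.0, 6.2, 7.9, 10.1, 30.0]  # last point is an outlier

# all pairwise slopes over pairs with distinct x
slopes = [(y[j] - y[i]) / (x[j] - x[i])
          for i, j in combinations(range(len(x)), 2) if x[i] != x[j]]

b_theil = statistics.median(slopes)  # robust: the Theil slope
b_mean = statistics.mean(slopes)     # pulled toward the outlier

print(b_theil)  # about 2.05 despite the outlier
print(b_mean)   # noticeably larger, dragged up by the outlier
```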
|
Understanding 'average slope' regression
|
Assuming:
$x_i$ and $x_j$ must always be distinct (for simplicity; otherwise the calculations require us to keep conditioning on the distinct pairs and I'd rather leave that out for now)
$y=\beta_0+
|
Understanding 'average slope' regression
Assuming:
$x_i$ and $x_j$ must always be distinct (for simplicity; otherwise the calculations require us to keep conditioning on the distinct pairs and I'd rather leave that out for now)
$y=\beta_0+\beta_1 x+\varepsilon$, with $E(\varepsilon)=0$
$x$'s fixed not random
then the pairwise slopes have expectation $\beta_1$, since:
$E(y_j-y_i) = E[\beta_0+\beta_1 x_j+\varepsilon_j - (\beta_0+\beta_1 x_i+\varepsilon_i)]$
$\qquad = \beta_1(x_j-x_i)$
And consequently so will an average of these individual pairwise estimates. So it's unbiased.
This isn't surprising, since OLS is actually a weighted average of those pairwise slopes.
e.g. see Sanford Weisberg's Applied Linear Regression, 4E sec 2.11.2
Also see Gelman's blog here.
We could look at variance (there's several ways to approach this); I'll outline a simple-minded approach but I haven't time to carry it all through right now.
For this we further assume the $\varepsilon$'s are independent with constant variance $\sigma^2$.
$\text{Var}(y_j-y_i) =\text{Var}(\varepsilon_j)+\text{Var}(\varepsilon_i)=2\sigma^2$, so
$\text{Var}b_{ij}=\text{Var}\frac{y_j-y_i}{x_j-x_i} =2\frac{\sigma^2}{(x_j-x_i)^2}$
OLS weights each slope $b_{ij}$ by $\frac{(x_j-x_i)^2}{2nS_{xx}}$ -- which is to say, observations that are further apart get more weight (as they should, since the variance of their slope estimate is smaller).
Since your estimator doesn't weight its average, the variance will be larger, but given that the point was to arrive at a conceptually "simple" estimator, we shouldn't quibble too much about efficiency.
The next step would be to try to compute the variance of the overall estimator of slope. Here we could just rely on basic properties of variance.
$\text{Var}(\sum_{ij} b_{ij}) = \sum_{ij} \text{Var}(b_{ij}) + \sum_{(i,j)\neq (k,l)}\text{Cov}(b_{ij},b_{kl})$
However, only those terms where $i$ or $j$ occurs twice will have nonzero covariance. I believe there are three cases to worry about: $k=i, k=j, l=j$ (other coincident pairings being covered by doubling in the usual fashion)
As a more concrete aside, consider a case with 4 points $(A,B,C,D)$ -- there are 15 (i.e. $ \binom{\binom {4} {2}}{2}$) pairs of slopes (themselves indexed as pairs) of which these pairs count for covariance:
AB AC
AB AD
AC AD
BC BD
AB BC
AB BD
AC CD
BC CD
AC BC
AD BD
AD CD
BD CD
and these don't count for covariance:
AB CD
AC BD
AD BC
(That's what I'm attempting to outline the general case of...)
$\qquad = 2\sigma^2\sum_{ij} \frac{1}{(x_j-x_i)^2} + 2\sum_{i<j<l}\text{Cov}(b_{ij},b_{il})+2\sum_{i<j<l}\text{Cov}(b_{ij},b_{jl})$
$\qquad\qquad+2\sum_{i<k<j}\text{Cov}(b_{ij},b_{kj})$
(I hope I have that right!)
from there it's just a matter of plugging through the usual basic linearity properties for covariances (making use of the fact that the x's are constant, and the epsilons are independent except when they're the same, when things reduce to a variance). Nothing is especially onerous.
I may come back and try to finish that when I get a chance, but it may well be easier to see if one can write this in the form $\hat{\beta}=Ay$ and derive the variance that way (I was attempting to avoid matrix calculations to remain in the spirit of 'derp regression' but the matrix approach might save effort). actually since we only have one beta coefficient, $A$ would be a vector of partials, $A=a'=\frac{\partial \hat{\beta_1}}{\partial y_k}$, which would be convenient to write in the form $q1'W$ where $q$ is a scaling factor, $1$ is a vector of ones and $W$ is a matrix of "weight" coefficients of the form $W_{ij}=\frac{1}{x_j-x_i}$ (for $i\neq j$, and $0$ otherwise).
The efficiency of this relative to OLS depends on the pattern of $x$'s. If the $x$'s have very small "kurtosis" (all the x's are at the very ends of the range of $x$ so all or almost all non-zero x-differences are large) then it will be highly efficient. If there's an abundance of small x-differences, then it will be much less efficient.
It occurs to me that papers on Theil regression already explore some of those notions (with similar conclusions relevant to efficiency).
There's also a connection to Theil regression here. For its slope estimate, it uses the median of pairwise slopes across all pairs (with distinct x). It also corresponds to making Kendall's tau between residuals and the x-variable equal to zero.
Here's some very simple-minded code for producing a median of pairwise slopes in R:
theilb = median(outer(y,y,"-")/outer(x,x,"-"), na.rm=TRUE)
of course substantially more efficient calculations can be arranged (this takes twice as long as even a sensible $O(n^2)$ calculation, and $O(n \log n)$ calculations are possible, though I can't say I've ever attempted to write them -- the papers on efficient calculations of this seem pretty heavy going).
Here's a comparison of the two on a small data set (your slope in red, Theil in green). The main difference in this particular example is in the estimate of intercept (I just used mean and median residual respectively).
|
Understanding 'average slope' regression
Assuming:
$x_i$ and $x_j$ must always be distinct (for simplicity; otherwise the calculations require us to keep conditioning on the distinct pairs and I'd rather leave that out for now)
$y=\beta_0+
|
40,018
|
Linear regression with sine/cosine elements
|
You simply compute $x_c=\cos(2\pi x)$ and $x_s=\sin(2\pi x)$ and perform a plain multiple linear regression of $y$ on $x, x_c,$ and $x_s$.
That is you supply the original $x$ and the two calculated predictors as if you had three independent variables for your regression, so your now-linear model is:
$$Y = \alpha + \beta x +\gamma x_c + \delta x_s+\varepsilon$$
This same idea applies to any transformation of the predictors. You can fit a regression of the form $y = \beta_0 + \beta_1 s_1(x_1) + \beta_2 s_2(x_2) +...+ \beta_k s_k(x_k)+\varepsilon$ for transformations $s_1$, ... $s_k$ by supplying $s_1(x_1), s_2(x_2), ..., s_k(x_k)$ as predictors.
So for example, $y = \beta_0 + \beta_1 \log(x_1) + \beta_2 \exp(x_1) + \beta_3 (x_2\log x_2) + \beta_4 \sqrt{x_3x_4} +\varepsilon$ would be fitted by supplying
$\log(x_1),$ $\exp(x_1),$ $(x_2\log x_2),$ and $\sqrt{x_3x_4}$ as predictors (IVs) to linear regression software.
The regression is just fitted as normal to the new set of predictors and the coefficients are those for the original equation.
See, for example the answer here: regression that creates $x\log(x)$ functions, which details a different specific example.
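A minimal Python sketch of this recipe (simulated data; the true coefficients are arbitrary choices for illustration), building the derived predictors and fitting them with NumPy's least-squares solver:

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 3, 200)
# true model (assumed for illustration): y = 1 + 0.5x + 2cos(2πx) - sin(2πx) + noise
y = 1 + 0.5 * x + 2 * np.cos(2 * np.pi * x) - np.sin(2 * np.pi * x) \
    + rng.normal(0, 0.1, x.size)

# design matrix: intercept, x, and the two computed predictors x_c, x_s
X = np.column_stack([np.ones_like(x), x,
                     np.cos(2 * np.pi * x), np.sin(2 * np.pi * x)])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # close to [1, 0.5, 2, -1]: alpha, beta, gamma, delta
```

The fit is ordinary multiple linear regression; the "nonlinearity" lives entirely in the precomputed columns.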
|
Linear regression with sine/cosine elements
|
You simply compute $x_c=\cos(2\pi x)$ and $x_s=\sin(2\pi x)$ and perform a plain multiple linear regression of $y$ on $x, x_c,$ and $x_s$.
That is you supply the original $x$ and the two calculated pr
|
Linear regression with sine/cosine elements
You simply compute $x_c=\cos(2\pi x)$ and $x_s=\sin(2\pi x)$ and perform a plain multiple linear regression of $y$ on $x, x_c,$ and $x_s$.
That is you supply the original $x$ and the two calculated predictors as if you had three independent variables for your regression, so your now-linear model is:
$$Y = \alpha + \beta x +\gamma x_c + \delta x_s+\varepsilon$$
This same idea applies to any transformation of the predictors. You can fit a regression of the form $y = \beta_0 + \beta_1 s_1(x_1) + \beta_2 s_2(x_2) +...+ \beta_k s_k(x_k)+\varepsilon$ for transformations $s_1$, ... $s_k$ by supplying $s_1(x_1), s_2(x_2), ..., s_k(x_k)$ as predictors.
So for example, $y = \beta_0 + \beta_1 \log(x_1) + \beta_2 \exp(x_1) + \beta_3 (x_2\log x_2) + \beta_4 \sqrt{x_3x_4} +\varepsilon$ would be fitted by supplying
$\log(x_1),$ $\exp(x_1),$ $(x_2\log x_2),$ and $\sqrt{x_3x_4}$ as predictors (IVs) to linear regression software.
The regression is just fitted as normal to the new set of predictors and the coefficients are those for the original equation.
See, for example the answer here: regression that creates $x\log(x)$ functions, which details a different specific example.
|
Linear regression with sine/cosine elements
You simply compute $x_c=\cos(2\pi x)$ and $x_s=\sin(2\pi x)$ and perform a plain multiple linear regression of $y$ on $x, x_c,$ and $x_s$.
That is you supply the original $x$ and the two calculated pr
|
40,019
|
Linear regression with sine/cosine elements
|
You can find a list of methods used for solving linear regression problems in this article by Do Q Lee:
Numerically efficient methods for solving Least-Squares problems
The most commonly used methods for this kind of problem are:
Normal equations method using Cholesky factorization. It is the fastest method but numerically unstable. The normal equations are basically a system of linear equations. You get this system by computing the partial derivative of the error term with respect to each coefficient and setting every partial derivative to zero. This corresponds to finding the global minimum of the error term.
QR factorization. More accurate and broadly applicable, but may fail when the matrix of the linear system of equations is nearly rank-deficient.
Singular value decomposition. It is expensive to compute, but is numerically stable and can handle rank deficiency. You can use a tool like Matlab to compute the SVD of a chosen matrix. If you are deploying a customized solution you can use a software package like LAPACK or its Intel clone, which is heavily optimised using x86 assembler and since September 2015 completely free for everyone.
In all three cases you need to find a solution to a system of linear equations. There are no analytical formulas for the regression coefficients except in very simple cases such as line fitting.
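The three approaches can be sketched side by side in Python with NumPy (a well-conditioned toy problem with made-up coefficients, so all three agree closely):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
b = A @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.01, 50)

# 1) normal equations: fast, but squares the condition number of A
x_ne = np.linalg.solve(A.T @ A, A.T @ b)

# 2) QR factorization: solve R x = Q^T b
Q, R = np.linalg.qr(A)
x_qr = np.linalg.solve(R, Q.T @ b)

# 3) SVD-based solver (lstsq); also handles rank deficiency
x_svd, *_ = np.linalg.lstsq(A, b, rcond=None)

print(x_ne, x_qr, x_svd)  # essentially identical on this well-conditioned problem
```

On nearly rank-deficient problems the three solutions would start to diverge, which is the point of the stability discussion above.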
|
Linear regression with sine/cosine elements
|
You can find a list of methods used for solving linear regression problems in this article by Do Q Lee:
Numerically efficient methods for solving Least-Squares problems
Most commonly used methods f
|
Linear regression with sine/cosine elements
You can find a list of methods used for solving linear regression problems in this article by Do Q Lee:
Numerically efficient methods for solving Least-Squares problems
The most commonly used methods for this kind of problem are:
Normal equations method using Cholesky factorization. It is the fastest method but numerically unstable. The normal equations are basically a system of linear equations. You get this system by computing the partial derivative of the error term with respect to each coefficient and setting every partial derivative to zero. This corresponds to finding the global minimum of the error term.
QR factorization. More accurate and broadly applicable, but may fail when the matrix of the linear system of equations is nearly rank-deficient.
Singular value decomposition. It is expensive to compute, but is numerically stable and can handle rank deficiency. You can use a tool like Matlab to compute the SVD of a chosen matrix. If you are deploying a customized solution you can use a software package like LAPACK or its Intel clone, which is heavily optimised using x86 assembler and since September 2015 completely free for everyone.
In all three cases you need to find a solution to a system of linear equations. There are no analytical formulas for the regression coefficients except in very simple cases such as line fitting.
|
Linear regression with sine/cosine elements
You can find a list of methods used for solving linear regression problems in this article by Do Q Lee:
Numerically efficient methods for solving Least-Squares problems
Most commonly used methods f
|
40,020
|
Interpreting percent variance explained in Random Forest output
|
Yes, %explained variance is a measure of how well out-of-bag predictions explain the target variance of the training set. Unexplained variance would be due to true random behaviour or lack of fit.
%explained variance is retrieved by randomForest:::print.randomForest as last element in rf.fit$rsq and multiplied with 100.
Documentation on rsq:
rsq (regression only) “pseudo R-squared”: 1 - mse / Var(y).
Where mse is mean square error of OOB-predictions versus targets, and var(y) is variance of targets.
See this answer also.
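In code, the rsq formula amounts to the following (a Python sketch with made-up numbers; randomForest does this internally in R, using the population variance of the targets):

```python
# Toy targets and OOB-style predictions (made-up numbers)
y = [1.0, 2.0, 3.0, 4.0]
oob_pred = [1.1, 1.9, 3.2, 3.9]

n = len(y)
y_bar = sum(y) / n
mse = sum((yi - pi) ** 2 for yi, pi in zip(y, oob_pred)) / n   # mean squared error of OOB predictions
var_y = sum((yi - y_bar) ** 2 for yi in y) / n                 # population variance of the targets

pct_var_explained = 100 * (1 - mse / var_y)  # the "% Var explained" figure
```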
|
40,021
|
Interpreting percent variance explained in Random Forest output
|
To add some details to the content of the other answer, the formula to get the explained variance displayed in the summary is:
#fit.rf <- randomForest(...)
round(100 * fit.rf$rsq[length(fit.rf$rsq)], digits = 2)
You can check this by looking at what randomForest is printing with the command getAnywhere(print.randomForest).
Furthermore, this is equivalent to the following commands:
# recalculate using model output
round(100* (1 - var(fit.rf$y - fit.rf$predicted) / var(fit.rf$y)), digits = 2)
# recalculate using the formula for rsq used internally
# see getAnywhere(randomForest.default).
n <- length(fit.rf$y)
rsq = 1 - fit.rf$mse/(var(fit.rf$y) * (n - 1)/n)
round(100 * rsq[length(rsq)], digits = 2)
|
40,022
|
Interpreting percent variance explained in Random Forest output
|
This seems to be a misinterpretation of extending $R^2$ to more complicated situations than the usual in-sample OLS linear regression. In particular, the "proportion of variance explained" interpretation of $R^2$ is the exception, not the rule. As is derived in the link, that definition only applies when $\overset{N}{\underset{i=1}{\sum}}\left[\left(y_i - \hat y_i\right)\left(\hat y_i - \bar y\right)\right] = 0$, which is not the case in a random forest regression.
library(randomForest)
set.seed(2023)
N <- 1000
x1 <- rnorm(N)
x2 <- rnorm(N)
x3 <- rnorm(N)
y <- x1*x2 + x3^2 + rnorm(N)
# d <- data.frame(x1, x2, x3, y)
forest <- randomForest(y ~ x1 + x2 + x3, mtry=3)
y_hat <- forest$predicted
y_bar <- mean(y)
sum((y - y_hat) * (y_hat - y_bar))
# nonzero (far from 0), so the condition fails
Indeed, the documentation gives this quantity as:
$$
1-\left(
\dfrac{
\text{MSE}
}{
\text{var}\left(y\right)
}
\right)
=
1-\left(
\dfrac{
\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
}{
\dfrac{1}{N}\overset{N}{\underset{i=1}{\sum}}\left(
y_i - \bar y
\right)^2
}
\right)
=
1-\left(
\dfrac{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i - \hat y_i
\right)^2
}{
\overset{N}{\underset{i=1}{\sum}}\left(
y_i - \bar y
\right)^2
}
\right)
$$
The third of the three expressions is a common definition of $R^2$, so the linked information about $R^2$ applies.
This does not mean that such a value is worthless, however. Indeed, I have lots of thoughts on an $R^2$-style performance metric in complicated settings.
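A quick numeric check of that condition (an illustrative Python sketch, not part of the original answer): with an intercept, OLS forces the cross term to zero, while an arbitrary predictor does not:

```python
# Deterministic toy data with a nonlinear trend
x = list(range(1, 11))
y = [xi ** 2 for xi in x]
n = len(x)
x_bar = sum(x) / n
y_bar = sum(y) / n

# OLS fit y ~ a + b*x
b = sum((xi - x_bar) * (yi - y_bar) for xi, yi in zip(x, y)) / \
    sum((xi - x_bar) ** 2 for xi in x)
a = y_bar - b * x_bar
y_hat_ols = [a + b * xi for xi in x]

# Some other (non-OLS) predictor, e.g. a shrunken one
y_hat_other = [0.5 * yi for yi in y]

def cross_term(y, y_hat, y_bar):
    """Sum of (y_i - yhat_i)(yhat_i - ybar), the term that must vanish for the variance decomposition."""
    return sum((yi - hi) * (hi - y_bar) for yi, hi in zip(y, y_hat))

cross_ols = cross_term(y, y_hat_ols, y_bar)      # ~0, by the OLS normal equations
cross_other = cross_term(y, y_hat_other, y_bar)  # far from 0
```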
|
40,023
|
Testing the variance component in a mixed effects model
|
This is usually done with a likelihood ratio test between the original model and a model omitting the variance component to be tested (random intercept/random slope/random covariance between slope and intercept).
A good example is in these tutorials:
When model has more than one random coefficient: http://www.bodowinter.com/tutorial/bw_LME_tutorial.pdf (p.12)
When model has one random coefficient: http://www.stat.wisc.edu/~ane/st572/notes/lec21.pdf (p.13)
Sample R code:
> model1 = lmer(resp ~ fixed1 + (1 | random1))
> model2 = lm(resp ~ fixed1)
> chi2 = -2*logLik(model2, REML=T) + 2*logLik(model1, REML=T)
> chi2
[1] 5.011
> pchisq(chi2, df=1, lower.tail=F)
[1] 0.02518675
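As a cross-check without R (an illustrative Python sketch): for one degree of freedom the chi-squared upper tail reduces to the complementary error function, reproducing the p-value above:

```python
import math

chi2 = 5.011  # LR statistic from the R output above

# For df = 1, X ~ Z^2 with Z standard normal, so
# P(X > chi2) = 2 * (1 - Phi(sqrt(chi2))) = erfc(sqrt(chi2 / 2))
p_value = math.erfc(math.sqrt(chi2 / 2))  # matches pchisq(chi2, df=1, lower.tail=FALSE)
```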
|
40,024
|
Testing the variance component in a mixed effects model
|
Asymptotic tests are problematic for variance parameters, because the parameter space is bounded by zero. Moreover, the point hypothesis you are trying to test cannot be true, as the parameter is continuous: the probability of $\sigma^2 = 0$ is exactly 0.
What you can do to make inference on the variance parameters is to switch to a Bayesian implementation, where you would get the full posterior distribution for the variance parameters. For lme4 users, the MCMCglmm package is easy to learn. You could also use JAGS or Stan. For an example, where Stan was used to compare several random effects, see [1].
[1] Schmettow, M., & Havinga, J. (2013). Are users more diverse than designs? Testing and extending a 25 years old claim . In S. Love, K. Hone, & Tom McEwan (Eds.), Proceedings of BCS HCI 2013- The Internet of Things XXVII. Uxbridge, UK: BCS Learning and Development Ltd.
|
40,025
|
Do logistic population growth models relate to binary logistic regressions?
|
The name logistic originally comes from the logistic growth equation:
$$ \frac{d N}{d t} = r N (1 - N) $$
which is a simple differential equation model for the growth of a population. The logistic function is its solution:
$$ N(t) = \frac{e^{rt}}{1 + e^{rt}} $$
Which has the attractive properties that it is increasing, $\lim_{t \rightarrow \infty} N(t) = 1$ and $\lim_{t \rightarrow -\infty} N(t) = 0$. Because of these properties (and more) it is used as the (inverse) link function in logistic regression to model the probability of a binary outcome:
$$ Pr(y \mid x) = \frac{e^{\beta \cdot x}}{1 + e^{\beta \cdot x}} $$
The population model came first (1845), so the regression inherited the name (1958).
Clarification question: so in my case, I want to find an equation in R that models a population of rabbits. For example, the values start at 600 and multiply until the carrying capacity of 1400. It looks like a logistic curve, but it's not binary. If I want to model it in R, what category/package does it fall under? (Sorry, resources are conflicting).
That's actually an underspecified problem, because the rabbits could multiply to their carrying capacity quickly or slowly. In any case, once the rate $r$ of growth has been pre-specified (or left as a free parameter), you don't need R to solve the problem. Just take the general solution to the logistic growth model:
$$ N(t) = a \frac{e^{r(t - t_0)}}{1 + e^{r(t - t_0)}} $$
And solve the two equations $N(0) = 600$ and $\lim_{t \rightarrow \infty} N(t) = 1400$ for $a$ and $t_0$.
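Carrying that out (a sketch; the growth rate $r$ is arbitrary): the limit forces $a = 1400$, and $N(0) = 600$ gives $e^{-r t_0} = 3/4$, i.e. $t_0 = \ln(4/3)/r$. A quick check in Python:

```python
import math

r = 0.5                    # arbitrary growth rate (not determined by the two conditions)
a = 1400.0                 # from lim_{t -> inf} N(t) = 1400
t0 = math.log(4 / 3) / r   # from N(0) = 600

def N(t):
    """General logistic solution N(t) = a * e^{r(t - t0)} / (1 + e^{r(t - t0)})."""
    z = math.exp(r * (t - t0))
    return a * z / (1 + z)
```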
If the idea for using R is that you have some data, and you want to determine the growth parameter $r$ by fitting a curve to the data, you could do what is advised in this answer.
|
40,026
|
$\chi^2 $ of multidimensional data
|
To analyze a multi-way contingency table, you use log-linear models. In truth, log-linear models are a special case of the Poisson generalized linear model, so you could do that, but log-linear models are more user-friendly. In Python, you may need to use the Poisson GLM, as I gather log-linear models may not be implemented. I will demonstrate the log-linear model using your data with R.
library(MASS)
tab = array(c(95, 31, 20, 70, 29, 18, 21, 69, 98, 54, 35, 11), dim=c(3,2,2))
tab = as.table(tab)
names(dimnames(tab)) = c("outcomes", "actions", "observations")
dimnames(tab)[[1]] = c("0", "1", "2")
dimnames(tab)[[2]] = c("0", "1")
dimnames(tab)[[3]] = c("1", "2")
tab
# , , observations = 1
# actions
# outcomes 0 1
# 0 95 70
# 1 31 29
# 2 20 18
#
# , , observations = 2
# actions
# outcomes 0 1
# 0 21 54
# 1 69 35
# 2 98 11
Log-linear models are simply a series of goodness of fit tests. We can start with a (trivial) null model that assumes all cells have the same expected value:
summary(tab)
# Number of cases in table: 551
# Number of factors: 3
# Test for independence of all factors:
# Chisq = 159.18, df = 7, p-value = 4.772e-31
The null is rejected. Next, we can fit a saturated model:
m.sat = loglm(~observations*actions*outcomes, tab)
m.sat
# Call:
# loglm(formula = ~observations * actions * outcomes, data = tab)
#
# Statistics:
# X^2 df P(> X^2)
# Likelihood Ratio 0 0 1
# Pearson 0 0 1
Naturally, this fits perfectly. At this point, we could build up from the null model seeing if additional terms improve the fit, or drop terms from the saturated model to see if the fit gets significantly worse. The latter is more convenient and is conventional. To see if the distribution of outcomes by actions differs as a function of the observation, we need to drop the interactions between the observations and the actions * outcomes. If we also drop the marginal effect of observations, we are testing if the mean count differs between the two levels of observations. That may or may not be of interest to you, I don't know.
m1 = loglm(~observations + actions*outcomes, tab)
sum(tab[,,1]) # 263
sum(tab[,,2]) # 288
m2 = loglm(~actions*outcomes, tab)
anova(m2, m1)
# LR tests for hierarchical log-linear models
#
# Model 1:
# ~actions * outcomes
# Model 2:
# ~observations + actions * outcomes
#
# Deviance df Delta(Dev) Delta(df) P(> Delta(Dev))
# Model 1 126.4172 6
# Model 2 125.2825 5 1.134691 1 0.28678
# Saturated 0.0000 0 125.282534 5 0.00000
Model 1 has dropped a single degree of freedom from Model 2 (note that, confusingly, Model 1 $\leftrightarrow$ m2, and Model 2 $\leftrightarrow$ m1), but the decrease in model fit is very small. It is not significant. There is not enough evidence to suggest that the mean counts differ by observation. On the other hand, when Model 2 is compared to the Saturated model, the decrease in fit is highly significant. The data are inconsistent with the idea that the distribution of counts is the same in both levels of observation.
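For readers working in Python rather than R (as mentioned at the top), the null-model test reported by summary(tab) can be reproduced by hand; an illustrative sketch of the Pearson statistic for mutual independence of the three factors:

```python
# Cell counts, indexed (outcome, action, observation), matching the R table
counts = {
    (0, 0, 0): 95, (1, 0, 0): 31, (2, 0, 0): 20,
    (0, 1, 0): 70, (1, 1, 0): 29, (2, 1, 0): 18,
    (0, 0, 1): 21, (1, 0, 1): 69, (2, 0, 1): 98,
    (0, 1, 1): 54, (1, 1, 1): 35, (2, 1, 1): 11,
}
n = sum(counts.values())  # 551

# One-way marginals for each factor
out_m = [sum(v for (i, j, k), v in counts.items() if i == o) for o in range(3)]
act_m = [sum(v for (i, j, k), v in counts.items() if a == j) for a in range(2)]
obs_m = [sum(v for (i, j, k), v in counts.items() if t == k) for t in range(2)]

# Pearson chi-square against expected = product of marginals / n^2
chi2 = 0.0
for (i, j, k), observed in counts.items():
    e = out_m[i] * act_m[j] * obs_m[k] / n ** 2
    chi2 += (observed - e) ** 2 / e

# df = (cells - 1) minus the free marginal parameters
df = (3 * 2 * 2 - 1) - ((3 - 1) + (2 - 1) + (2 - 1))  # = 7
```

This reproduces the Chisq = 159.18, df = 7 figure from the R output above.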
|
40,027
|
$\chi^2 $ of multidimensional data
|
I found the answer here under 5: "Three-Way Tables". Obviously, the term I was missing is three-way contingency tables. Determining the expected values in a three-dimensional contingency table is actually pretty much analogous to the standard variant.
In normal contingency tables you get the expected values by multiplying the row sum with the column sum and dividing the product by the total sum. Commonly denoted as $e_{ij} = \frac{o_{i.} \cdot o_{.j}}{n} $, where $o_{i.} $ is the row sum, $o_{.j} $ is the column sum and $n $ is the total.
In three-way contingency tables, you do not multiply the sum of a line (i.e. row or column) but the sum of a plane. Accordingly you divide by the square of all the observations involved.
With the given example, the calculation for the first cell in table 1 goes as follows:
$\frac{(95+70+21+54) \cdot (95+31+20+21+69+98) \cdot (95+31+20+70+29+18)}{(263+288)^2} = \frac{240 \cdot 334 \cdot 263}{303601} = 69.44 $
The calculation for the last cell in table 2:
$\frac{(11+98+20+18) \cdot (70+29+18+54+35+11) \cdot (21+69+98+54+35+11)}{(263+288)^2} = 30.26$
The sought-after $\chi^2 $ value is the squared sum of differences between observed and expected, divided by expected, as usual.
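The recipe can be written as a short script (an illustrative Python sketch) that reproduces the two worked cells:

```python
# Two tables, rows = outcomes (0,1,2), columns = actions (0,1)
table1 = [[95, 70], [31, 29], [20, 18]]
table2 = [[21, 54], [69, 35], [98, 11]]
tables = [table1, table2]
n = sum(sum(row) for t in tables for row in t)  # 263 + 288 = 551

def expected(i, j, k):
    """Expected count for outcome i, action j, table k:
    product of the three plane sums, divided by n^2."""
    out_plane = sum(t[i][c] for t in tables for c in range(2))   # all cells with outcome i
    act_plane = sum(t[r][j] for t in tables for r in range(3))   # all cells with action j
    tab_plane = sum(sum(row) for row in tables[k])               # all cells in table k
    return out_plane * act_plane * tab_plane / n ** 2

e_first = expected(0, 0, 0)  # first cell of table 1
e_last = expected(2, 1, 1)   # last cell of table 2
```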
|
40,028
|
Stochastic Differential Equations - A Few General Questions
|
The single best introduction to SDEs from a numerical angle is this paper by Higham. It will probably give you an approximation to answers to your three questions.
a) In finance, assets are not priced on the basis of knowing their cash flows exactly. Moreover, they're not priced on the basis of knowing their expected cash flows either. Ideally, you need to know the whole distribution of future prices. It's often expressed as a pair "risk-return". The risk part is a quantified uncertainty about the return. That's why stochastic calculus seems to fit finance applications so well: it appears to capture our understanding of uncertainty about the future cash flows from the assets.
The sampling would represent possible cash flow paths. Each path is a possible realization of the future. In Monte Carlo methods you explicitly sample paths, and obtain the distribution of cash flows, which allows you to price the assets.
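As a concrete sketch of that idea (illustrative parameters, not from the answer): price a European call by sampling GBM terminal prices under the risk-neutral measure and compare with the Black-Scholes closed form:

```python
import math
import random

# Illustrative parameters (made up): spot, strike, risk-free rate, volatility, maturity
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0

def norm_cdf(x):
    """Standard normal CDF via the complementary error function."""
    return 0.5 * math.erfc(-x / math.sqrt(2))

# Black-Scholes closed-form call price
d1 = (math.log(S0 / K) + (r + 0.5 * sigma ** 2) * T) / (sigma * math.sqrt(T))
d2 = d1 - sigma * math.sqrt(T)
bs_price = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

# Monte Carlo: sample GBM terminal prices under the risk-neutral drift
random.seed(0)
n_paths = 200_000
drift = (r - 0.5 * sigma ** 2) * T
vol = sigma * math.sqrt(T)
total = 0.0
for _ in range(n_paths):
    s_T = S0 * math.exp(drift + vol * random.gauss(0.0, 1.0))
    total += max(s_T - K, 0.0)          # call payoff on this path
mc_price = math.exp(-r * T) * total / n_paths  # discounted average payoff
```

The two prices agree up to Monte Carlo error, which shrinks as $1/\sqrt{n}$ in the number of sampled paths.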
However, under certain conditions, you can formulate and solve the SDE as a partial differential equation (PDE) - non-stochastic. That's what Merton did with the Black-Scholes (BS) PDE approach: he linked it to the SDE. The original BS paper formulated the option pricing problem as a heat transfer equation from physics.
In the BS equation for an option price, you can see that there are 5 inputs: asset price, volatility, strike price, risk-free return and time to maturity. Even before the BS equation, these were all known to be determinants of option prices. That's why when the paper came out it immediately made sense to practitioners. Note now that there's nothing about the future price of the asset. The only information about the future price is the volatility, which represents the uncertainty about the future.
So, intuitively, what the BS equation does is express the option price as a function of the distribution of future prices, namely its standard deviation. That's how SDEs are used: you express your price through the distributions of future outcomes, and if you're lucky your solution will have something simple like the standard deviation in it.
b) Monte Carlo is used a lot, but as I wrote above, if you can convert the problem into a PDE, then all kinds of methods such as finite elements can be used.
c) I'm not sure there's such a book, i.e. computational with measure theory. If you're a mathematician I can recommend the one I used: Shreve's text "Stochastic Calculus for Finance II: Continuous-Time Models". There's no software coming with it though; it's quite theoretical and may not work for you if you're not strong in math.
UPDATE
I want to add a physics example to a). Look at a diffusion process. You can think of a single atom's path as a single path in an SDE, maybe in its Monte Carlo sampling. It's totally unpredictable. However, when you look at the diffusion of large quantities of atoms, the diffusion process is very predictable in terms of the speed with which one material goes into another at a macro level.
|
Stochastic Differential Equations - A Few General Questions
|
The single best introduction to SDE from numerical angle is this Higham's paper. It will probably give you an approximation to answers to your three questions.
a) In finance the assets are not priced
|
Stochastic Differential Equations - A Few General Questions
The single best introduction to SDE from numerical angle is this Higham's paper. It will probably give you an approximation to answers to your three questions.
a) In finance the assets are not priced on the basis of knowing exactly their cash flows. Moreover, they're not priced on the basis of knowing their expected cash flows either. Ideally, you need to know the whole distribution of future prices. It's often expressed as a pair "risk-return". The risk part is a quantified uncertainty about the return. That's why stochastic calculus seems to fit finance applications so well, it's precisely because it appears to capture our understanding of uncertainty about the future cash flows from the assets.
The sampling would represent possible cash flow paths. Each path is a possible realization of the future. In Monte Carlo methods you explicitly sample paths, and obtain the distribution of cash flows, which allows you to price the assets.
However, under certain conditions, you can formulate and solve the SDE as partial differential equations (PDE) - non-stochastic. That's what Merton did with Black-Scholes (BS) PDE approach: he linked them to SDE. Original BS paper formulated option pricing problem as a heat transfer equation from physics.
In BS equation for an option price, you can see that there are 5 inputs: asset price, volatility, strike price, risk free return and time to maturity. Even before BS equation, these were all known to be determinants of the option prices. That's why when the paper came out it immediately made a sense to practitioners. Note, now that there's nothing about the future price of the asset. The only information about the future price is volatility, which represents the uncertainty about the future.
So, intuitively, what BS equation does is it expresses the option price as a function of the distribution of future prices, namely its standard deviation. That's how SDEs are used: you express your price through the distributions of future outcomes, and if you're lucky you solution will have something simple like the standard deviation in it.
b) Monte Carlo is used a lot, but as I wrote above if you can convert the problem into PDE, then all kinds of methods such as finite elements can be used.
c) I'm not sure there's such a book, i.e. computational with measure theory. If you're mathematician I can recommend the one I used: Shreve's text "Stochastic Calculus for Finance II: Continuous-Time Models". There's no software coming with it though, it's quite theoretical, may not work for you, if you're not strong in math.
UPDATE
I want to add a physics example to a). Look at diffusion process. You can think of a single atom's path as a single path in SDE, maybe in its Monte Carlo sampling. It's totally unpredictable. However, when you look at the diffusion of large quantities of atoms, the diffusion process is very predictable in terms of the speed with wich one material goes into another in a macro level.
|
40,029
|
Stochastic Differential Equations - A Few General Questions
|
a) What is the point of simulating SDEs if the solution is always
different due to the randomness of the Wiener process? I have been
simulating Geometric Brownian Motion and read that it is used in the
Black-Scholes model in finance, so how do they actually price stocks
based on the SDE?
The Black-Scholes model is for evaluating the price of stock options (where the underlying stock is assumed to follow geometric Brownian motion). In terms of probability theory, the idea is that the option price is the (discounted) expectation, as of today, of a function of the stock price at the maturity date of the option. So you are averaging over all the paths.
The expectation is under what is called the risk-neutral measure. The idea is that, to avoid arbitrage (basically, having two prices for the same financial product), all bets (options and other financial derivatives) on the future value of the stock price must be representable as expectations with respect to a single pricing measure which is absolutely continuous with respect to the real measure (i.e. agrees on which states are impossible). If you think of the pdf of the stock price at maturity, $S_T$, then the pdf under the pricing measure gives, for each value of the stock at time $T$, the price today of receiving a dollar at time $T$ in that state.
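The "averaging over all the paths" can be made concrete with a small Monte Carlo sketch (illustrative parameters only): simulate the terminal stock price under the risk-neutral measure, where the drift is the risk-free rate, then average the discounted payoffs:

```r
set.seed(42)
S0 <- 100; K <- 100; r <- 0.05; sigma <- 0.2; T <- 1
n <- 1e6

# Under the risk-neutral measure the drift of the GBM is the risk-free rate r
Z <- rnorm(n)
S_T <- S0 * exp((r - sigma^2 / 2) * T + sigma * sqrt(T) * Z)

payoff <- pmax(S_T - K, 0)           # European call payoff at maturity
price  <- exp(-r * T) * mean(payoff) # discounted expectation
price                                # close to the Black-Scholes value, ~10.45
```

For a European option only the terminal distribution matters, so single-step simulation suffices; path-dependent options would require simulating whole paths.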
b) What are the methods used in determining the coefficients in an SDE to calibrate it to data?
AFAIK you cannot even reliably estimate the drift of a "Gaussian" SDE $dX_t=\mu \, dt+\sigma \, dW_t$, whereas the diffusion term is easy. However, the drift term is irrelevant to pricing options: under the pricing measure the drift is set to the risk-free rate of interest. The diffusion term ($\sigma$), on the other hand, determines the possible pricing measures, so it should tie in with historical estimates... (but things get complicated).
c) What is a good textbook for a very applied and computational approach to stochastic calculus with a crash course on measure theory?
Although I think it would be a nice idea to combine the theoretical issue of dealing with infinite dimensional spaces [namely in the time dimension] with computations, I don't think you will find a book that takes that approach.
You might want to read the parable of the bookmaker from Baxter and Rennie's book, Financial Calculus, to learn about arbitrage pricing and how probability is used in derivative pricing.
|
40,030
|
Stochastic Differential Equations - A Few General Questions
|
"What is the point of sampling random variables at all if the solution is always different due to the randomness..."? When you're simulating an SDE (or any stochastic process in general), you're sampling from a certain distribution of sample paths. (More precisely, a probability measure defined on an infinite dimensional space---e.g. the Wiener measure is defined on $C[0, \infty)$).
In the GBM case you cite, estimation of parameters reduces to classical parametric models. In general, see Statistics of Random Processes I by Liptser and Shiryaev for starters.
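For the GBM case, a sketch of the classical estimates (the function name is mine; it assumes equally spaced observations `S` with time spacing `dt`):

```r
# Log-returns of a GBM are i.i.d. Normal with mean (mu - sigma^2/2) dt
# and variance sigma^2 dt, so the parameters follow from the sample
# mean and standard deviation of diff(log(S)).
estimate_gbm <- function(S, dt) {
  x <- diff(log(S))
  sigma_hat <- sd(x) / sqrt(dt)
  mu_hat <- mean(x) / dt + sigma_hat^2 / 2
  c(mu = mu_hat, sigma = sigma_hat)
}

# Sanity check on simulated data with mu = 0.1, sigma = 0.3
set.seed(7)
dt <- 1 / 250
logS <- cumsum(rnorm(1e5, (0.1 - 0.3^2 / 2) * dt, 0.3 * sqrt(dt)))
estimate_gbm(exp(logS), dt)  # roughly mu = 0.1, sigma = 0.3
```

As noted above, sigma is estimated accurately even from modest samples, while the drift estimate remains noisy unless the observation window (in calendar time) is very long.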
Stochastic Differential Equations: An Introduction with Applications by Øksendal has 6 editions. You can't go wrong with that.
|
40,031
|
2SLS - logit/probit in the second stage?
|
The reference for this should be Newey (1987) "Efficient estimation of limited dependent variable models with endogenous explanatory variables", Journal of Econometrics, Vol. 36(3), pp. 231–250 (link). This is the estimator implemented, for instance, by the ivprobit command in Stata, where you have an OLS first stage and a probit second stage.
|
40,032
|
2SLS - logit/probit in the second stage?
|
When googling this problem myself, I found the highly-cited article
Terza, J.V., Basu, A. and Rathouz, P.J., 2008. Two-stage residual inclusion estimation: addressing endogeneity in health econometric modeling. Journal of health economics, 27(3), pp.531-543.
which proposes a method called two-stage residual inclusion (2SRI) for the generalized linear model case. The method is very simple: fit the first-stage model to get the residuals, then include both the residuals and the endogenous variable in the second-stage model.
More formally, let $y_2$ be the endogenous variable, $x_1$ through $x_8$ the other exogenous control variables, and $i_1$ and $i_2$ two instruments for $y_2$. In the first stage, $y_2$ is explained using linear regression
$y_2=\alpha_0+\alpha_1 x_1+\alpha_2 x_2+\dots+\alpha_8 x_8+\alpha_9 i_1+\alpha_{10} i_2+\varepsilon_2$,
with the $\alpha$'s as coefficients and $\varepsilon_2$ as the error term. The equation splits $y_2$ into an exogenous component $\alpha_0+\alpha_1 x_1+\alpha_2 x_2+\dots+\alpha_8 x_8+\alpha_9 i_1+\alpha_{10} i_2$ and an omitted-variable component $\varepsilon_2$. The 2SRI method includes both the endogenous variable $y_2$ and the residual $\hat{\varepsilon}_2$, as an estimator of the omitted variable, in the second-stage model; i.e. $y_1=\operatorname{logit}^{-1}(\beta_0+\beta_1 x_1+\beta_2 x_2+\dots+\beta_8 x_8+\beta_9 y_2+\beta_{10} \hat{\varepsilon}_2)+\varepsilon_1$,
with $y_1$ being the dichotomous variable. The implementation in statistical software is straightforward. (However, getting the standard errors for the estimators is not.)
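The two stages can be sketched in R as follows. The variable names follow the notation above; `dat` and `res2` are my placeholder names, and, as noted, the naive second-stage standard errors are not valid:

```r
# Stage 1: regress the endogenous variable on exogenous controls + instruments
stage1 <- lm(y2 ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + i1 + i2, data = dat)
dat$res2 <- residuals(stage1)  # estimate of the omitted-variable component

# Stage 2: logit including both the endogenous variable and the residual
stage2 <- glm(y1 ~ x1 + x2 + x3 + x4 + x5 + x6 + x7 + x8 + y2 + res2,
              family = binomial, data = dat)
summary(stage2)  # a significant res2 coefficient suggests endogeneity
```

Standard errors should be obtained by bootstrapping the whole two-stage procedure (or via the formulas in Terza et al.) rather than read off `summary(stage2)`.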
It has been shown by
Burgess, S., & Thompson, S. G. (2012). Improving bias and coverage in instrumental variable analysis with weak instruments for continuous and binary outcomes. Statistics in medicine, 31(15), 1582-1600.
through simulation that 2SRI performs better than 2SLS, providing another reference.
|
40,033
|
R gives me the error "contrasts can be applied only to factors with 2 or more levels" running an mlogit model, but all my factors have 2 levels [closed]
|
See the answer here: https://stackoverflow.com/questions/18171246/error-in-contrasts-when-defining-a-linear-model-in-r
One or more of the factors you are using has only one distinct value (after dropping missing values), or is entirely NA.
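A quick way to find the offending variables (a sketch; `df` stands for your data frame) is to count the distinct non-missing levels of each factor column:

```r
# Factor columns with fewer than 2 distinct non-NA values
# are the ones that trigger the contrasts error.
sapply(Filter(is.factor, df), function(x) length(unique(na.omit(x))))
```

Any column reporting 0 or 1 here needs to be dropped from the model formula (or its data fixed).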
|
40,034
|
Transformation of Random Variable - Normal Distribution
|
Since this is self-study, I will give you a hint.
For $x > 0$, $F_{|X|}(x) = P(|X| \le x) = P(-x \le X \le x) = F_X(x) - F_X(-x)$, where $F$ denotes the CDF of the appropriate random variable. Since we know the CDF of a normal random variable $X$, we now know the CDF of the absolute value of $X$. Differentiating will give us the PDF.
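Carrying the hint one step further (this is the standard folded-normal fact): differentiating for $x > 0$ gives $f_{|X|}(x) = f_X(x) + f_X(-x)$, with $f_{|X|}(x) = 0$ for $x < 0$. If $X$ is symmetric about $0$, e.g. $X \sim N(0, \sigma^2)$, this simplifies to $f_{|X|}(x) = 2 f_X(x)$ for $x \ge 0$.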
|
40,035
|
Transformation of Random Variable - Normal Distribution
|
I will try to give you some good hints towards the simple solution:
The density of $Y=|X|$ must be $0$ for $y<0$.
As $X$ is symmetric about 0, we know that $f_X(-x)=f_X(x)$.
$Y$ can take the value $y\ge 0$ if $X=y$ or $X=-y$.
Hopefully that will get you on your way.
|
40,036
|
Transformation of Random Variable - Normal Distribution
|
You just have to double the right branch ($x \ge 0$) of the normal density and set the left branch ($x < 0$) to zero.
|
40,037
|
Is it possible to use SD instead of entropy?
|
Why can't we just compute the standard deviation?
Here's why. Let's compare the formulas for entropy and variance:
$H(X) = - \sum\limits_x p(x) \, \log p(x) = - \mathbb E \, [ \log p(X) ]$
$\text{var} (X) = \mathbb E \, \Big[(X - \mathbb E[X])^2 \Big]$
So note that entropy does not care about the values that $X$ may take; it cares only about the distribution itself, while variance does care about the values of $X$. Also, for variance the variable has to be numeric, which is not the case for entropy. Both these properties make entropy a good candidate for calculating information gain.
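A tiny R illustration of this difference (made-up numbers): two variables with the same probabilities but different values have identical entropy, yet very different variance. The helper names are mine:

```r
entropy <- function(p) -sum(p * log2(p))            # depends only on probabilities
wvar <- function(x, p) sum(p * (x - sum(p * x))^2)  # variance w.r.t. probs p

p <- c(0.5, 0.3, 0.2)
entropy(p)             # the same regardless of which values we attach to p
wvar(c(1, 2, 3), p)    # small variance
wvar(c(1, 2, 300), p)  # huge variance, yet the entropy is unchanged
```

Moving one outcome from 3 to 300 leaves the entropy untouched but explodes the variance, which is exactly why entropy also works for non-numeric class labels in decision trees.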
To get more insights into entropy and other information-theoretic measures, you may read this question on math.SE.
|
40,038
|
Is it possible to use SD instead of entropy?
|
When comparing standard deviation and entropy in the context of statistics and related areas, I think it is important to recognize the difference between two notions of the entropy concept: entropy as a measure of variability, volatility, or chaos (the meaning usually implied in physics and similar domains), and entropy as a measure of the average information in a message (the meaning usually implied in domains based on Shannon's theory of information). Despite obvious surface differences between the notions, there are close parallels between the physics-based and information-theoretic entropy concepts. A discussion of this topic is beyond the scope of my answer, but this article is IMHO a good start.
The entropy formula, which you were confused about, comes from the theory of information (see this section) and is the basis for the use of entropy via the concept of information gain (notice the similarity in the formulas). If I understand correctly, all those types of entropy are particular (contextual) cases of a mathematics-based generalized concept of entropy in dynamic systems.
In terms of your particular question on potential use of standard deviation (SD) as a substitute measure for decision trees, I have to say the following:
yes, it is possible to use SD as a substitute for entropy (information gain, to be more accurate);
it seems that your statement about wanting a higher SD as the criterion for attribute splitting is wrong: you need higher information gain, which can be substituted by standard deviation reduction, not SD itself. This nice page explains the idea and the algorithm behind it rather well.
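Standard deviation reduction for a binary split can be sketched in R as follows (the names are mine; `y` is the numeric target in the parent node, `left`/`right` the values falling in the two children):

```r
# Standard deviation reduction: SD of the parent minus the
# size-weighted SDs of the children. A splitting attribute is
# chosen to MAXIMIZE this quantity, not the SD itself.
sdr <- function(y, left, right) {
  n <- length(y)
  sd(y) - (length(left) / n) * sd(left) - (length(right) / n) * sd(right)
}

y <- c(1, 2, 3, 10, 11, 12)
sdr(y, y[y < 5], y[y >= 5])  # large reduction: the split separates the clusters
```

This plays the role of information gain in regression trees: the parent's spread minus the weighted spread remaining after the split.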
Finally, I would like to share two resources for reducing confusion and providing more detail on the topic. First, this discussion is useful for seeing why your statements "higher the standard deviation, lesser the entropy" and "lesser the SD, more the entropy" [original style preserved] are incorrect. Second, this paper, despite its financial focus, presents potential reasons for preferring entropy over standard deviation. Let me summarize them in the following list:
entropy is a more general measure and supports a wider range of data types;
entropy incorporates more information than SD (thus, making models more realistic);
entropy is distribution-free, that is, not dependent on a particular distribution (fewer errors);
entropy satisfies the first order condition (used in optimization and econometric models);
entropy also serves as a measure of dispersion (hence, playing an SD's role).
Reasons for not preferring entropy over SD should also be noted; they include entropy's complexity and potential statistical bias, related to the degrees of freedom of the model under consideration.
|
40,039
|
I need both quadratic and linear coefficients in a GLM with binary response. What's the best option? [closed]
|
You can add a quadratic term with logistic regression just as you can with regular old linear regression. That is a simple way to include a 'curve' in your model. Be sure you understand what that means. I suspect you want an R tutorial, which is off-topic on CV. The basic approach to adding a quadratic in R is to include I(x^2) in the formula. Here is a simple example:
lo.to.p = function(lo){ # we need this function to generate the data
odds = exp(lo)
prob = odds/(1+odds)
return(prob)
}
set.seed(4649) # this makes the example exactly reproducible
x1 = runif(100, min=0, max=10) # you have 3, largely uncorrelated predictors
x2 = runif(100, min=0, max=10)
x3 = runif(100, min=0, max=10)
lo = -78 + 35*x1 - 3.5*(x1^2) + .1*x2 # there is a quadratic relationship w/ x1, a
p = lo.to.p(lo) # linear relationship w/ x2 & no relationship
y = rbinom(100, size=1, prob=p) # w/ x3
model = glm(y~x1+I(x1^2)+x2+x3, family=binomial)
summary(model)
# Call:
# glm(formula = y ~ x1 + I(x1^2) + x2 + x3, family = binomial)
#
# Deviance Residuals:
# Min 1Q Median 3Q Max
# -1.74280 -0.00387 0.00000 0.04145 1.74573
#
# Coefficients:
# Estimate Std. Error z value Pr(>|z|)
# (Intercept) -53.65462 19.65288 -2.730 0.00633 **
# x1 24.78164 8.92910 2.775 0.00551 **
# I(x1^2) -2.49888 0.89344 -2.797 0.00516 **
# x2 0.03318 0.20198 0.164 0.86952
# x3 -0.09277 0.18650 -0.497 0.61890
# ---
# Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
#
# (Dispersion parameter for binomial family taken to be 1)
#
# Null deviance: 128.207 on 99 degrees of freedom
# Residual deviance: 18.647 on 95 degrees of freedom
# AIC: 28.647
#
# Number of Fisher Scoring iterations: 10
|
40,040
|
Bootstrap two-sample t test
|
As @Tim notes, your bootstrap samples should have the same $n_j$s as your original data.
Next, recognize that there are several ways to bootstrap: e.g., you can bootstrap your data directly or bootstrap a test statistic, you can bootstrap your sampling distribution or a null distribution, etc. You need to make sure you understand which kind of thing you're doing. You can bootstrap simply the mean difference, if you want to. In the linked post, I bootstrapped the null distribution of the test statistic. That is essentially what you are doing in your code.
Also, because of the ways tests can differ, the bootstrapping strategy may need to be customized to the test you want to perform. In the linked post, I bootstrapped an $F$-statistic, but the way the $F$-test works is somewhat different from how a $t$-test works. Since you are bootstrapping the test statistic, you are somewhat safe from that.
In your case, think about the logic of the type of bootstrap you used. You bootstrapped a null sampling distribution for your $t$-statistic. Your observed $t$-statistic is so extreme that none of the bootstrapped $t$s overlapped with it. The implication is that the probability ($p$-value) of getting a $t$-statistic as far or farther from $0$ under your bootstrapped null sampling distribution is less than $1/10000$. In other words, your result is highly significant. (However, you should re-do your bootstrap using the correct $n_j$s before you go with this result.)
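For concreteness, a sketch of that kind of bootstrap in R (assuming two numeric vectors `group_A` and `group_B`; shifting both groups to the pooled mean imposes the null hypothesis before resampling):

```r
set.seed(1)
# Impose H0 by centering both groups on the pooled mean
pooled <- c(group_A, group_B)
A0 <- group_A - mean(group_A) + mean(pooled)
B0 <- group_B - mean(group_B) + mean(pooled)

t_obs <- t.test(group_A, group_B)$statistic

# Null sampling distribution of the t-statistic
t_null <- replicate(10000, {
  t.test(sample(A0, length(A0), replace = TRUE),
         sample(B0, length(B0), replace = TRUE))$statistic
})

# Two-sided bootstrap p-value (the +1 correction avoids reporting p = 0)
p_boot <- (sum(abs(t_null) >= abs(t_obs)) + 1) / (10000 + 1)
```

Note the resample sizes equal the original group sizes, per the point about matching the $n_j$s.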
|
Bootstrap two-sample t test
|
As @Tim notes, your bootsamples should have the same $n_j$s as your original data.
Next, recognize that there are several ways to bootstrap: e.g., you can bootstrap your data directly or bootstrap a
|
Bootstrap two-sample t test
As @Tim notes, your bootsamples should have the same $n_j$s as your original data.
Next, recognize that there are several ways to bootstrap: e.g., you can bootstrap your data directly or bootstrap a test statistic, you can bootstrap your sampling distribution or a null distribution, etc. You need to make sure you understand which kind of thing you're doing. You can bootstrap simply the mean difference, if you want to. In the linked post, I bootstrapped the null distribution of the test statistic. That is essentially what you are doing in your code.
Also, because of the ways tests can differ, the bootstrapping strategy may need to be customized to the test you want to perform. In the linked post, I bootstrapped an $F$-statistic, but the way the $F$-test works is somewhat different from how a $t$-test works. Since you are bootstrapping the test statistic, you are somewhat safe from that.
In your case, think about the logic of the type of bootstrap you used. You bootstrapped a null sampling distribution for your $t$-statistic. Your observed $t$-statistic is so extreme that none of the bootstrapped $t$s overlapped with it. The implication is that the probability ($p$-value) of getting a $t$-statistic as far from $0$ or further, according to your bootstrapped null sampling distribution, is $< (1/10000) / 2$. In other words, your result is highly significant. (However, you should re-do your bootstrap using the correct $n_j$s before you go with this result.)
|
Bootstrap two-sample t test
As @Tim notes, your bootsamples should have the same $n_j$s as your original data.
Next, recognize that there are several ways to bootstrap: e.g., you can bootstrap your data directly or bootstrap a
|
40,041
|
Bootstrap two-sample t test
|
First of all, with bootstrap you sample $N$ cases out of $N$ cases in your data. So the number of observations to choose is simple.
And, yes, you can do:
f <- function() {
A <- sample(group_A, 377, replace=T)
B <- sample(group_B, 377, replace=T)
mean(A)-mean(B)
}
replicate(1000, f())
but this is a different approach than using a t-test, because it will provide you with information about the possible range of differences between those two means. You could use this range in a similar fashion as you could use boxplots for (informal) analysis of differences between the means. For hypothesis testing it is, however, better to use a classical bootstrap that computes a t-test on every iteration and outputs t-statistics. The reason is that the t-test does not only compute the difference between means, but also takes into consideration the variances of the two groups.
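A Python sketch of this difference-in-means bootstrap (group data and sizes invented for illustration), using the resulting range as a crude percentile interval rather than a formal test:

```python
import random
import statistics

def boot_mean_diff(g1, g2, nboot=2000, seed=0):
    """Bootstrap distribution of mean(g1) - mean(g2), resampling each
    group with replacement at its own original size."""
    rng = random.Random(seed)
    diffs = []
    for _ in range(nboot):
        a = [rng.choice(g1) for _ in range(len(g1))]
        b = [rng.choice(g2) for _ in range(len(g2))]
        diffs.append(statistics.mean(a) - statistics.mean(b))
    return sorted(diffs)

# Invented example data: true means differ by 0.5
data_rng = random.Random(42)
g1 = [data_rng.gauss(5.0, 1.0) for _ in range(100)]
g2 = [data_rng.gauss(4.5, 1.0) for _ in range(100)]
diffs = boot_mean_diff(g1, g2)
# crude 95% percentile interval for the difference in means
lo, hi = diffs[int(0.025 * len(diffs))], diffs[int(0.975 * len(diffs))]
```

If the percentile interval excludes zero you have informal evidence of a difference, but, as noted above, bootstrapping the t-statistic itself is the better route for an actual hypothesis test.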
For gaining deeper understanding of bootstrap I would recommend you the classic 1979 paper by Efron and his very readable book.
|
Bootstrap two-sample t test
|
First of all, with bootstrap you sample $N$ cases out of $N$ cases in your data. So the number of observations to choose is simple.
And, yes, you can do:
f <- function() {
A <- sample(group_A, 377,
|
Bootstrap two-sample t test
First of all, with bootstrap you sample $N$ cases out of $N$ cases in your data. So the number of observations to choose is simple.
And, yes, you can do:
f <- function() {
A <- sample(group_A, 377, replace=T)
B <- sample(group_B, 377, replace=T)
mean(A)-mean(B)
}
replicate(1000, f())
but this is a different approach than using a t-test, because it will provide you with information about the possible range of differences between those two means. You could use this range in a similar fashion as you could use boxplots for (informal) analysis of differences between the means. For hypothesis testing it is, however, better to use a classical bootstrap that computes a t-test on every iteration and outputs t-statistics. The reason is that the t-test does not only compute the difference between means, but also takes into consideration the variances of the two groups.
For gaining deeper understanding of bootstrap I would recommend you the classic 1979 paper by Efron and his very readable book.
|
Bootstrap two-sample t test
First of all, with bootstrap you sample $N$ cases out of $N$ cases in your data. So the number of observations to choose is simple.
And, yes, you can do:
f <- function() {
A <- sample(group_A, 377,
|
40,042
|
Bootstrap two-sample t test
|
One more thing: if you're not assuming equal variance, you might want to consider Welch's t-test (http://beheco.oxfordjournals.org/content/17/4/688.full)
|
Bootstrap two-sample t test
|
One more thing: if you're not assuming equal variance, you might want to consider Welch's t-test (http://beheco.oxfordjournals.org/content/17/4/688.full)
|
Bootstrap two-sample t test
One more thing: if you're not assuming equal variance, you might want to consider Welch's t-test (http://beheco.oxfordjournals.org/content/17/4/688.full)
|
Bootstrap two-sample t test
One more thing: if you're not assuming equal variance, you might want to consider Welch's t-test (http://beheco.oxfordjournals.org/content/17/4/688.full)
|
40,043
|
Bootstrap two-sample t test
|
I would condition on the total sample size, not fix the group sizes. Of course, if your group sizes are fixed in advance then do condition on those, my answer is for the case where they are not.
I construct a two column matrix with the first column being an indication of group membership and the second column the observed values for the two groups stacked underneath each other. Then I bootstrap rows of the matrix and calculate the observed difference in means between the groups. Lastly I use the bootstrapped differences to calculate an approximate p-value for the test of zero difference in means.
pvalfunc <- function(sims,target=0) { 2*min(mean(sims<target),mean(sims>target)) }
boot.2sdif.test <- function(s1,s2, nboot=9999) {
n1 <- length(s1); n2 <- length(s2); n <- n1+n2
X <- cbind(rep(c(1,2), c(n1,n2)), c(s1,s2))
d <- rep(0,nboot)
for (i in 1:nboot) {
b <- X[sample.int(n, n, T),]
d[i] <- mean(b[b[,1]==1, 2]) - mean(b[b[,1]==2, 2])
}
  return(pvalfunc(d))
}
(pvalue <- boot.2sdif.test(group_k, group_m))
|
Bootstrap two-sample t test
|
I would condition on the total sample size, not fix the group sizes. Of course, if your group sizes are fixed in advance then do condition on those, my answer is for the case where they are not.
I con
|
Bootstrap two-sample t test
I would condition on the total sample size, not fix the group sizes. Of course, if your group sizes are fixed in advance then do condition on those, my answer is for the case where they are not.
I construct a two column matrix with the first column being an indication of group membership and the second column the observed values for the two groups stacked underneath each other. Then I bootstrap rows of the matrix and calculate the observed difference in means between the groups. Lastly I use the bootstrapped differences to calculate an approximate p-value for the test of zero difference in means.
pvalfunc <- function(sims,target=0) { 2*min(mean(sims<target),mean(sims>target)) }
boot.2sdif.test <- function(s1,s2, nboot=9999) {
n1 <- length(s1); n2 <- length(s2); n <- n1+n2
X <- cbind(rep(c(1,2), c(n1,n2)), c(s1,s2))
d <- rep(0,nboot)
for (i in 1:nboot) {
b <- X[sample.int(n, n, T),]
d[i] <- mean(b[b[,1]==1, 2]) - mean(b[b[,1]==2, 2])
}
  return(pvalfunc(d))
}
(pvalue <- boot.2sdif.test(group_k, group_m))
|
Bootstrap two-sample t test
I would condition on the total sample size, not fix the group sizes. Of course, if your group sizes are fixed in advance then do condition on those, my answer is for the case where they are not.
I con
|
40,044
|
Why do p values for test of likelihood ratio vs Fisher's Exact Test not agree
|
You have a few issues here. First, understanding what each test is doing, and second interpreting the p-values.
First, each test has different underlying assumptions. The likelihood ratio test statistic is formed by taking $-2$ times the log of the ratio of the likelihood under the null model to the likelihood under the alternative model. The test statistic is approximately chi-squared distributed, and is asymptotically equivalent to the Pearson Chi-squared test. Because you are calculating p-values using an asymptotic approximation, you want to make sure you have enough data to justify doing this. Further, one of the key assumptions of this test is that the observations are independent.
Fisher's Exact test, as its name implies, calculates an exact p-value based on the underlying assumptions. Instead of using a continuous distribution as an approximation as the sample size grows, it is based on a discrete distribution. Specifically, the probability of observing any 2x2 table follows the hypergeometric distribution. This test is typically used when there are not enough observations to justify the assumptions of the asymptotic tests, although you could use it for any given number of observations (see edit below, I no longer believe this to be correct). Importantly however, this test still assumes independence. For both this test and the likelihood ratio test, your null hypothesis is that the probability of each outcome is equal.
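As a concrete illustration of the hypergeometric calculation (not SPSS's implementation), the two-sided Fisher exact p-value for a 2x2 table can be computed directly by summing the probabilities of all tables with the same margins that are no more likely than the observed one. The sketch below checks this on Fisher's classic tea-tasting table.

```python
from math import comb

def hypergeom_pmf(a, r1, r2, c1):
    """P(top-left cell = a) for a 2x2 table with fixed row sums r1, r2
    and first-column sum c1 (hypergeometric distribution)."""
    return comb(r1, a) * comb(r2, c1 - a) / comb(r1 + r2, c1)

def fisher_exact_2sided(a, b, c, d):
    """Two-sided Fisher exact p-value for the table [[a, b], [c, d]]."""
    r1, r2, c1 = a + b, c + d, a + c
    p_obs = hypergeom_pmf(a, r1, r2, c1)
    lo, hi = max(0, c1 - r2), min(r1, c1)  # feasible values of cell a
    total = 0.0
    for x in range(lo, hi + 1):
        px = hypergeom_pmf(x, r1, r2, c1)
        if px <= p_obs * (1 + 1e-9):  # small tolerance for floating-point ties
            total += px
    return total

# Fisher's tea-tasting table [[3, 1], [1, 3]]: two-sided p = 34/70
p = fisher_exact_2sided(3, 1, 1, 3)
```

This enumeration is what makes the test "exact": the p-value is a sum of discrete hypergeometric probabilities, which is also why it can only take a limited set of values -- the source of the discreteness that mid-p corrections address.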
The linear-by-linear association model is testing something different. This models the log odds ratio as a function of each variable, as well as a term accounting for the relationship between the variables. You are estimating a general linear model, assuming a Poisson distribution of counts with the log link function. I am not familiar with SPSS, but the significance of that test may indicate that your observations are not independent (and thus violate the assumptions of the other tests). (edit: I think you can ignore this issue based on the SPSS output).
Finally, it seems like you have some issues interpreting your p-values. It might be worth considering what the difference between a p-value of 0.051 vs. 0.049 means practically. Do you consider the first to be significantly different than the second in terms of evidence it provides? Another issue you may want to investigate is calculating mid p-values. This can help account for some conservatism of tests based on discrete distributions. For example, say one more observation would change your Fisher exact p-value from 0.051 to 0.025. This discontinuity in p-values can effectively make tests like the Fisher exact test more conservative. For a reference on all of these topics, I would recommend Categorical Data Analysis by Agresti.
Edit:
I'll address a few additional points more in depth. 1) Why are the p-values different and 2) Which test to use
To start at the top, the p-values in general for these tests can be different because they are using different assumptions. To illustrate, I generated some random sample data for 2x2 tables. To do this I started with n = 10 (5 data points in each row), and went to n = 1000. Row 1 had a true probability of 30%, and row 2 had a true probability of 70%. Because we know that the odds ratios are truly different, we should ideally see a low p-value. The chart below shows the difference between the p-value estimated by fisher's test, vs. the Pearson chi-square test (this was easier for me to run quickly to illustrate the point than the LR-test).
Note that, especially for small n, these values can be very different although they differ less when a continuity correction is applied.
Second, which test should you use? Now that you have posted your actual data in the comments, I believe you should use Fisher's test. This is because you have a zero cell. However, because this test is conservative, you should probably apply a correction (mid-P corrections are what I am familiar with, not sure if there are other superior options). See the thread below for more discussion and references. That thread also caused me to reconsider suggesting that Fisher's test could be used in any situation, given the evidence the author provides:
Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test?
Finally, this site suggests you can ignore the issue of the linear-by-linear test. I didn't go into too much detail, but it seems that it may be equivalent in SPSS to the Chi-Square test:
https://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/notes/chisqr_assumptions
|
Why do p values for test of likelihood ratio vs Fisher's Exact Test not agree
|
You have a few issues here. First, understanding what each test is doing, and second interpreting the p-values.
First, each test has different underlying assumptions. The likelihood ratio test stati
|
Why do p values for test of likelihood ratio vs Fisher's Exact Test not agree
You have a few issues here. First, understanding what each test is doing, and second interpreting the p-values.
First, each test has different underlying assumptions. The likelihood ratio test statistic is formed by taking $-2$ times the log of the ratio of the likelihood under the null model to the likelihood under the alternative model. The test statistic is approximately chi-squared distributed, and is asymptotically equivalent to the Pearson Chi-squared test. Because you are calculating p-values using an asymptotic approximation, you want to make sure you have enough data to justify doing this. Further, one of the key assumptions of this test is that the observations are independent.
Fisher's Exact test, as its name implies, calculates an exact p-value based on the underlying assumptions. Instead of using a continuous distribution as an approximation as the sample size grows, it is based on a discrete distribution. Specifically, the probability of observing any 2x2 table follows the hypergeometric distribution. This test is typically used when there are not enough observations to justify the assumptions of the asymptotic tests, although you could use it for any given number of observations (see edit below, I no longer believe this to be correct). Importantly however, this test still assumes independence. For both this test and the likelihood ratio test, your null hypothesis is that the probability of each outcome is equal.
The linear-by-linear association model is testing something different. This models the log odds ratio as a function of each variable, as well as a term accounting for the relationship between the variables. You are estimating a general linear model, assuming a Poisson distribution of counts with the log link function. I am not familiar with SPSS, but the significance of that test may indicate that your observations are not independent (and thus violate the assumptions of the other tests). (edit: I think you can ignore this issue based on the SPSS output).
Finally, it seems like you have some issues interpreting your p-values. It might be worth considering what the difference between a p-value of 0.051 vs. 0.049 means practically. Do you consider the first to be significantly different than the second in terms of evidence it provides? Another issue you may want to investigate is calculating mid p-values. This can help account for some conservatism of tests based on discrete distributions. For example, say one more observation would change your Fisher exact p-value from 0.051 to 0.025. This discontinuity in p-values can effectively make tests like the Fisher exact test more conservative. For a reference on all of these topics, I would recommend Categorical Data Analysis by Agresti.
Edit:
I'll address a few additional points more in depth. 1) Why are the p-values different and 2) Which test to use
To start at the top, the p-values in general for these tests can be different because they are using different assumptions. To illustrate, I generated some random sample data for 2x2 tables. To do this I started with n = 10 (5 data points in each row), and went to n = 1000. Row 1 had a true probability of 30%, and row 2 had a true probability of 70%. Because we know that the odds ratios are truly different, we should ideally see a low p-value. The chart below shows the difference between the p-value estimated by fisher's test, vs. the Pearson chi-square test (this was easier for me to run quickly to illustrate the point than the LR-test).
Note that, especially for small n, these values can be very different although they differ less when a continuity correction is applied.
Second, which test should you use? Now that you have posted your actual data in the comments, I believe you should use Fisher's test. This is because you have a zero cell. However, because this test is conservative, you should probably apply a correction (mid-P corrections are what I am familiar with, not sure if there are other superior options). See the thread below for more discussion and references. That thread also caused me to reconsider suggesting that Fisher's test could be used in any situation, given the evidence the author provides:
Given the power of computers these days, is there ever a reason to do a chi-squared test rather than Fisher's exact test?
Finally, this site suggests you can ignore the issue of the linear-by-linear test. I didn't go into too much detail, but it seems that it may be equivalent in SPSS to the Chi-Square test:
https://sites.google.com/a/lakeheadu.ca/bweaver/Home/statistics/notes/chisqr_assumptions
|
Why do p values for test of likelihood ratio vs Fisher's Exact Test not agree
You have a few issues here. First, understanding what each test is doing, and second interpreting the p-values.
First, each test has different underlying assumptions. The likelihood ratio test stati
|
40,045
|
How to bound a probability with Chernoff's inequality?
|
Your class is using needlessly complicated expressions for the Chernoff bound
and apparently giving them to you as magical formulas to be applied without
any understanding of how they came about.
Suppose that $X$ is a random variable for which we wish to compute $P\{X \geq t\}$. One way of doing this is to define a real-valued function $g(x)$
as follows:
$$g(x) = \mathbf 1_{x \geq t}
= \begin{cases}1, & x \geq t,\\0, & x < t,\end{cases}$$ and then consider the expected value of the random variable $g(X)$. This is readily expressed; we
have that
$$\displaystyle E[g(X)] = \int_{-\infty}^\infty g(x)f_X(x)\,\mathrm dx
= \int_t^\infty f_X(x)\,\mathrm dx = P\{X \geq t\}$$
or that
$$E[g(X)] = \sum_i g(x_i)p_X(x_i) = \sum_{i: x_i \geq t}p_X(x_i)
= P\{X \geq t\}$$
according as $X$ is a continuous random variable or a discrete random
variable. Computations of this kind are, of course, straightforward when
we know the probability density function or probability mass
function of $X$. But what if don't know these or are too lazy to
determine these? In such cases, perhaps a bound might be useful.
Note that for all positive real numbers $\lambda$,
$g(x) \leq e^{\lambda(x-t)}$ for all $x \in \mathbb R$.
In fact, equality holds only at $x=t$ where both functions equal $1$.
Therefore, we have that
$$\begin{align}P\{X \geq t\}&= E[g(X)]\\
&\leq E[e^{\lambda(X-t)}]\\
&= e^{-\lambda t}\cdot E[e^{\lambda X}].\\
&\Downarrow\\
P\{X \geq t\} &\leq e^{-\lambda t}\cdot E[e^{\lambda X}]\tag{1}\end{align}$$
Do you begin to see why moment-generating functions (MGFs)
might have been recommended to you? I point out that the
occurrence of the MGF has been very cleverly concealed in your classroom
materials: you wrote down: $P(X\ge -t) \le e^{(-(\lambda*t - \log( E(e^{\lambda*x}))))}$ where the
MGF comes from the $e^{\log( E(e^{\lambda*x}))} = E(e^{\lambda*x})$ part.
So, once you have the MGF of $X$ (which is $E[e^{\lambda X}]$ and not
$E(e^{\lambda*x})$ as your instructor calls it) note that the MGF is
a real-valued function of $\lambda$, and so the right side
of $(1)$ is a function $h(\lambda)$ of the real variable $\lambda$.
Since $P\{X \geq t\}\leq h(\lambda)$ for all $\lambda >0$,
we get the best
upper bound (meaning smallest upper bound) on $P\{X \geq t\}$ by
determining the minimum value of
$h(\lambda)$ on $(0,\infty)$. Remember that $h(\lambda)$ is just an
ordinary real-valued function -- we have squeezed out all the probability
stuff from it -- and hopefully you know how to find the minimum value
of $h(\lambda)$ on $(0,\infty)$.
Finally, I will mention that since $X_1$ and $X_2$ are independent
random variables, we can determine the MGF of $Y = X_1+X_2$ from
the (hopefully known) MGFs of $X_1$ and $X_2$: we don't need to
find the probability mass function of $Y$ in order to apply the
Chernoff bound. Of course, in this case, the probability mass function
is not that hard to find and one can get the exact value of
$P\{X \geq t\}$ without extraordinarily complicated calculations, but
the bound certainly is a lot easier to calculate.
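To illustrate minimising $h(\lambda)$ numerically (the Binomial parameters below are invented, not taken from the question), this sketch grid-searches $e^{-\lambda t}\,E[e^{\lambda X}]$ over $\lambda > 0$ for a Binomial variable and compares the bound with the exact tail probability:

```python
from math import comb, exp

# Invented example: X ~ Binomial(n, p); we bound P(X >= t)
n, p, t = 20, 0.3, 12

def mgf(lam):
    """MGF of Binomial(n, p): E[e^{lam X}] = (1 - p + p e^{lam})^n."""
    return (1 - p + p * exp(lam)) ** n

def h(lam):
    """Chernoff objective e^{-lam t} * M(lam); any lam > 0 gives a valid bound."""
    return exp(-lam * t) * mgf(lam)

# crude grid search for the minimising lambda on (0, 5]
bound = min(h(k / 1000) for k in range(1, 5001))

# exact tail probability for comparison
exact = sum(comb(n, k) * p**k * (1 - p) ** (n - k) for k in range(t, n + 1))
```

Every $\lambda$ on the grid gives a valid upper bound; the search merely picks the tightest one, and the exact tail probability necessarily sits below it.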
|
How to bound a probability with Chernoff's inequality?
|
Your class is using needlessly complicated expressions for the Chernoff bound
and apparently giving them to you as magical formulas to be applied without
any understanding of how they came about.
Supp
|
How to bound a probability with Chernoff's inequality?
Your class is using needlessly complicated expressions for the Chernoff bound
and apparently giving them to you as magical formulas to be applied without
any understanding of how they came about.
Suppose that $X$ is a random variable for which we wish to compute $P\{X \geq t\}$. One way of doing this is to define a real-valued function $g(x)$
as follows:
$$g(x) = \mathbf 1_{x \geq t}
= \begin{cases}1, & x \geq t,\\0, & x < t,\end{cases}$$ and then consider the expected value of the random variable $g(X)$. This is readily expressed; we
have that
$$\displaystyle E[g(X)] = \int_{-\infty}^\infty g(x)f_X(x)\,\mathrm dx
= \int_t^\infty f_X(x)\,\mathrm dx = P\{X \geq t\}$$
or that
$$E[g(X)] = \sum_i g(x_i)p_X(x_i) = \sum_{i: x_i \geq t}p_X(x_i)
= P\{X \geq t\}$$
according as $X$ is a continuous random variable or a discrete random
variable. Computations of this kind are, of course, straightforward when
we know the probability density function or probability mass
function of $X$. But what if don't know these or are too lazy to
determine these? In such cases, perhaps a bound might be useful.
Note that for all positive real numbers $\lambda$,
$g(x) \leq e^{\lambda(x-t)}$ for all $x \in \mathbb R$.
In fact, equality holds only at $x=t$ where both functions equal $1$.
Therefore, we have that
$$\begin{align}P\{X \geq t\}&= E[g(X)]\\
&\leq E[e^{\lambda(X-t)}]\\
&= e^{-\lambda t}\cdot E[e^{\lambda X}].\\
&\Downarrow\\
P\{X \geq t\} &\leq e^{-\lambda t}\cdot E[e^{\lambda X}]\tag{1}\end{align}$$
Do you begin to see why moment-generating functions (MGFs)
might have been recommended to you? I point out that the
occurrence of the MGF has been very cleverly concealed in your classroom
materials: you wrote down: $P(X\ge -t) \le e^{(-(\lambda*t - \log( E(e^{\lambda*x}))))}$ where the
MGF comes from the $e^{\log( E(e^{\lambda*x}))} = E(e^{\lambda*x})$ part.
So, once you have the MGF of $X$ (which is $E[e^{\lambda X}]$ and not
$E(e^{\lambda*x})$ as your instructor calls it) note that the MGF is
a real-valued function of $\lambda$, and so the right side
of $(1)$ is a function $h(\lambda)$ of the real variable $\lambda$.
Since $P\{X \geq t\}\leq h(\lambda)$ for all $\lambda >0$,
we get the best
upper bound (meaning smallest upper bound) on $P\{X \geq t\}$ by
determining the minimum value of
$h(\lambda)$ on $(0,\infty)$. Remember that $h(\lambda)$ is just an
ordinary real-valued function -- we have squeezed out all the probability
stuff from it -- and hopefully you know how to find the minimum value
of $h(\lambda)$ on $(0,\infty)$.
Finally, I will mention that since $X_1$ and $X_2$ are independent
random variables, we can determine the MGF of $Y = X_1+X_2$ from
the (hopefully known) MGFs of $X_1$ and $X_2$: we don't need to
find the probability mass function of $Y$ in order to apply the
Chernoff bound. Of course, in this case, the probability mass function
is not that hard to find and one can get the exact value of
$P\{X \geq t\}$ without extraordinarily complicated calculations, but
the bound certainly is a lot easier to calculate.
|
How to bound a probability with Chernoff's inequality?
Your class is using needlessly complicated expressions for the Chernoff bound
and apparently giving them to you as magical formulas to be applied without
any understanding of how they came about.
Supp
|
40,046
|
cluster-robust standard errors are smaller than unclustered ones in fgls with cluster fixed effects
|
Robust clustered standard errors can change your standard errors in both directions. That is, clustered standard errors can be larger or smaller than conventional standard errors. The direction in which standard errors will change depends on the sign of the intra-class correlation. This post explains robust standard errors in greater detail.
|
cluster-robust standard errors are smaller than unclustered ones in fgls with cluster fixed effects
|
Robust clustered standard errors can change your standard errors in both directions. That is, clustered standard errors can be larger or smaller than conventional standard errors. The direction in whi
|
cluster-robust standard errors are smaller than unclustered ones in fgls with cluster fixed effects
Robust clustered standard errors can change your standard errors in both directions. That is, clustered standard errors can be larger or smaller than conventional standard errors. The direction in which standard errors will change depends on the sign of the intra-class correlation. This post explains robust standard errors in greater detail.
|
cluster-robust standard errors are smaller than unclustered ones in fgls with cluster fixed effects
Robust clustered standard errors can change your standard errors in both directions. That is, clustered standard errors can be larger or smaller than conventional standard errors. The direction in whi
|
40,047
|
Power-law fitting and testing
|
As one of the authors of the methods you're using, I can say with some certainty that the answer to your Question 1 (can you apply the fitting and hypothesis-test methods to a dataset that contains all recorded events in a system) is "yes". In fact, in the 24 datasets that we analyzed in Clauset, Shalizi and Newman, "Power-law distributions in empirical data." SIAM Review 51(4), 661-703 (2009), a number of them are full traces of data from their system rather than a random sample.
For your Question 2 (are the results correct), I would say that if you applied the methods correctly (note that you're using someone else's implementation of our methods, so I cannot comment on their correctness) then the results seem fairly reasonable. Having myself stared at hundreds of similar plots, the p-values you quote also seem reasonable given the visual structure of the data and the fit. So, with $p>0.1$ in result A, it is okay to proceed as if the data is consistent with being drawn from a power-law distribution. With the $p<0.1$ in result B, this is not okay. The reason those data do not pass the test could be because either the data are not drawn from a single power-law distribution (violates the "id" assumption of iid) or they are but are not independent draws (violates the "i" assumption of iid).
In general, the smaller the number of observations in the fitted power-law region, which we denote $n_{\rm tail}$, the less statistical power in the resulting $p$-value. So, the result C may be spurious. When there's very little data in the upper-tail region, most distributions will fit (because there's not much data there). The method is still giving you the "right" answer, but it's not a particularly useful answer.
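For readers who want to see the fitting step itself, here is a sketch of the continuous-data maximum-likelihood estimator from Clauset, Shalizi & Newman (2009), $\hat\alpha = 1 + n_{\rm tail}\big[\sum_i \ln(x_i/x_{\min})\big]^{-1}$, checked on simulated power-law data (the parameters and sample size are invented for illustration):

```python
import random
from math import log

def sample_powerlaw(alpha, xmin, n, seed=0):
    """Inverse-transform samples from a continuous power law
    p(x) ~ x^(-alpha) for x >= xmin."""
    rng = random.Random(seed)
    return [xmin * (1 - rng.random()) ** (-1 / (alpha - 1)) for _ in range(n)]

def mle_alpha(xs, xmin):
    """Continuous-data MLE from Clauset, Shalizi & Newman (2009):
    alpha_hat = 1 + n_tail / sum(ln(x_i / xmin)) over the tail x_i >= xmin."""
    tail = [x for x in xs if x >= xmin]
    return 1 + len(tail) / sum(log(x / xmin) for x in tail)

xs = sample_powerlaw(alpha=2.5, xmin=1.0, n=5000)
alpha_hat = mle_alpha(xs, xmin=1.0)
```

The standard error of this estimator scales like $(\hat\alpha-1)/\sqrt{n_{\rm tail}}$, which is another way of seeing why a small $n_{\rm tail}$ makes both the fit and the p-value less informative.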
|
Power-law fitting and testing
|
As one of the authors of the methods you're using, I can say with some certainty that the answer to your Question 1 (can you apply the fitting and hypothesis-test methods to a dataset that contains al
|
Power-law fitting and testing
As one of the authors of the methods you're using, I can say with some certainty that the answer to your Question 1 (can you apply the fitting and hypothesis-test methods to a dataset that contains all recorded events in a system) is "yes". In fact, in the 24 datasets that we analyzed in Clauset, Shalizi and Newman, "Power-law distributions in empirical data." SIAM Review 51(4), 661-703 (2009), a number of them are full traces of data from their system rather than a random sample.
For your Question 2 (are the results correct), I would say that if you applied the methods correctly (note that you're using someone else's implementation of our methods, so I cannot comment on their correctness) then the results seem fairly reasonable. Having myself stared at hundreds of similar plots, the p-values you quote also seem reasonable given the visual structure of the data and the fit. So, with $p>0.1$ in result A, it is okay to proceed as if the data is consistent with being drawn from a power-law distribution. With the $p<0.1$ in result B, this is not okay. The reason those data do not pass the test could be because either the data are not drawn from a single power-law distribution (violates the "id" assumption of iid) or they are but are not independent draws (violates the "i" assumption of iid).
In general, the smaller the number of observations in the fitted power-law region, which we denote $n_{\rm tail}$, the less statistical power in the resulting $p$-value. So, the result C may be spurious. When there's very little data in the upper-tail region, most distributions will fit (because there's not much data there). The method is still giving you the "right" answer, but it's not a particularly useful answer.
|
Power-law fitting and testing
As one of the authors of the methods you're using, I can say with some certainty that the answer to your Question 1 (can you apply the fitting and hypothesis-test methods to a dataset that contains al
|
40,048
|
Can I model standard deviations in a linear model?
|
It sounds like you are proposing essentially a two-stage least squares, where stage one reduces each cluster to its standard deviation about a cluster-specific mean. This seems fine, although note that you could actually model on the observational level, ie, let the variance for each observation be a linear function of covariates. Note that I don't know of any off-the-shelf software that would allow for exactly that.
Returning to the two-stage approach, if cluster $i=1,...,N$ are normally distributed, eg $Z_i \sim N(\mu_i, \rho^2_i)$ then the sample variances will be scale chi-square distributed with $N_i -1$ degrees of freedom. Letting $S^2_i$ denote the sample variance in cluster $i$, then
$$S^2_i \sim \frac{\rho^2_i}{N_i-1} \times \chi^2(N_i-1).$$
In more detail, we have that
\begin{align*}
E S^2_i & = \rho^2_i, \\
Var S^2_i & = 2\frac{\rho_i^4}{N_i - 1}.
\end{align*}
A gamma GLM assumes that $Var Y = \phi (E Y)^2$, so this might actually be a case for gamma regression, with an identity link! (Which is a first for me, I think.) If the $N_i$ differ very much, then you need precision weights $N_i-1$ (the precision of $S^2_i$ is proportional to $N_i-1$).
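A quick simulation (cluster size, $\rho$, and replicate count invented for illustration) checking both moments of the sample variance given above:

```python
import random
import statistics

rho, N, reps = 2.0, 10, 20000  # invented values for illustration
rng = random.Random(7)

# sample variance of many independent normal clusters of size N
s2 = [statistics.variance([rng.gauss(0.0, rho) for _ in range(N)])
      for _ in range(reps)]

mean_s2 = statistics.mean(s2)      # theory: E[S^2] = rho^2 = 4
var_s2 = statistics.variance(s2)   # theory: Var[S^2] = 2*rho^4 / (N-1) = 32/9
```

Since the variance of $S^2_i$ is proportional to the square of its mean, the gamma variance function $Var\,Y = \phi (E Y)^2$ is exactly the right shape here, which is what motivates the gamma GLM.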
|
Can I model standard deviations in a linear model?
|
It sounds like you are proposing essentially a two-stage least squares, where stage one reduces each cluster to its standard deviation about a cluster-specific mean. This seems fine, although note th
|
Can I model standard deviations in a linear model?
It sounds like you are proposing essentially a two-stage least squares, where stage one reduces each cluster to its standard deviation about a cluster-specific mean. This seems fine, although note that you could actually model on the observational level, ie, let the variance for each observation be a linear function of covariates. Note that I don't know of any off-the-shelf software that would allow for exactly that.
Returning to the two-stage approach, if cluster $i=1,...,N$ are normally distributed, eg $Z_i \sim N(\mu_i, \rho^2_i)$ then the sample variances will be scale chi-square distributed with $N_i -1$ degrees of freedom. Letting $S^2_i$ denote the sample variance in cluster $i$, then
$$S^2_i \sim \frac{\rho^2_i}{N_i-1} \times \chi^2(N_i-1).$$
In more detail, we have that
\begin{align*}
E S^2_i & = \rho^2_i, \\
Var S^2_i & = 2\frac{\rho_i^4}{N_i - 1}.
\end{align*}
A gamma GLM assumes that $Var Y = \phi (E Y)^2$, so this might actually be a case for gamma regression, with an identity link! (Which is a first for me, I think.) If the $N_i$ differ very much, then you need precision weights $N_i-1$ (the precision of $S^2_i$ is proportional to $N_i-1$).
|
Can I model standard deviations in a linear model?
It sounds like you are proposing essentially a two-stage least squares, where stage one reduces each cluster to its standard deviation about a cluster-specific mean. This seems fine, although note th
|
40,049
|
Can I model standard deviations in a linear model?
|
Yes, you can do this. A GLM of the SDs with a log link and a gamma family is one way to do it, if you think the populations are normal.
It is also not uncommon for people to regress log SD on a bunch of predictors. It is approximate, but all models are. One text where you can see this being done is Box, Hunter, and Hunter, Statistics For Experimenters (2nd edition), in their helicopter experiment in Chapter 12.
The log is intuitively correct here because scale parameters like SDs are multiplicative effects, and logging them makes them additive -- suitable for a linear model.
|
Can I model standard deviations in a linear model?
|
Yes, you can do this. A GLM of the SDs with a log link and a gamma family is one way to do it, if you think the populations are normal.
It is also not uncommon for people to regress log SD on a bunch
|
Can I model standard deviations in a linear model?
Yes, you can do this. A GLM of the SDs with a log link and a gamma family is one way to do it, if you think the populations are normal.
It is also not uncommon for people to regress log SD on a bunch of predictors. It is approximate, but all models are. One text where you can see this being done is Box, Hunter, and Hunter, Statistics For Experimenters (2nd edition), in their helicopter experiment in Chapter 12.
The log is intuitively correct here because scale parameters like SDs are multiplicative effects, and logging them makes them additive -- suitable for a linear model.
|
Can I model standard deviations in a linear model?
Yes, you can do this. A GLM of the SDs with a log link and a gamma family is one way to do it, if you think the populations are normal.
It is also not uncommon for people to regress log SD on a bunch
|
40,050
|
Problem with Mann-Whitney U test in scipy
|
There might be a bug in the package, but if you store the u and the prob output separately, you'll see the u value, although the prob is missing for some reason.
u, prob=scipy.stats.mannwhitneyu(x,y)
u
Out[18]: 193405.5
prob
Out[19]: nan
You could then use the normal approximation of $U$ to get a p-value, though. For large samples,
$$z = \frac{U-m_U}{\sigma_U}$$
where $m_U = \frac{n_1n_2}{2}$ and $\sigma_U=\sqrt{\frac{n_1n_2(n_1+n_2+1)}{12}}$ has approximately a standard Normal distribution.
m_u = len(x)*len(y)/2
sigma_u = np.sqrt(len(x)*len(y)*(len(x)+len(y)+1)/12)
z = (u - m_u)/sigma_u
z
Out[23]: -3.2920646126227546
Then you can compute a p-value.
pval = 2*scipy.stats.norm.cdf(-abs(z))  # -abs(z) gives the correct two-sided p for either sign of z
pval
Out[27]: 0.00099454759456888472
Scipy might be trying to compute the exact null distribution of $U$, but the Normal approximation should work fine given the number of observations that you have.
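For reference, the whole normal-approximation calculation can be reproduced with just the standard library, using the group sizes from this question ($n_1 = 292$, $n_2 = 1508$) and the $U$ value printed above:

```python
import math
from statistics import NormalDist

n1, n2 = 292, 1508   # sample sizes in this question
u = 193405.5         # U statistic reported by scipy above

m_u = n1 * n2 / 2
sigma_u = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
z = (u - m_u) / sigma_u               # about -3.292
pval = 2 * NormalDist().cdf(-abs(z))  # two-sided p-value, about 0.000995
```

This matches the z statistic and two-sided p-value shown in the session above.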
|
Problem with Mann-Whitney U test in scipy
|
There might be a bug in the package, but if you store the u and the prob output separately, you'll see the u value, although the prob is missing for some reason.
u, prob=scipy.stats.mannwhitneyu(x,y)
|
Problem with Mann-Whitney U test in scipy
There might be a bug in the package, but if you store the u and the prob output separately, you'll see the u value, although the prob is missing for some reason.
u, prob=scipy.stats.mannwhitneyu(x,y)
u
Out[18]: 193405.5
prob
Out[19]: nan
You could then use the normal approximation of $U$ to get a p-value, though. For large samples,
$$z = \frac{U-m_U}{\sigma_U}$$
where $m_U = \frac{n_1n_2}{2}$ and $\sigma_U=\sqrt{\frac{n_1n_2(n_1+n_2+1)}{12}}$ has approximately a standard Normal distribution.
m_u = len(x)*len(y)/2
sigma_u = np.sqrt(len(x)*len(y)*(len(x)+len(y)+1)/12)
z = (u - m_u)/sigma_u
z
Out[23]: -3.2920646126227546
Then you can compute a p-value.
pval = 2*scipy.stats.norm.cdf(-abs(z))  # -abs(z) gives the correct two-sided p for either sign of z
pval
Out[27]: 0.00099454759456888472
Scipy might be trying to compute the exact null distribution of $U$, but the Normal approximation should work fine given the number of observations that you have.
|
Problem with Mann-Whitney U test in scipy
There might be a bug in the package, but if you store the u and the prob output separately, you'll see the u value, although the prob is missing for some reason.
u, prob=scipy.stats.mannwhitneyu(x,y)
|
40,051
|
Problem with Mann-Whitney U test in scipy
|
This thread is old, but for those like me who encounter this scipy bug and find themselves here -
The issue is indeed the tiecorrect function in scipy.stats.mannwhitneyu(x,y)
If you do not need tie correcting, the scipy.stats.ranksums(x,y) test works fine.
For your case:
import scipy.stats
x = [1.] * 163 + [2.] * 81 + [3.] * 40 + [4.] * 6 + [5.] * 2
y = [1.] * 1007 + [2.] * 362 + [3.] * 99 + [4.] * 27 + [5.] * 13 # real-world example
print(scipy.stats.ranksums(x,y))
Out[6]: RanksumsResult(statistic=3.2920646126227546, pvalue=0.00099454759456888472)
|
Problem with Mann-Whitney U test in scipy
|
This thread is old, but for those like me who encounter this scipy bug and find themselves here -
The issue is indeed the tiecorrect function in scipy.stats.mannwhitneyu(x,y)
If you do not need tie c
|
Problem with Mann-Whitney U test in scipy
This thread is old, but for those like me who encounter this scipy bug and find themselves here -
The issue is indeed the tiecorrect function in scipy.stats.mannwhitneyu(x,y)
If you do not need tie correcting, the scipy.stats.ranksums(x,y) test works fine.
For your case:
import scipy.stats
x = [1.] * 163 + [2.] * 81 + [3.] * 40 + [4.] * 6 + [5.] * 2
y = [1.] * 1007 + [2.] * 362 + [3.] * 99 + [4.] * 27 + [5.] * 13 # real-world example
print(scipy.stats.ranksums(x,y))
Out[6]: RanksumsResult(statistic=3.2920646126227546, pvalue=0.00099454759456888472)
|
Problem with Mann-Whitney U test in scipy
This thread is old, but for those like me who encounter this scipy bug and find themselves here -
The issue is indeed the tiecorrect function in scipy.stats.mannwhitneyu(x,y)
If you do not need tie c
|
40,052
|
Using a chi square test instead of a F test in a linear regression
|
They are closely related. If you divide the Wald statistic by its degrees of freedom, you in essence have an $F$ statistic with that many numerator df, and infinite denominator df. The Wald statistic is seen in cases where the error variance is known, or where asymptotic (large-sample) approximations are used. Seems surprising to see it in a linear regression, as usually there you have a mean-square-error term and use that to make an $F$ test. But in generalized linear models, like logistic or Poisson regression, they are pretty common.
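A simulation sketch of this relationship (stdlib only; $k = 3$ numerator df and a denominator df of 200 are arbitrary illustrative choices): a chi-square statistic divided by its df and an $F$ statistic with a large denominator df share essentially the same critical values.

```python
import random

random.seed(1)

def chi2(k):
    """One chi-square(k) draw: sum of k squared standard normals."""
    return sum(random.gauss(0.0, 1.0) ** 2 for _ in range(k))

k, m, reps = 3, 200, 20000

wald_over_df = [chi2(k) / k for _ in range(reps)]                # Wald statistic / df
f_stats = [(chi2(k) / k) / (chi2(m) / m) for _ in range(reps)]   # F(k, m), m large

crit = 7.815 / k  # chi-square(3) 0.95 quantile, divided by df
tail_wald = sum(v > crit for v in wald_over_df) / reps  # about 0.05
tail_f = sum(v > crit for v in f_stats) / reps          # also about 0.05
```

Both statistics exceed the same critical value about 5% of the time under the null, which is the sense in which $F(k, \infty)$ and $\chi^2_k/k$ coincide.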
|
Using a chi square test instead of a F test in a linear regression
|
They are closely related. If you divide the Wald statistic by its degrees of freedom, you in essence have an $F$ statistic with that many numerator df, and infinite denominator df. The Wald statistic
|
Using a chi square test instead of a F test in a linear regression
They are closely related. If you divide the Wald statistic by its degrees of freedom, you in essence have an $F$ statistic with that many numerator df, and infinite denominator df. The Wald statistic is seen in cases where the error variance is known, or where asymptotic (large-sample) approximations are used. Seems surprising to see it in a linear regression, as usually there you have a mean-square-error term and use that to make an $F$ test. But in generalized linear models, like logistic or Poisson regression, they are pretty common.
|
Using a chi square test instead of a F test in a linear regression
They are closely related. If you divide the Wald statistic by its degrees of freedom, you in essence have an $F$ statistic with that many numerator df, and infinite denominator df. The Wald statistic
|
40,053
|
Using a chi square test instead of a F test in a linear regression
|
This is analogous to the $z$-test vs the $t$-test in the univariate case: if the variance is known, the test statistic is normally distributed ($z$-test), and if it is estimated, the test statistic is $t$-distributed ($t$-test), with the $t$-test converging to the $z$-test for large $n$.
The same thing holds in linear regression: if the error variance is assumed known (or $n$ is large enough for the asymptotic approximation), you get the Wald test. You are correct that the p-value based on the $F$ is usually reported.
In addition, note that the Wald chi-square test reduces to the $z$-test with one variable, and that the $F$-test reduces to the $t$-test.
A wrinkle here - another test that uses the chi-squared distribution is the likelihood ratio test, though I don't think I've seen it referred to as a Wald test before.
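A small simulation illustrates the convergence (illustrative sample sizes; stdlib only): under the null, using the $z$ critical value 1.96 with a $t$ statistic over-rejects badly for tiny $n$ but is nearly exact for large $n$.

```python
import math
import random

random.seed(2)

def t_stat(n):
    """t statistic for H0: mu = 0 from a sample of n standard normals."""
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    mean = sum(xs) / n
    s2 = sum((x - mean) ** 2 for x in xs) / (n - 1)
    return mean / math.sqrt(s2 / n)

reps = 20000
# Rejection rates using the z critical value 1.96 (nominal 5%)
rate_small = sum(abs(t_stat(4)) > 1.96 for _ in range(reps)) / reps    # badly inflated
rate_large = sum(abs(t_stat(200)) > 1.96 for _ in range(reps)) / reps  # near 0.05
```

With $n = 4$ the rejection rate is roughly triple the nominal 5%, while with $n = 200$ it is essentially nominal, which is the $t \to z$ convergence in action.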
|
Using a chi square test instead of a F test in a linear regression
|
This is analogous to the $z$-test vs the $t$-test in the univariate case, where if the variance is known, the distribution of the test statistic is normal ($z$-test), and if it is estimated, the distr
|
Using a chi square test instead of a F test in a linear regression
This is analogous to the $z$-test vs the $t$-test in the univariate case, where if the variance is known, the distribution of the test statistic is normal ($z$-test), and if it is estimated, the distribution of the test statistic is $t$ ($t$-test), with the $t$-test converging to the $z$-test with large n.
Same thing in linear regression, if the error variance is assumed known (or large $n$ with the asymptotic assumption), then wald test. You are correct that the p-value based on the $F$ is usually reported.
In addition, note that the wald chi-square test reduces to the $z$-test with one variable, and that the $F$-test reduces to the $t$-test.
A wrinkle here - another test that uses the chi-squared distribution is the likelihood ratio test, though I don't think I've seen it referred to as a wald test before.
|
Using a chi square test instead of a F test in a linear regression
This is analogous to the $z$-test vs the $t$-test in the univariate case, where if the variance is known, the distribution of the test statistic is normal ($z$-test), and if it is estimated, the distr
|
40,054
|
Name for the special estimate of the mean
|
It's called the midrange.
It's a good way to estimate the population mean of a $\text{Unif}(\mu-\theta,\mu+\theta)$.
It may be quite good in a variety of other circumstances; they'll generally be ones where the density is both symmetric and 'cuts off' relatively quickly at the bounds, rather than ones that very smoothly tail off.
So it should do fairly well as an estimator for the center of say a Beta(1.5,1.5), even though it's not ML (indeed, it looks like it's more efficient than the sample mean even at a Beta(2,2), at least in moderately small samples.)
(It will not in general be suitable as an estimator of the population mean for a non-symmetric distribution, even if it's distinctly platykurtic. So for example, it wouldn't be suitable for estimating the mean of a Beta(0.45,1.8), say.)
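A quick simulation sketch (illustrative choices of $n = 20$ and $\text{Unif}(-1, 1)$) shows the midrange beating the sample mean for a uniform population:

```python
import random
import statistics

random.seed(3)
n, reps = 20, 20000

mids, means = [], []
for _ in range(reps):
    xs = [random.uniform(-1.0, 1.0) for _ in range(n)]  # Unif(mu-theta, mu+theta), mu=0, theta=1
    mids.append((min(xs) + max(xs)) / 2)                # midrange
    means.append(statistics.fmean(xs))                  # sample mean

var_mid = statistics.variance(mids)    # theory: 2/((n+1)(n+2)), about 0.0043
var_mean = statistics.variance(means)  # theory: (1/3)/n, about 0.0167
```

Here the midrange's sampling variance is roughly a quarter of the sample mean's, reflecting the sharp cut-off at the uniform's bounds.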
|
Name for the special estimate of the mean
|
It's called the midrange.
It's a good way to estimate the population mean of a $\text{Unif}(\mu-\theta,\mu+\theta)$.
It may be quite good in a variety of other circumstances; they'll generally be ones
|
Name for the special estimate of the mean
It's called the midrange.
It's a good way to estimate the population mean of a $\text{Unif}(\mu-\theta,\mu+\theta)$.
It may be quite good in a variety of other circumstances; they'll generally be ones where the density is both symmetric and 'cuts off' relatively quickly at the bounds, rather than ones that very smoothly tail off.
So it should do fairly well as an estimator for the center of say a Beta(1.5,1.5), even though it's not ML (indeed, it looks like it's more efficient than the sample mean even at a Beta(2,2), at least in moderately small samples.)
(It will not in general be suitable as an estimator of the population mean for a non-symmetric distribution, even if it's distinctly platykurtic. So for example, it wouldn't be suitable for estimating the mean of a Beta(0.45,1.8), say.)
|
Name for the special estimate of the mean
It's called the midrange.
It's a good way to estimate the population mean of a $\text{Unif}(\mu-\theta,\mu+\theta)$.
It may be quite good in a variety of other circumstances; they'll generally be ones
|
40,055
|
Standard error of the sampling distribution of the mean
|
The quoted formula is not quite right. Let's derive the correct one.
Since the population mean (or any other constant) may be subtracted from every value in a population $S$ without changing the variance of the population or of any sample thereof, we might as well assume the population mean is zero. Letting the values in the population be $\{x_i\, \vert\, i\in S\}$, this implies
$$0 = \sum_{i\in S} x_i.$$
Squaring both sides maintains the equality, giving
$$0 = \sum_{i,j\in S}x_ix_j = \sum_{i\in S}x_i^2 + \sum_{i \ne j \in S} x_ix_j,$$
whence
$$\sum_{i\ne j \in S} x_ix_j = -\sum_{i\in S} x_i^2.$$
This key result will be employed later.
Let $S$ have $N$ elements. Because its mean is zero, its variance is the average squared value:
$$s^2 = \frac{1}{N}\sum_{i\in S}x_i^2.$$
(Please note that there can be no dispute about the denominator of $N$; in particular, it definitely is not $N-1$: this is a population variance, not an estimator.)
To find the variance of the sampling distribution of the mean, consider all possible $n$-element samples. Each corresponds to an $n$-subset $A\subset S$ and has mean
$$\frac{1}{n}\sum_{i\in A} x_i.$$
Since the mean of all the sample means equals the mean of $S$, which is zero, the variance of these $\binom{N}{n}$ sample means is the average of their squares:
$$s_n^2 = \frac{1}{\binom{N}{n}} \sum_{A\subset S}\left(\frac{1}{n}\sum_{i\in A}x_i\right)^2 = \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\sum_{i,j\in A}x_ix_j \\= \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\left(\sum_{i\in A}x_i^2 + \sum_{i\ne j\in A}x_ix_j\right) .$$
(Once again, $\binom{N}{n}$, not $\binom{N}{n}-1$, is the correct denominator: this is the variance of a collection of $\binom{N}{n}$ numbers, not an estimator of anything.)
Fix, for a moment, any particular index $i$. The value $x_i$ will appear in $\binom{N-1}{n-1}$ samples, because each such sample supplements $x_i$ with $n-1$ more elements of $S$ out of the $N-1$ remaining elements (sampling is without replacement, remember). Its contribution to the right hand side therefore equals $\binom{N-1}{n-1}x_i^2$.
Also fixing an index $j\ne i$, similar reasoning shows the product $x_ix_j$ appears in $\binom{N-2}{n-2}$ samples, thereby contributing $\binom{N-2}{n-2}x_ix_j$ to the right hand side. Therefore, upon summing over all such $i$ and $j$ in $S$,
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\sum_{i\ne j\in S}x_ix_j\right).$$
Plug the first result into that last sum:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\left(-\sum_{i\in S}x_i^2\right)\right).$$
It is now straightforward to relate this to the variance of $S$, because $\sum_{i\in S}x_i^2 = Ns^2$:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1} - \binom{N-2}{n-2}\right)\left(Ns^2\right) = \frac{s^2}{n}\left(1 - \frac{n-1}{N-1}\right).$$
Thus the sampling variance for sampling with replacement, $\frac{s^2}{n}$, is multiplied by $1 - \frac{n-1}{N-1}$ to obtain the sampling variance for sampling without replacement, $s_n^2$. Accordingly, the multiplicative adjustment for the sampling standard deviation is its square root, $\sqrt{1- \frac{n-1}{N-1}}$. This differs from the quoted formula, which uses $\sqrt{1 - \frac{n}{N}}$.
Two simple checks can give us some comfort concerning the correctness of this result. First, the sample variance of means of samples of size $n=1$, $s_1^2$, obviously equals the population variance $s^2$. The correct formula states
$$s_1^2 = \frac{s^2}{1}\left(1 - \frac{1-1}{N-1}\right) = s^2,$$
as it should. Unfortunately, the quoted formula asserts that $s_1^2 = s^2(\frac{1}{1} - \frac{1}{N})$ which obviously cannot be right. Second, the sample variance of the means of samples of size $n=N$ is zero, because there is no variation, and indeed both formulas give $0$ in this case.
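The result is easy to verify by brute force on a small population, enumerating every possible sample without replacement (the population values below are arbitrary):

```python
from itertools import combinations
from math import comb

pop = [2.0, 3.0, 5.0, 7.0, 11.0, 13.0, 17.0]  # arbitrary small population
N = len(pop)
mu = sum(pop) / N
s2 = sum((x - mu) ** 2 for x in pop) / N      # population variance (denominator N)

results = {}
for n in range(1, N + 1):
    # All C(N, n) sample means for samples of size n without replacement
    sample_means = [sum(a) / n for a in combinations(pop, n)]
    grand = sum(sample_means) / comb(N, n)    # equals mu
    var_means = sum((m - grand) ** 2 for m in sample_means) / comb(N, n)
    formula = (s2 / n) * (1 - (n - 1) / (N - 1))
    results[n] = (var_means, formula)
```

For every $n$ the enumerated variance of the sample means agrees with $\frac{s^2}{n}\left(1 - \frac{n-1}{N-1}\right)$, including the boundary cases $n=1$ (equal to $s^2$) and $n=N$ (zero).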
|
Standard error of the sampling distribution of the mean
|
The quoted formula is not quite right. Let's derive the correct one.
Since the population mean (or any other constant) may be subtracted from every value in a population $S$ without changing the vari
|
Standard error of the sampling distribution of the mean
The quoted formula is not quite right. Let's derive the correct one.
Since the population mean (or any other constant) may be subtracted from every value in a population $S$ without changing the variance of the population or of any sample thereof, we might as well assume the population mean is zero. Letting the values in the population be $\{x_i\, \vert\, i\in S\}$, this implies
$$0 = \sum_{i\in S} x_i.$$
Squaring both sides maintains the equality, giving
$$0 = \sum_{i,j\in S}x_ix_j = \sum_{i\in S}x_i^2 + \sum_{i \ne j \in S} x_ix_j,$$
whence
$$\sum_{i\ne j \in S} x_ix_j = -\sum_{i\in S} x_i^2.$$
This key result will be employed later.
Let $S$ have $N$ elements. Because its mean is zero, its variance is the average squared value:
$$s^2 = \frac{1}{N}\sum_{i\in S}x_i^2.$$
(Please note that there can be no dispute about the denominator of $N$; in particular, it definitely is not $N-1$: this is a population variance, not an estimator.)
To find the variance of the sampling distribution of the mean, consider all possible $n$-element samples. Each corresponds to an $n$-subset $A\subset S$ and has mean
$$\frac{1}{n}\sum_{i\in A} x_i.$$
Since the mean of all the sample means equals the mean of $S$, which is zero, the variance of these $\binom{N}{n}$ sample means is the average of their squares:
$$s_n^2 = \frac{1}{\binom{N}{n}} \sum_{A\subset S}\left(\frac{1}{n}\sum_{i\in A}x_i\right)^2 = \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\sum_{i,j\in A}x_ix_j \\= \frac{1}{n^2\binom{N}{n}} \sum_{A\subset S}\left(\sum_{i\in A}x_i^2 + \sum_{i\ne j\in A}x_ix_j\right) .$$
(Once again, $\binom{N}{n}$, not $\binom{N}{n}-1$, is the correct denominator: this is the variance of a collection of $\binom{N}{n}$ numbers, not an estimator of anything.)
Fix, for a moment, any particular index $i$. The value $x_i$ will appear in $\binom{N-1}{n-1}$ samples, because each such sample supplements $x_i$ with $n-1$ more elements of $S$ out of the $N-1$ remaining elements (sampling is without replacement, remember). Its contribution to the right hand side therefore equals $\binom{N-1}{n-1}x_i^2$.
Also fixing an index $j\ne i$, similar reasoning shows the product $x_ix_j$ appears in $\binom{N-2}{n-2}$ samples, thereby contributing $\binom{N-2}{n-2}x_ix_j$ to the right hand side. Therefore, upon summing over all such $i$ and $j$ in $S$,
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\sum_{i\ne j\in S}x_ix_j\right).$$
Plug the first result into that last sum:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1}\sum_{i\in S}x_i^2 + \binom{N-2}{n-2}\left(-\sum_{i\in S}x_i^2\right)\right).$$
It is now straightforward to relate this to the variance of $S$, because $\sum_{i\in S}x_i^2 = Ns^2$:
$$s_n^2 = \frac{1}{n^2\binom{N}{n}} \left(\binom{N-1}{n-1} - \binom{N-2}{n-2}\right)\left(Ns^2\right) = \frac{s^2}{n}\left(1 - \frac{n-1}{N-1}\right).$$
Thus the sampling variance for sampling with replacement, $\frac{s^2}{n}$, is multiplied by $1 - \frac{n-1}{N-1}$ to obtain the sampling variance for sampling without replacement, $s_n^2$. Accordingly, the multiplicative adjustment for the sampling standard deviation is its square root, $\sqrt{1- \frac{n-1}{N-1}}$. This differs from the quoted formula, which uses $\sqrt{1 - \frac{n}{N}}$.
Two simple checks can give us some comfort concerning the correctness of this result. First, the sample variance of means of samples of size $n=1$, $s_1^2$, obviously equals the population variance $s^2$. The correct formula states
$$s_1^2 = \frac{s^2}{1}\left(1 - \frac{1-1}{N-1}\right) = s^2,$$
as it should. Unfortunately, the quoted formula asserts that $s_1^2 = s^2(\frac{1}{1} - \frac{1}{N})$ which obviously cannot be right. Second, the sample variance of the means of samples of size $n=N$ is zero, because there is no variation, and indeed both formulas give $0$ in this case.
|
Standard error of the sampling distribution of the mean
The quoted formula is not quite right. Let's derive the correct one.
Since the population mean (or any other constant) may be subtracted from every value in a population $S$ without changing the vari
|
40,056
|
Standard error of the sampling distribution of the mean
|
This result applies specifically to a finite population (of size $N$) and to sampling without replacement (with sample size $n$). It becomes clearer if we write
$$\sigma_{\bar{X}} = \sigma \sqrt{\frac{1}{n}-\frac{1}{N}} = \frac {\sigma}{\sqrt n}\left(\sqrt {1-\frac nN}\right) = \frac {\sigma}{\sqrt n}\left(\sqrt {\frac {N-n}N}\right)$$
Comparing this correction factor with the formula in the CV post mentioned in the comments as a possible duplicate of this one, Explanation of finite correction factor, the factor there appears as $\left(\sqrt {\frac {N-n}{N-1}}\right)$. Why this difference in the denominator?
@chl's answer there mentions
You will notice that some authors use $N$ instead of $N-1$ in the
denominator of the FPC; in fact, it depends on whether you work with
the sample or population statistic: for the variance, it will be $N$
instead of $N-1$ if you are interested in $S^2$ rather than
$\sigma^2$.
...which needs to be reconciled with @whuber's answer.
|
Standard error of the sampling distribution of the mean
|
This result relates specifically for finite population (of size $N$) and for sampling without replacement (for sample size $n$). It becomes clearer if we write
$$\sigma_{\bar{X}} = \sigma \sqrt{\frac
|
Standard error of the sampling distribution of the mean
This result applies specifically to a finite population (of size $N$) and to sampling without replacement (with sample size $n$). It becomes clearer if we write
$$\sigma_{\bar{X}} = \sigma \sqrt{\frac{1}{n}-\frac{1}{N}} = \frac {\sigma}{\sqrt n}\left(\sqrt {1-\frac nN}\right) = \frac {\sigma}{\sqrt n}\left(\sqrt {\frac {N-n}N}\right)$$
Comparing this correction factor with the formula in the CV post mentioned in the comments as a possible duplicate of this one, Explanation of finite correction factor, the factor there appears as $\left(\sqrt {\frac {N-n}{N-1}}\right)$. Why this difference in the denominator?
@chl's answer there mentions
You will notice that some authors use $N$ instead of $N-1$ in the
denominator of the FPC; in fact, it depends on whether you work with
the sample or population statistic: for the variance, it will be $N$
instead of $N-1$ if you are interested in $S^2$ rather than
$\sigma^2$.
...which needs to be reconciled with @whuber's answer.
|
Standard error of the sampling distribution of the mean
This result relates specifically for finite population (of size $N$) and for sampling without replacement (for sample size $n$). It becomes clearer if we write
$$\sigma_{\bar{X}} = \sigma \sqrt{\frac
|
40,057
|
How are standard errors affected in a multivariate regression?
|
Yes, adding controls can increase the power of your statistical tests and make standard errors smaller. To see this, consider the following two regressions for comparison:
$$
\begin{align}
Y_i &= \alpha + \beta D_i + X'_i\gamma +e_i \newline
Y_i &= \mu + \pi D_i + u_i
\end{align}
$$
Assume that $X,D$ are uncorrelated with the error terms $e$ and $u$ and that we have homoscedasticity. Then you can show that:
$$
\begin{align}
\sqrt{n}(\widehat{\beta} - \beta) &\stackrel{d}\rightarrow N\left(0, \frac{E(e^2)}{\text{Var}(D_i)(1-R^2_{D,X})} \right) \newline
\sqrt{n}(\widehat{\mu} - \mu) &\stackrel{d}\rightarrow N\left( 0,\frac{E(u^2)}{\text{Var}(D_i)} \right)
\end{align}
$$
where $\stackrel{d}\rightarrow$ denotes convergence in distribution and $R^2_{D,X}$ is the $R^2$ from the regression of $D_i$ on $X_i$. I'm not going to prove this unless you explicitly request it because the main point of interest is the next result which uses the variances from these two distributions. The ratio of the two asymptotic variances is:
$$
\frac{1-R^2_{Y,(D,X)}}{1-R^2_{Y,D}}\cdot \frac{1}{1-R^2_{D,X}}
$$
where again $R^2_{Y,(D,X)}$ and $R^2_{Y,D}$ are the $R^2$s from the first and the second regression, respectively.
What does this ratio tell you?
It shows the trade-off in asymptotic variances when going from the short to the long regression. The first term is smaller than (or equal to) one since $R^2$ increases when you add $X_i$ to the regression. It will be much smaller than one if $X_i$ explains a lot of the variation in $Y_i$. So this is how your standard errors decrease.
The second term will be larger than (or equal to) one depending on the correlation between $D_i$ and $X_i$. If the two are strongly correlated, then the $R^2$ from the regression of $D_i$ on $X_i$ will be large and hence this second term will be large which is why your standard errors increase in this case.
If $D_i$ and $X_i$ are uncorrelated (e.g. if $D_i$ comes from a randomized experiment), then $R^2_{D,X} = 0$. This is the case in which adding control variables is most beneficial, because they soak up residual variance and increase the power of your statistical tests on $D_i$, which is great if this is your variable of interest.
So why don't your standard errors change? It's probably because the two counteracting effects from adding controls to your regression balance each other.
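A simulation sketch of the favorable case (illustrative coefficients; $D$ and $X$ generated independently, with $X$ explaining much of $Y$) shows the long regression's standard error on $D$ coming out smaller:

```python
import math
import random

random.seed(4)
n = 2000
d = [random.gauss(0, 1) for _ in range(n)]  # treatment, independent of x
x = [random.gauss(0, 1) for _ in range(n)]  # control that explains much of y
y = [1.0 + 0.5 * d[i] + 2.0 * x[i] + random.gauss(0, 1) for i in range(n)]

def center(v):
    m = sum(v) / len(v)
    return [vi - m for vi in v]

dc, xc, yc = center(d), center(x), center(y)
Sdd = sum(di * di for di in dc)
Sxx = sum(xi * xi for xi in xc)
Sdx = sum(dc[i] * xc[i] for i in range(n))
Sdy = sum(dc[i] * yc[i] for i in range(n))
Sxy = sum(xc[i] * yc[i] for i in range(n))

# Short regression y ~ d
b_short = Sdy / Sdd
rss_short = sum((yc[i] - b_short * dc[i]) ** 2 for i in range(n))
se_short = math.sqrt(rss_short / (n - 2) / Sdd)

# Long regression y ~ d + x (solve the 2x2 normal equations)
det = Sdd * Sxx - Sdx ** 2
b_long = (Sdy * Sxx - Sxy * Sdx) / det
g_long = (Sxy * Sdd - Sdy * Sdx) / det
rss_long = sum((yc[i] - b_long * dc[i] - g_long * xc[i]) ** 2 for i in range(n))
se_long = math.sqrt(rss_long / (n - 3) * Sxx / det)
```

Because $R^2_{D,X} \approx 0$ while $X$ soaks up most of the residual variance, `se_long` ends up well below `se_short`, matching the first term of the variance ratio dominating.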
|
How are standard errors affected in a multivariate regression?
|
Yes, adding controls can increase the power of your statistical tests and make standard errors smaller. To see this, consider the following two regressions for comparison:
$$
\begin{align}
Y_i &= \alp
|
How are standard errors affected in a multivariate regression?
Yes, adding controls can increase the power of your statistical tests and make standard errors smaller. To see this, consider the following two regressions for comparison:
$$
\begin{align}
Y_i &= \alpha + \beta D_i + X'_i\gamma +e_i \newline
Y_i &= \mu + \pi D_i + u_i
\end{align}
$$
Assume that $X,D$ are uncorrelated with the error terms $e$ and $u$ and that we have homoscedasticity. Then you can show that:
$$
\begin{align}
\sqrt{n}(\widehat{\beta} - \beta) &\stackrel{d}\rightarrow N\left(0, \frac{E(e^2)}{\text{Var}(D_i)(1-R^2_{D,X})} \right) \newline
\sqrt{n}(\widehat{\mu} - \mu) &\stackrel{d}\rightarrow N\left( 0,\frac{E(u^2)}{\text{Var}(D_i)} \right)
\end{align}
$$
where $\stackrel{d}\rightarrow$ denotes convergence in distribution and $R^2_{D,X}$ is the $R^2$ from the regression of $D_i$ on $X_i$. I'm not going to prove this unless you explicitly request it because the main point of interest is the next result which uses the variances from these two distributions. The ratio of the two asymptotic variances is:
$$
\frac{1-R^2_{Y,(D,X)}}{1-R^2_{Y,D}}\cdot \frac{1}{1-R^2_{D,X}}
$$
where again $R^2_{Y,(D,X)}$ and $R^2_{Y,D}$ are the $R^2$s from the first and the second regression, respectively.
What does this ratio tell you?
It shows the trade-off in asymptotic variances when going from the short to the long regression. The first term is smaller than (or equal to) one since $R^2$ increases when you add $X_i$ to the regression. It will be much smaller than one if $X_i$ explains a lot of the variation in $Y_i$. So this is how your standard errors decrease.
The second term will be larger than (or equal to) one depending on the correlation between $D_i$ and $X_i$. If the two are strongly correlated, then the $R^2$ from the regression of $D_i$ on $X_i$ will be large and hence this second term will be large which is why your standard errors increase in this case.
If $D_i$ and $X_i$ are uncorrelated (e.g. if $D_i$ comes from a randomized experiment), then $R^2_{D,X} = 0$. This is the case in which adding control variables is most beneficial, because they soak up residual variance and increase the power of your statistical tests on $D_i$, which is great if this is your variable of interest.
So why don't your standard errors change? It's probably because the two counteracting effects from adding controls to your regression balance each other.
|
How are standard errors affected in a multivariate regression?
Yes, adding controls can increase the power of your statistical tests and make standard errors smaller. To see this, consider the following two regressions for comparison:
$$
\begin{align}
Y_i &= \alp
|
40,058
|
Linear regression with upper and/or lower limits in R?
|
Yes, you can do this in Lavaan.
Here's an example. We fit a regression model, and find an estimate of 0.10. Then fit a model using lavaan, and get the same parameter estimate. Then fit the model with a constraint that b1 has to be greater than 0.
library(lavaan)
set.seed(1234)
df <- as.data.frame(matrix(rnorm(500), ncol=2))
names(df) <- c("x", "y")
summary(glm(y ~ x, data=df))
lavModel1 <- 'y ~ b1*x'
summary(sem(lavModel1, df))
lavModel2 <- 'y ~ b1*x
b1 > 0'
summary(sem(lavModel2, df))
|
Linear regression with upper and/or lower limits in R?
|
Yes, you can do this in Lavaan.
Here's an example. We fit a regression model, and find an estimate of 0.10. Then fit a model using lavaan, and get the same parameter estimate. Then fit the model with
|
Linear regression with upper and/or lower limits in R?
Yes, you can do this in Lavaan.
Here's an example. We fit a regression model, and find an estimate of 0.10. Then fit a model using lavaan, and get the same parameter estimate. Then fit the model with a constraint that b1 has to be greater than 0.
library(lavaan)
set.seed(1234)
df <- as.data.frame(matrix(rnorm(500), ncol=2))
names(df) <- c("x", "y")
summary(glm(y ~ x, data=df))
lavModel1 <- 'y ~ b1*x'
summary(sem(lavModel1, df))
lavModel2 <- 'y ~ b1*x
b1 > 0'
summary(sem(lavModel2, df))
|
Linear regression with upper and/or lower limits in R?
Yes, you can do this in Lavaan.
Here's an example. We fit a regression model, and find an estimate of 0.10. Then fit a model using lavaan, and get the same parameter estimate. Then fit the model with
|
40,059
|
Linear regression with upper and/or lower limits in R?
|
Yes, you can do it by re-defining linear regression as an optimization problem (a residual sum of squares cost function for example) and solving it using the constraints you want. So to use the numbers as the example from @Jeremy:
(Note that in this example we are fitting a model without an intercept)
set.seed(1234)
df <- as.data.frame(matrix(rnorm(500), ncol=2))
names(df) <- c("x", "y")
CostFunction <- function(theta){sum( (df$y - theta*df$x)^2)}
theta_0 <- 1
theta_opt <- optim(fn= CostFunction, lower=0, par = theta_0, method="L-BFGS-B")
Ultimately any package you wish to use for constrained regression will do the same thing: formulate a cost function and solve a constrained optimization problem. In the case of ordinary least squares as the one shown here "ordinary" optimizers using "simple" BFGS variants will do just fine; harder problems (eg. GLMM) will require more exotic beasts like BOBYQA.
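The same idea carries over to Python; here is a minimal stdlib sketch that solves the one-parameter problem with projected gradient descent instead of L-BFGS-B (the data-generating numbers are arbitrary, with a negative true slope so the lower bound binds):

```python
import random

random.seed(5)
n = 250
x = [random.gauss(0, 1) for _ in range(n)]
# True slope is negative, so the constraint theta >= 0 should bind
y = [-0.7 * xi + random.gauss(0, 1) for xi in x]

def grad(theta):
    """Gradient of the residual sum of squares at theta."""
    return sum(-2 * x[i] * (y[i] - theta * x[i]) for i in range(n))

# Projected gradient descent with lower bound theta >= 0
theta = 1.0
lr = 1.0 / (2 * sum(xi * xi for xi in x))  # 1 / (Hessian of the quadratic cost)
for _ in range(200):
    theta = max(0.0, theta - lr * grad(theta))  # gradient step, then project

# Unconstrained least-squares slope for comparison
ols = sum(x[i] * y[i] for i in range(n)) / sum(xi * xi for xi in x)
```

For this quadratic cost the constrained solution is just the unconstrained least-squares slope clipped at the bound, which the iteration recovers: `ols` is negative and `theta` ends at 0.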
|
40,060
|
Can averaging all the variables be seen as a crude form of PCA?
|
PCA forms linear combinations of the variables, and averaging all the variables is also taking a linear combination -- namely one where all the weights are equal to $1/d$, where $d$ is the number of variables. So one can view these approaches as conceptually related.
Moreover, under certain conditions averaging can indeed be called "a very crude relative of PCA", in a sense that PCA will result in the first principal component being proportional to the average of all variables (or close to it). What are these conditions?
Sphericity is perfect or near-perfect. All variables are perfectly orthogonal from all other variables. Variables are all perfectly scaled against each other.
If all variables are "perfectly scaled", let's assume that they are centered and standardized to have variances equal to $1$. This means that the covariance and correlation matrices coincide.
Note that if all the variables are indeed "perfectly orthogonal" to each other as you suggest, then the covariance/correlation matrix becomes the identity matrix, and any vector can be chosen to represent its first principal component; all eigenvalues are equal to $1$ and PCA would be useless (as would be the averaging). So let's rather consider small but non-zero pairwise covariances/correlations.
Now if all pairwise correlations are equal to the same number $c$, i.e. the covariance matrix looks like this: $$\left(\begin{array}{}1&c&c&c\\c&1&c&c\\c&c&1&c\\c&c&c&1\end{array} \right),$$ then the first eigenvector will be proportional to $$\left(\begin{array}{}1\\1\\1\\1 \end{array}\right),$$ i.e. the first PC will be proportional to the average over all variables. This should be obvious from the permutation-invariance of this covariance matrix.
This is true with any number of variables, not necessarily four. This also remains true for any value of $c \in (0,1)$, whether the variables are nearly orthogonal ($c\approx 0$) or not.
Moreover, this often remains approximately true (as noted by @whuber in the comments) if the off-diagonal elements are not exactly equal, but are of similar magnitude. Then the first PC will often be close to the average as well. For a nice real-life example of such a situation, see this answer illustrating the dataset of crab body measurements.
|
40,061
|
Warning message in auto.arima
|
It looks fine, auto.arima() tries many candidate models. One of them may have been dodgy.
The auto.arima() algorithm follows Hyndman & Khandakar (2008) Automatic time series forecasting (pdf), although the OCSB test is a new development. The algorithm tries different combinations of p, q, P and Q and chooses the one with the smallest AIC, AICc or BIC. The choice of criterion depends on which parameters you pass to the function. For some combinations of p, q, P and Q, it may not be able to fit a model and hence you get that warning. However, a "good one" is selected.
You should also make sure that you have enough data, at least four years.
Some important checks:
Does the model make sense? For example, if you have monthly retail sales, you will probably expect a seasonal model to be fit.
How well does it forecast out of sample?
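The selection loop that produces such warnings is conceptually simple; here is a sketch of my own in Python, with a stand-in fit function rather than a real ARIMA fitter (the toy AIC formula and the failure condition are made up):

```python
import math

def fit(order):
    # stand-in for fitting ARIMA(p, d, q); pretend some orders fail
    p, q = order
    if p + q > 3:
        raise ValueError("optimization failed")  # -> a warning, not a stop
    return {"aic": 100 - 10 * p - 9 * q + 4 * (p + q)}  # toy AIC

best, best_aic = None, math.inf
for p in range(3):
    for q in range(3):
        try:
            aic = fit((p, q))["aic"]
        except ValueError:
            continue            # a dodgy candidate is simply skipped
        if aic < best_aic:
            best, best_aic = (p, q), aic
```

The point is that a candidate failing to fit only removes it from the comparison; the best of the remaining models is still returned.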
|
40,062
|
Warning message in auto.arima
|
[I think this question should be placed on stackexchange]
You should try:
auto.arima(forecast_data_ts, approximation=FALSE,trace=FALSE)
|
40,063
|
Looking for a proof that overfitting a model leads to greater variance estimates (under OLS)
|
Both answers posted so far are useful (+1) but let me present this in a slightly different way, using the Minimum Description Length principle. The basic idea behind MDL is related to Kolmogorov complexity and the concept of the minimum-sized program required to reproduce a sequence. The MDL principle states that one should prefer models that can communicate the data in the smallest number of bits (Hastie09). As Shannon's source coding theorem has shown, the expected message length for a given prefix code (i.e. model) is: $L = -\Sigma_{a\in A} P(a) \log_2 P(a)$ where $A$ is the set of all possible messages we would like to transmit; if we write this for an infinite set of messages (effectively something in $R$), $L = -\int P(a) \log_2 P(a) da$. One can therefore see that we need $-\log_2 P(a)$ bits to transmit a random variable $a$ with probability density function $P(a)$. Now, given that when transmitting a dataset $y$ of model outputs one effectively has to transmit both the best-fit parameters of the model $m_i$, $\theta^*$, and the discrepancy between the original data and the fitted data, one can write the total length as:
\begin{align}
L = (-\log_2 Pr(\theta^*|m_i)) + (- \log_2 Pr(y|\theta^*,m_i))
\end{align}
So while you will decrease the second term by over-fitting, you will increase your first term by adding "redundant" information. In essence you will increase the variance of $\theta^*$ unnecessarily.
This is by no means a (formal) proof but I thought it might be fun to consider. :)
|
40,064
|
Looking for a proof that overfitting a model leads to greater variance estimates (under OLS)
|
The reference you want is The Analysis of Market Demand (JSTOR) by Richard Stone, Journal of the Royal Statistical Society, Vol. 108, No. 3/4 (1945), pp. 286-391. I can't find an ungated link, so here's the gist of it.
He gives a formula for the estimated variance of OLS regressor $\beta_k$ in a regression of $y$ on $K$ variables as
$$
\frac{1}{N-K}\cdot\frac{\sigma^2_y}{\sigma^2_k}\cdot\frac{1-R^2}{1-R^2_k},
$$
where $\sigma^2_y$ is the estimated variance of $y$, $\sigma^2_k$ is the estimated variance of $x_k$, $R_k^2$ is from the regression of $x_k$ on $K-1$ remaining independent variables, and $N$ is the sample size. The set of $K$ already includes a constant.
Now we make L'Hospital and the Bernoullis spin in their tombs with some terrible math.
To overfit, fix $N$ and start adding variables ($K \rightarrow N$). As you do this, both of the $R^2$s approach 1, since they are nondecreasing functions of $K$. The middle fraction remains constant since $N$ is fixed. The first fraction grows since you're dividing by something closer and closer to zero.
|
40,065
|
Looking for a proof that overfitting a model leads to greater variance estimates (under OLS)
|
Basically, you are asking for an interpretation of Occam's razor in terms of probability; quoting from wikipedia, Occam's razor:
is a principle of parsimony, economy, or succinctness used in
problem-solving. It states that among competing hypotheses, the one
with the fewest assumptions should be selected.
I can direct you to this paper[0]. There, the authors generalize and quantify the original formulation's "assumptions" concept as
the degree to which a proposition is unnecessarily accommodating to
possible observable data
In a nutshell, given an equal fit, simpler priors have higher posteriors. Again, quoting from wikipedia:
all assumptions introduce possibilities for error; if an assumption
does not improve the accuracy of a theory, its only effect is to
increase the probability that the overall theory is wrong.
In essence, given an equal fit of the observed data, simpler models are preferred over models which would have accommodated a wide range of other possible data because they have a higher probability of being true.
[0]:Jefferys W. H. and Berger J. O. (1991). Sharpening Ockham's Razor On a Bayesian Strop.
|
40,066
|
Is a lower training accuracy possible in overfitting (one class SVM)
|
Proportion classified correctly is a discontinuous improper scoring rule that is optimized by a bogus model. I would not believe anything that you learn from it.
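To see why accuracy can mislead where a proper scoring rule does not, here is a toy comparison (a Python sketch of my own; the labels and probabilities are made up):

```python
import numpy as np

def brier(p, y):
    # Brier score: mean squared gap between predicted probability and outcome
    return np.mean((p - y) ** 2)

y = np.array([1, 0, 1, 1, 0])
sharp  = np.array([0.9, 0.1, 0.8, 0.7, 0.2])  # confident, well calibrated
hedged = np.array([0.6, 0.4, 0.6, 0.6, 0.4])  # same ranking, vaguer

# the proper Brier score prefers the sharper probabilities...
print(brier(sharp, y) < brier(hedged, y))      # True
# ...while accuracy at a 0.5 threshold cannot tell them apart
acc = lambda p: np.mean((p > 0.5) == y)
print(acc(sharp) == acc(hedged))               # True (both 1.0)
```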
|
40,067
|
Is a lower training accuracy possible in overfitting (one class SVM)
|
UPDATE
There is probably a numerical issue with one-class nu-SVM in LibSVM. At the optimum, some training instances should satisfy w'*x - rho = 0. However, numerically they may come out slightly smaller than zero, and are then wrongly counted as training errors. Since nu is an upper bound on the fraction of training points on the wrong side of the hyperplane, numerical issues occur in calculating the first case because some training points satisfying y*(w'*x + b) - rho = 0 become negative.
This issue does not occur for nu-SVC for two-class classification.
The authors added this issue to their FAQ.
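The miscounting mechanism is easy to reproduce in miniature (a Python sketch of my own, with made-up decision values):

```python
import numpy as np

# decision values w'x - rho for five training points; two sit exactly on
# the boundary but come out as tiny negatives due to floating-point error
decision = np.array([0.8, 0.3, -1e-12, -1e-15, -0.5])

naive_errors = int(np.sum(decision < 0))    # boundary points miscounted
tol_errors = int(np.sum(decision < -1e-9))  # a small tolerance fixes it
print(naive_errors, tol_errors)             # 3 1
```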
-----------OLD ANSWER BELOW-------------------------------------------------------------------------
Thanks for @cbeleites's note. I investigated the influence of both $\gamma$ and $\nu$ in one class SVM. I used 5-fold cross validation (but not the '-v 5' option in libsvm) by shuffling the data 100 times and then averaging the accuracy (still using the proportion classified correctly). The result images show the training accuracy, testing accuracy, and generalization error (the difference between the former two) with different combinations of $\gamma$ and $\nu$.
Cbeleites is correct that $\gamma$ itself is not sufficient to determine the variance of the model. The underfitting is very clearly shown in subfigure (3), but it seems like there is only slight overfitting around the middle part ($\nu \approx 0.1$, $\gamma \approx 5$; I didn't locate the exact coordinates). And there is the "longish optimum" as Cbeleites mentioned in the comment. Basically, large $\gamma$ and $\nu$ may cause underfitting, but the dependence of overfitting on the coefficients is not that evident. I used the logarithm of $\gamma$ and $\nu$ to show the smaller value region more clearly below.
|
40,068
|
Is a lower training accuracy possible in overfitting (one class SVM)
|
Based on Dr. Harrell's suggestion, I tried Logarithmic and Brier scoring rule. Since libsvm does not support probability estimation on nu-svm, I had to do it with binary class SVM.
Some notes on the result image:
The proportion classified correctly differs with the '-b 1' option in training and testing. Since the other scoring rules are calculated with '-b 1', it makes more sense to compare the last three sub-figures;
The maxima and minima of the logarithmic and Brier scores occur at the same place ($\gamma \approx 50$), at which the accuracy in sub-figure 2 is $0$, and in sub-figure 2 is $100 \% $. The functions are continuous but not monotone, so my concern in the OP still stands.
The figures with only 2 features as it was in the OP:
|
40,069
|
Optimal lag length in VECM using vars R package
|
For VEC models you should select the number of lags based on information criteria computed on a VAR model in the levels of your time series. For that you can use the function VARselect from the same package vars.
The function cajorls does not have an argument $K$. It does have an argument $r$, which denotes the cointegration rank. The argument $K$ in the function ca.jo controls the number of lags of the VEC model.
The usual workflow for estimating a VEC model is the following (rough outline). Suppose your time series are in the matrix y.
Find the number of lags using VARselect(y)
Determine the cointegration rank using the function ca.jo. Pass the number of lags found in the first step as argument K.
Fit VEC model using the cointegration vectors determined from the second step. This is performed by function cajorls, where you should pass the result of ca.jo and the number of cointegration vectors.
|
40,070
|
Implementing Neural Network for time series
|
Some Googling for specifically neural networks and seasonality leads to this paper, Neural network forecasting for seasonal and trend time series, Zhang and Qi, European Journal of Operational Research, V.160, 2, 16 January 2005, 501–514. In this paper the authors sought to compare the Box-Jenkins approach with a neural network approach. From the abstract:
We find that neural networks are not able to capture seasonal or trend variations effectively with the unpreprocessed raw data and either detrending or deseasonalization can dramatically reduce forecasting errors. Moreover, a combined detrending and deseasonalization is found to be the most effective data preprocessing approach.
They conclude that accounting for trend and seasonality in your preprocessing steps is a good idea.
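The recommended preprocessing (detrend, then deseasonalize) is cheap to do before feeding a network; here is a minimal numpy sketch of my own on synthetic monthly data (the trend, cycle, and noise levels are made up):

```python
import numpy as np

rng = np.random.default_rng(7)
n, period = 120, 12
t = np.arange(n)
# synthetic monthly series: linear trend + seasonal cycle + noise
y = 0.05 * t + np.sin(2 * np.pi * t / period) + 0.1 * rng.normal(size=n)

# detrend: subtract a least-squares linear trend
trend = np.polyval(np.polyfit(t, y, 1), t)
detrended = y - trend

# deseasonalize: subtract the mean of each calendar month across years
seasonal = detrended.reshape(-1, period).mean(axis=0)
residual = detrended - np.tile(seasonal, n // period)
# `residual` is what one would hand to the network
```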
|
40,071
|
Implementing Neural Network for time series
|
There is an editorial by Chris Chatfield in the International Journal of Forecasting 9 (1993) 1-3 entitled "Neural networks: Forecasting breakthrough or passing fad?" It is focused on comparing ARIMA models and neural networks. He warns that sometimes apparently successful applications of neural networks in business and economic forecasting are reported without comparing the results with any more established alternatives. Chatfield concludes:
In summary it is possible that neural nets will outperform standard forecasting
procedures when a fair comparison is made, at least for certain types of situation, but
there is little systematic evidence of this as yet.
There is also a paper co-authored by Chatfield published in Applied Statistics. It compares NN with Box-Jenkins and Holt-Winters and reports on many potential problems in using NN for forecasting. The authors advise "it is unwise to apply NN blindly in a 'black-box' as has sometimes been suggested", which I think answers your questions.
If you are currently working on neural networks for time series forecasting, I would suggest that you build your own collection of quality references. The two references above are a good start. Brian Ripley did not discuss NN for time series in this piece of R News or in his "Pattern Recognition and Neural Networks" book, but you can probably find his work on neural networks for time series prediction elsewhere.
You may also check this review paper: Forecasting with artificial neural networks:
The state of the art in the IJF, even though it is dated 1998. It has references to work on applying NN to multivariate time series problems. In particular, it says that
Gorr (1994) believes that ANNs can be more appropriate for the following situations:
(1) large data sets;
(2) problems with nonlinear structure;
(3) the multivariate time series forecasting problems.
There is also "...A Review from a Statistical Perspective" (and Leo Breiman remarked in his commentary on it that "room is left for other statistical perspective") published in Statistical Science, Vol. 9, No. 1 (Feb., 1994).
|
Implementing Neural Network for time series
|
There is an editorial by Chris Chatfield in the International Journal of Forecasting 9 (1993) 1-3 entitled "Neural networks: Forecasting breakthrough or passing fad?" It is focused on comparing ARIMA
|
Implementing Neural Network for time series
There is an editorial by Chris Chatfield in the International Journal of Forecasting 9 (1993) 1-3 entitled "Neural networks: Forecasting breakthrough or passing fad?" It is focused on comparing ARIMA models and neural networks. He warns that sometimes apparently successful applications of neural networks in business and economic forecasting are reported without comparing the results with any more established alternatives. Chatfield concludes:
In summary it is possible that neural nets will outperform standard forecasting
procedures when a fair comparison is made, at least for certain types of situation, but
there is little systematic evidence of this as yet.
There is also a paper co-authored by Chatfield published in Applied Statistics. It compares NN with Box-Jenkins and Holt-Winters and reports on many potential problems in using NN for forecasting. The authors advise "it is unwise to apply NN blindly in a 'black-box' as has sometimes been suggested", which I think answers your questions.
If you are currently working on neural networks for time series forecasting, I would suggest that you build your own collection of quality references. The two references above are a good start. Brian Ripley discusses NN for time series neither in this piece of R News nor in his "Pattern Recognition and Neural Networks" book, but you can probably find his work on neural networks for time series prediction elsewhere.
You may also check this review paper: Forecasting with artificial neural networks:
The state of the art in the IJF, even though it is dated 1998. It has references to work on applying NN to multivariate time series problems. In particular, it says that
Gorr (1994) believes that ANNs can be more appropriate for the following situations:
(1) large data sets;
(2) problems with nonlinear structure;
(3) the multivariate time series forecasting problems.
There is also "...A Review from a Statistical Perspective" (and Leo Breiman remarked in his commentary on it that "room is left for other statistical perspective") published in Statistical Science, Vol. 9, No. 1 (Feb., 1994).
|
Implementing Neural Network for time series
There is an editorial by Chris Chatfield in the International Journal of Forecasting 9 (1993) 1-3 entitled "Neural networks: Forecasting breakthrough or passing fad?" It is focused on comparing ARIMA
|
40,072
|
Interpretation of log-level difference-in-differences specification
|
You should treat the interaction variable as a dummy and follow this advice from David Giles:
If $Treat\cdot Post$ switches from 0 to 1, the % impact on $Y$ is $100 \cdot (\exp(\beta_4 - \frac{1}{2} \hat \sigma_{\beta_4}^2)-1).$
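As a quick numeric sketch of that adjustment (the estimate and standard error here are made up for illustration):

```python
import math

def dummy_pct_effect(beta, se):
    """Percent impact on Y when a dummy switches from 0 to 1 in a
    log-level regression, using the bias adjustment quoted from Giles:
    100 * (exp(beta - se^2 / 2) - 1), where se is beta's standard error."""
    return 100.0 * (math.exp(beta - 0.5 * se**2) - 1.0)

# Hypothetical estimates: beta_4 = 0.25, se = 0.10
effect = dummy_pct_effect(0.25, 0.10)
print(round(effect, 2))  # compare with the naive reading 100 * beta_4 = 25
```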
|
Interpretation of log-level difference-in-differences specification
|
You should treat the interaction variable as a dummy and follow this advice from David Giles:
If $Treat\cdot Post$ switches from 0 to 1, the % impact on $Y$ is $100 \cdot (\exp(\beta_4 - \frac{1}{
|
Interpretation of log-level difference-in-differences specification
You should treat the interaction variable as a dummy and follow this advice from David Giles:
If $Treat\cdot Post$ switches from 0 to 1, the % impact on $Y$ is $100 \cdot (\exp(\beta_4 - \frac{1}{2} \hat \sigma_{\beta_4}^2)-1).$
|
Interpretation of log-level difference-in-differences specification
You should treat the interaction variable as a dummy and follow this advice from David Giles:
If $Treat\cdot Post$ switches from 0 to 1, the % impact on $Y$ is $100 \cdot (\exp(\beta_4 - \frac{1}{
|
40,073
|
Interpretation of log-level difference-in-differences specification
|
What's required is that $\beta \cdot \Delta x$ be small. If you know that $\Delta x$ is 1, then that means that $\beta$ has to be small. How small? The true proportionate change in $Outcome$ when the dummy rises by $1$ is $\exp(\beta)-1$. The approximate change is $\beta$. The error from the approximation is:
\begin{equation}
\textrm{Error} = \exp(\beta)-1-\beta
\end{equation}
For small $|\beta|$, this is pretty small. For example, for $\beta=0.1$ (approximate 10% change), the true percent change in $Outcome$ when the dummy turns on is 10.5%. Given the usual standard errors in empirical work, I'm happy to ignore this. By the time you get to a $\beta$ of 0.2 (approximate 20%), the true percent change is 22%. Willing to ignore this much approximation error? Again, I am, but you may not be. This is now a 10% approximation error. By the time you get to $\beta=0.3$, the true percent change in outcome is 35% rather than 30%, and I am not happy to ignore this any more.
So, my rule of thumb is to ignore this approximation error for $|\beta|<0.2$ and worry about it for $\beta$ bigger than that.
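The error figures above can be reproduced in a few lines:

```python
import math

# Error of approximating the true proportionate change exp(beta) - 1
# by beta itself, at the thresholds discussed above
errors = {}
for beta in (0.1, 0.2, 0.3):
    true_change = math.exp(beta) - 1
    errors[beta] = true_change - beta
    print(beta, round(true_change, 3), round(errors[beta], 3))
```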
|
Interpretation of log-level difference-in-differences specification
|
What's required is that $\beta \cdot \Delta x$ be small. If you know that $\Delta x$ is 1, then that means that $\beta$ has to be small. How small? The true proportionate change in $Outcome$ when t
|
Interpretation of log-level difference-in-differences specification
What's required is that $\beta \cdot \Delta x$ be small. If you know that $\Delta x$ is 1, then that means that $\beta$ has to be small. How small? The true proportionate change in $Outcome$ when the dummy rises by $1$ is $\exp(\beta)-1$. The approximate change is $\beta$. The error from the approximation is:
\begin{equation}
\textrm{Error} = \exp(\beta)-1-\beta
\end{equation}
For small $|\beta|$, this is pretty small. For example, for $\beta=0.1$ (approximate 10% change), the true percent change in $Outcome$ when the dummy turns on is 10.5%. Given the usual standard errors in empirical work, I'm happy to ignore this. By the time you get to a $\beta$ of 0.2 (approximate 20%), the true percent change is 22%. Willing to ignore this much approximation error? Again, I am, but you may not be. This is now a 10% approximation error. By the time you get to $\beta=0.3$, the true percent change in outcome is 35% rather than 30%, and I am not happy to ignore this any more.
So, my rule of thumb is to ignore this approximation error for $|\beta|<0.2$ and worry about it for $\beta$ bigger than that.
|
Interpretation of log-level difference-in-differences specification
What's required is that $\beta \cdot \Delta x$ be small. If you know that $\Delta x$ is 1, then that means that $\beta$ has to be small. How small? The true proportionate change in $Outcome$ when t
|
40,074
|
Flaw in a conditional probability argument
|
The flaw in the argument is that the conditioning random variable is not well-defined.
The ambiguity lies in how our friend peeking at the dice decides to report the information back to us. If we let $X_1$ and $X_2$ denote the random variables associated with the values of each of the dice, then it is certainly true that for each $k \in \{1,2,\ldots,6\}$,
$$
\mathbb P(X_1 + X_2 = 7 \mid X_1 = k \cup X_2 = k) = \frac{2}{11} \>,
$$
independently of $k$.
However, the events $\{X_1 = k \cup X_2 = k\}$ are clearly not mutually exclusive, and so clearly we cannot claim
$$
\begin{align}
\mathbb P(X_1 + X_2 = 7) &\stackrel{?}{=} \sum_{k=1}^6 \mathbb P(X_1 + X_2 = 7 \mid X_1 = k \cup X_2 = k) \mathbb P( X_1 = k \cup X_2 = k ) \cr
&\stackrel{?}{=} \frac{2}{11} \sum_{k=1}^6 \mathbb P( X_1 = k \cup X_2 = k ) \cr
&\stackrel{?}{=} \frac{2}{11}
\end{align}
$$
Formally, we need to properly define a random variable, say $K$, that encodes the knowledge imparted by our peeking friend.
Our peeking friend could always report the value of the left-most die, or the right-most, or the larger of the two. She could flip a coin and then report based on the coin flip, or employ any number of more complicated machinations.
But, once this process is specified, the apparent paradox vanishes.
Indeed, suppose that $K = X_1$. Then, we have
$$
\begin{align}
\mathbb P(X_1 + X_2 = 7) &= \sum_{k=1}^6 \mathbb P(X_1+X_2 = 7, K=k) \cr
&= \sum_{k=1}^6 \mathbb P(X_1+X_2 = 7 \mid K=k) \mathbb P(K=k) \cr
&= \sum_{k=1}^6 \frac{1}{36} = \frac{1}{6} \>.
\end{align}
$$
Similar arguments hold if we choose $K = X_2$ or $K = \max(X_1,X_2)$, etc.
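An exact enumeration over the 36 equally likely outcomes confirms both the $2/11$ conditional probability and the recovery of $1/6$ once the well-defined $K = X_1$ is used:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 (X1, X2) pairs

# Unconditional probability that the sum is 7
p_sum7 = sum(1 for x1, x2 in outcomes if x1 + x2 == 7) / 36

# Conditional on "at least one die shows k": 2/11, for any k
k = 3
cond = [(x1, x2) for x1, x2 in outcomes if x1 == k or x2 == k]
p_sum7_given_k = sum(1 for x1, x2 in cond if x1 + x2 == 7) / len(cond)

# Conditioning on K = X1 (a genuine partition) restores 1/6 via total probability
p_total = sum(
    sum(1 for x1, x2 in outcomes if x1 + x2 == 7 and x1 == j) / 36
    for j in range(1, 7)
)
print(p_sum7, p_sum7_given_k, p_total)
```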
|
Flaw in a conditional probability argument
|
The flaw in the argument is that the conditioning random variable is not well-defined.
The ambiguity lies in how our friend peeking at the dice decides to report the information back to us. If we let
|
Flaw in a conditional probability argument
The flaw in the argument is that the conditioning random variable is not well-defined.
The ambiguity lies in how our friend peeking at the dice decides to report the information back to us. If we let $X_1$ and $X_2$ denote the random variables associated with the values of each of the dice, then it is certainly true that for each $k \in \{1,2,\ldots,6\}$,
$$
\mathbb P(X_1 + X_2 = 7 \mid X_1 = k \cup X_2 = k) = \frac{2}{11} \>,
$$
independently of $k$.
However, the events $\{X_1 = k \cup X_2 = k\}$ are clearly not mutually exclusive, and so clearly we cannot claim
$$
\begin{align}
\mathbb P(X_1 + X_2 = 7) &\stackrel{?}{=} \sum_{k=1}^6 \mathbb P(X_1 + X_2 = 7 \mid X_1 = k \cup X_2 = k) \mathbb P( X_1 = k \cup X_2 = k ) \cr
&\stackrel{?}{=} \frac{2}{11} \sum_{k=1}^6 \mathbb P( X_1 = k \cup X_2 = k ) \cr
&\stackrel{?}{=} \frac{2}{11}
\end{align}
$$
Formally, we need to properly define a random variable, say $K$, that encodes the knowledge imparted by our peeking friend.
Our peeking friend could always report the value of the left-most die, or the right-most, or the larger of the two. She could flip a coin and then report based on the coin flip, or employ any number of more complicated machinations.
But, once this process is specified, the apparent paradox vanishes.
Indeed, suppose that $K = X_1$. Then, we have
$$
\begin{align}
\mathbb P(X_1 + X_2 = 7) &= \sum_{k=1}^6 \mathbb P(X_1+X_2 = 7, K=k) \cr
&= \sum_{k=1}^6 \mathbb P(X_1+X_2 = 7 \mid K=k) \mathbb P(K=k) \cr
&= \sum_{k=1}^6 \frac{1}{36} = \frac{1}{6} \>.
\end{align}
$$
Similar arguments hold if we choose $K = X_2$ or $K = \max(X_1,X_2)$, etc.
|
Flaw in a conditional probability argument
The flaw in the argument is that the conditioning random variable is not well-defined.
The ambiguity lies in how our friend peeking at the dice decides to report the information back to us. If we let
|
40,075
|
Flaw in a conditional probability argument
|
If $B$ is an event with the property that $P(B\mid D_i) = p$ for all events
$\{D_1, D_2, \ldots\}$ in a countable partition of the sample space $\Omega$,
(that is, $D_i \cap D_j = \emptyset$ for all $i \neq j$ and
$\bigcup_i D_i = \Omega$), then the
law of total probability tells us that
$$P(B) = \sum_i P(B\mid D_i)P(D_i) = p\sum_i P(D_i) = p.$$ However,
the law of total probability does not apply if the events $D_i$ are
not mutually exclusive (even though their union is still $\Omega$), and
we cannot conclude that $P(B)$ equals the common value of $P(B\mid D_i)$.
Let $A_i$ denote the event that at least one of the dice shows the number $i$ and $B$ the event that the sum of the two numbers on the dice is $7$. We know that
$P(B) = \frac{1}{6}$ and that $P(A_i) = \frac{11}{36}$. Also,
$P(B\mid A_i) = \frac{2}{11}$. Now,
$A_1\cup A_2\cup A_3 \cup A_4\cup A_5\cup A_6$
is the entire sample space $\Omega$
but we cannot use the fact that $P(B\mid A_i)$
is the same for all choices of $i$ to conclude that $P(B) = \frac{2}{11}$
because the $A_i$ are not mutually exclusive events.
However, notice that regarded as a multiset,
$A_1\cup A_2\cup A_3 \cup A_4\cup A_5\cup A_6$ contains each outcome
$(i,j)$ exactly twice, once as a member of $A_i$ and again as a member of
$A_j$. Therefore,
$$\sum_{i=1}^6 P(B \mid A_i)P(A_i)
= \sum_{i=1}^6 \frac{2}{11}\times\frac{11}{36} = \frac{1}{3} $$
which is exactly twice the value of $P(B)$.
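The double counting can be checked directly by enumeration:

```python
from itertools import product

outcomes = list(product(range(1, 7), repeat=2))  # all 36 equally likely pairs

total = 0.0
for i in range(1, 7):
    A_i = [(a, b) for a, b in outcomes if a == i or b == i]
    p_Ai = len(A_i) / 36                                            # 11/36
    p_B_given_Ai = sum(1 for a, b in A_i if a + b == 7) / len(A_i)  # 2/11
    total += p_B_given_Ai * p_Ai

# 1/3: twice P(B) = 1/6, because each outcome is counted in two A_i's
print(total)
```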
|
Flaw in a conditional probability argument
|
If $B$ is an event with the property that $P(B\mid D_i) = p$ for all events
$\{D_1, D_2, \ldots\}$ in a countable partition of the sample space $\Omega$,
(that is, $D_i \cap D_j = \emptyset$ for all
|
Flaw in a conditional probability argument
If $B$ is an event with the property that $P(B\mid D_i) = p$ for all events
$\{D_1, D_2, \ldots\}$ in a countable partition of the sample space $\Omega$,
(that is, $D_i \cap D_j = \emptyset$ for all $i \neq j$ and
$\bigcup_i D_i = \Omega$), then the
law of total probability tells us that
$$P(B) = \sum_i P(B\mid D_i)P(D_i) = p\sum_i P(D_i) = p.$$ However,
the law of total probability does not apply if the events $D_i$ are
not mutually exclusive (even though their union is still $\Omega$), and
we cannot conclude that $P(B)$ equals the common value of $P(B\mid D_i)$.
Let $A_i$ denote the event that at least one of the dice shows the number $i$ and $B$ the event that the sum of the two numbers on the dice is $7$. We know that
$P(B) = \frac{1}{6}$ and that $P(A_i) = \frac{11}{36}$. Also,
$P(B\mid A_i) = \frac{2}{11}$. Now,
$A_1\cup A_2\cup A_3 \cup A_4\cup A_5\cup A_6$
is the entire sample space $\Omega$
but we cannot use the fact that $P(B\mid A_i)$
is the same for all choices of $i$ to conclude that $P(B) = \frac{2}{11}$
because the $A_i$ are not mutually exclusive events.
However, notice that regarded as a multiset,
$A_1\cup A_2\cup A_3 \cup A_4\cup A_5\cup A_6$ contains each outcome
$(i,j)$ exactly twice, once as a member of $A_i$ and again as a member of
$A_j$. Therefore,
$$\sum_{i=1}^6 P(B \mid A_i)P(A_i)
= \sum_{i=1}^6 \frac{2}{11}\times\frac{11}{36} = \frac{1}{3} $$
which is exactly twice the value of $P(B)$.
|
Flaw in a conditional probability argument
If $B$ is an event with the property that $P(B\mid D_i) = p$ for all events
$\{D_1, D_2, \ldots\}$ in a countable partition of the sample space $\Omega$,
(that is, $D_i \cap D_j = \emptyset$ for all
|
40,076
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The easy way is to use the law of total variance:
$$\text{Var}(S) = E_N\left[\text{Var}(S|N)\right] + \text{Var}_N\left[E(S|N)\right] =\text{E}_N\left[N\cdot \text{Var}(X)\right] + \text{Var}_N\left[N\cdot\text{E}(X)\right]$$
Can you do it from there? It's pretty much just substitution (well, that and really basic properties of expectation and variance).
(The first part is even more straightforward using the law of total expectation.)
--
As Spy_Lord notes, the answer is $\text{E}(N)\cdot \text{Var}(X) + \text{Var}(N)\cdot\text{E}(X)^2$
An alternative approach is to evaluate $E(S_N^2)$. Following the approach you seem to be aiming at:
\begin{eqnarray}
E(S_N^2) &=& \sum_r E(S_N^2|N=r) p_r\\
&=& \sum_r (r\sigma_2^2+r^2 \mu_2^2) p_r\\
&=& \sigma_2^2\sum_r rp_r+\mu_2^2\sum_rr^2 p_r \\
&=& \sigma_2^2 \text{E}N+\mu_2^2\text{E}(N^2)
\end{eqnarray}
and I assume you can do it from there.
However, to be honest, I think this way is easier (it's actually the same approach, you just don't need to sum over all the mutually exclusive events that way). The law of total expectation says $\text{E}(X) = \text{E}_Y[\text{E}_{X|Y}(X|Y)]$, so
\begin{eqnarray}
\text{E}(S^2_N) &=& \text{E}_N[\text{E}(S^2_N|N)]\\
&=& \text{E}_N[N\sigma_2^2+N^2\mu_2^2]\\
&=& \sigma_2^2\text{E}(N)+\mu_2^2\text{E}(N^2)
\end{eqnarray}
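A Monte Carlo sketch (with an arbitrary choice of $N$ uniform on $\{1,\dots,4\}$ and $X$ a fair die, independent of $N$) agrees with $\text{E}(N)\,\text{Var}(X) + \text{Var}(N)\,\text{E}(X)^2$:

```python
import random
import statistics

random.seed(42)

def sample_S():
    # N uniform on {1, 2, 3, 4}; the X_i are iid fair-die rolls
    n = random.randint(1, 4)
    return sum(random.randint(1, 6) for _ in range(n))

draws = [sample_S() for _ in range(200_000)]
mc_var = statistics.pvariance(draws)

# E(N) = 2.5, Var(N) = 1.25, E(X) = 3.5, Var(X) = 35/12
theory = 2.5 * 35 / 12 + 1.25 * 3.5**2
print(round(theory, 3), round(mc_var, 3))
```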
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The easy way is to use the law of total variance:
$$\text{Var}(S) = E_N\left[\text{Var}(S|N)\right] + \text{Var}_N\left[E(S|N)\right] =\text{E}_N\left[N\cdot \text{Var}(X)\right] + \text{Var}_N\left[N
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The easy way is to use the law of total variance:
$$\text{Var}(S) = E_N\left[\text{Var}(S|N)\right] + \text{Var}_N\left[E(S|N)\right] =\text{E}_N\left[N\cdot \text{Var}(X)\right] + \text{Var}_N\left[N\cdot\text{E}(X)\right]$$
Can you do it from there? It's pretty much just substitution (well, that and really basic properties of expectation and variance).
(The first part is even more straightforward using the law of total expectation.)
--
As Spy_Lord notes, the answer is $\text{E}(N)\cdot \text{Var}(X) + \text{Var}(N)\cdot\text{E}(X)^2$
An alternative approach is to evaluate $E(S_N^2)$. Following the approach you seem to be aiming at:
\begin{eqnarray}
E(S_N^2) &=& \sum_r E(S_N^2|N=r) p_r\\
&=& \sum_r (r\sigma_2^2+r^2 \mu_2^2) p_r\\
&=& \sigma_2^2\sum_r rp_r+\mu_2^2\sum_rr^2 p_r \\
&=& \sigma_2^2 \text{E}N+\mu_2^2\text{E}(N^2)
\end{eqnarray}
and I assume you can do it from there.
However, to be honest, I think this way is easier (it's actually the same approach, you just don't need to sum over all the mutually exclusive events that way). The law of total expectation says $\text{E}(X) = \text{E}_Y[\text{E}_{X|Y}(X|Y)]$, so
\begin{eqnarray}
\text{E}(S^2_N) &=& \text{E}_N[\text{E}(S^2_N|N)]\\
&=& \text{E}_N[N\sigma_2^2+N^2\mu_2^2]\\
&=& \sigma_2^2\text{E}(N)+\mu_2^2\text{E}(N^2)
\end{eqnarray}
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The easy way is to use the law of total variance:
$$\text{Var}(S) = E_N\left[\text{Var}(S|N)\right] + \text{Var}_N\left[E(S|N)\right] =\text{E}_N\left[N\cdot \text{Var}(X)\right] + \text{Var}_N\left[N
|
40,077
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The law of total variance is the easiest way to do this. But there are several occasions when we don't know how many random variables we are dealing with (e.g. branching processes such as Galton-Watson, birth-death processes, queues) where probability-generating functions are a useful technique. It is possible to derive the mean and variance using the PGF, so I want to demonstrate how this can serve as an alternative. Why bother? One motivation is that this method generalizes easily to find any factorial moment, and hence any moment, of the distribution.
A few general results: a PGF $G_X(z)=\mathbb{E}(z^X)$ has $\lim_{z\uparrow 1}\, G_X(z)=\lim_{z\uparrow 1}\, \mathbb{E}(z^X)=1$. Factorial moments are found by taking the limit of the appropriate derivative of the PGF as $z$ goes to 1 from below. So for a random variable $X$:
\begin{eqnarray}
\mathbb{E}(X) &=& \lim_{z\uparrow 1}\, G'_X(z)\\
\mathbb{E}(X(X-1)) &=& \lim_{z\uparrow 1}\, G''_X(z)\\
\mathbb{E}(X(X-1)(X-2)) &=& \lim_{z\uparrow 1}\, G'''_X(z)
\end{eqnarray}
And so on for higher moments. The key here is that if $S_N=\sum_{i=1}^N X_i$ with the $X_i$ iid and independent of $N$, then $G_{S_N}(z)=G_N(G_X(z))$. Proof:
\begin{eqnarray}
G_{S_N}(z) &=& \mathbb{E}_N(\mathbb{E}(z^{\sum_{i=1}^N X_i})) &=& \mathbb{E}_N(\mathbb{E}(\prod_{i=1}^N z^{X_i})) &=& \mathbb{E}_N(\prod_{i=1}^N \mathbb{E}(z^{X_i})) \\
&=& \mathbb{E}_N(\prod_{i=1}^N G_X(z)) &=& \mathbb{E}_N(G_X(z)^N) &=& G_N(G_X(z))
\end{eqnarray}
Also note $\lim_{z\uparrow 1}\, G_X(z)=\lim_{z\uparrow 1}\, G_N(z)=1$, $\lim_{z\uparrow 1}\, G'_X(z)=\mu_X$, $\lim_{z\uparrow 1}\, G'_N(z)=\mu_N$, $\lim_{z\uparrow 1}\, G''_X(z)=\mathbb{E}(X^2-X)=\sigma_X^2+\mu_X^2-\mu_X$ and $\lim_{z\uparrow 1}\, G''_N(z)=\sigma_N^2+\mu_N^2-\mu_N$.
Since $G_{S_N}(z)=G_N(G_X(z))$ we can use the chain rule to find the mean and variance of $S_N$:
\begin{eqnarray}
\mathbb{E}(S_N) &=& \lim_{z\uparrow 1}\, \frac{d}{dz}G_N(G_X(z))=\lim_{z\uparrow 1}\, G'_X(z)G'_N(G_X(z))=\mu_X \mu_N\\
\mathbb{E}(S_N(S_N-1)) &=& \lim_{z\uparrow 1}\, \frac{d^2}{dz^2}G_N(G_X(z))\\
\mathbb{E}(S_N^2-S_N) &=& \lim_{z\uparrow 1}\, \left(G''_X(z)G'_N(G_X(z))+G'_X(z)^2 G''_N(G_X(z))\right) \\
\mathbb{E}(S_N^2)-\mu_X \mu_N &=& (\sigma_X^2+\mu_X^2-\mu_X)(\mu_N)+(\mu_X)^2(\sigma_N^2+\mu_N^2-\mu_N) \\
\mathbb{E}(S_N^2) &=& \mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2 + \mu_X^2 \mu_N^2 \\
\operatorname{Var}(S_N) &=& \mathbb{E}(S_N^2)-\mathbb{E}(S_N)^2=\mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2 + \mu_X^2 \mu_N^2-(\mu_X \mu_N)^2 \\
\operatorname{Var}(S_N) &=& \mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2
\end{eqnarray}
It's a little gruesome and there's no doubt the law of total variance is easier. But if the standard results are taken for granted, this is only a couple of lines of algebra and calculus, and I've given more detail than some of the other answers which makes it look worse than it is. If you wanted the higher moments, this is a viable approach.
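The identity $G_{S_N}(z) = G_N(G_X(z))$ and the resulting variance formula can be verified exactly on a toy example ($N$ uniform on $\{1,2\}$, $X$ uniform on $\{0,1\}$ -- an arbitrary choice) using rational arithmetic:

```python
from fractions import Fraction as F

pN = {1: F(1, 2), 2: F(1, 2)}   # distribution of N
pX = {0: F(1, 2), 1: F(1, 2)}   # distribution of each X_i

def G(p, z):                     # probability-generating function E(z^K)
    return sum(prob * z**k for k, prob in p.items())

def convolve(p, q):              # distribution of a sum of independents
    out = {}
    for a, pa in p.items():
        for b, qb in q.items():
            out[a + b] = out.get(a + b, F(0)) + pa * qb
    return out

# Distribution of S_N by direct enumeration over N
pS, conv = {}, {0: F(1)}
for n in range(1, max(pN) + 1):
    conv = convolve(conv, pX)            # n-fold convolution of pX
    for k, pk in conv.items():
        pS[k] = pS.get(k, F(0)) + pN.get(n, F(0)) * pk

z = F(3, 4)
assert G(pS, z) == G(pN, G(pX, z))       # G_{S_N} = G_N composed with G_X, exactly

# Variance identity: mu_N = 3/2, sigma_N^2 = 1/4, mu_X = 1/2, sigma_X^2 = 1/4
mean_S = sum(k * p for k, p in pS.items())
var_S = sum(k * k * p for k, p in pS.items()) - mean_S**2
assert var_S == F(3, 2) * F(1, 4) + F(1, 4) * F(1, 2)**2
```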
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The law of total variance is the easiest way to do this. But there are several occasions when we don't know how many random variables we are dealing with (e.g. branching processes such as Galton-Watso
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The law of total variance is the easiest way to do this. But there are several occasions when we don't know how many random variables we are dealing with (e.g. branching processes such as Galton-Watson, birth-death processes, queues) where probability-generating functions are a useful technique. It is possible to derive the mean and variance using the PGF, so I want to demonstrate how this can serve as an alternative. Why bother? One motivation is that this method generalizes easily to find any factorial moment, and hence any moment, of the distribution.
A few general results: a PGF $G_X(z)=\mathbb{E}(z^X)$ has $\lim_{z\uparrow 1}\, G_X(z)=\lim_{z\uparrow 1}\, \mathbb{E}(z^X)=1$. Factorial moments are found by taking the limit of the appropriate derivative of the PGF as $z$ goes to 1 from below. So for a random variable $X$:
\begin{eqnarray}
\mathbb{E}(X) &=& \lim_{z\uparrow 1}\, G'_X(z)\\
\mathbb{E}(X(X-1)) &=& \lim_{z\uparrow 1}\, G''_X(z)\\
\mathbb{E}(X(X-1)(X-2)) &=& \lim_{z\uparrow 1}\, G'''_X(z)
\end{eqnarray}
And so on for higher moments. The key here is that if $S_N=\sum_{i=1}^N X_i$ with the $X_i$ iid and independent of $N$, then $G_{S_N}(z)=G_N(G_X(z))$. Proof:
\begin{eqnarray}
G_{S_N}(z) &=& \mathbb{E}_N(\mathbb{E}(z^{\sum_{i=1}^N X_i})) &=& \mathbb{E}_N(\mathbb{E}(\prod_{i=1}^N z^{X_i})) &=& \mathbb{E}_N(\prod_{i=1}^N \mathbb{E}(z^{X_i})) \\
&=& \mathbb{E}_N(\prod_{i=1}^N G_X(z)) &=& \mathbb{E}_N(G_X(z)^N) &=& G_N(G_X(z))
\end{eqnarray}
Also note $\lim_{z\uparrow 1}\, G_X(z)=\lim_{z\uparrow 1}\, G_N(z)=1$, $\lim_{z\uparrow 1}\, G'_X(z)=\mu_X$, $\lim_{z\uparrow 1}\, G'_N(z)=\mu_N$, $\lim_{z\uparrow 1}\, G''_X(z)=\mathbb{E}(X^2-X)=\sigma_X^2+\mu_X^2-\mu_X$ and $\lim_{z\uparrow 1}\, G''_N(z)=\sigma_N^2+\mu_N^2-\mu_N$.
Since $G_{S_N}(z)=G_N(G_X(z))$ we can use the chain rule to find the mean and variance of $S_N$:
\begin{eqnarray}
\mathbb{E}(S_N) &=& \lim_{z\uparrow 1}\, \frac{d}{dz}G_N(G_X(z))=\lim_{z\uparrow 1}\, G'_X(z)G'_N(G_X(z))=\mu_X \mu_N\\
\mathbb{E}(S_N(S_N-1)) &=& \lim_{z\uparrow 1}\, \frac{d^2}{dz^2}G_N(G_X(z))\\
\mathbb{E}(S_N^2-S_N) &=& \lim_{z\uparrow 1}\, \left(G''_X(z)G'_N(G_X(z))+G'_X(z)^2 G''_N(G_X(z))\right) \\
\mathbb{E}(S_N^2)-\mu_X \mu_N &=& (\sigma_X^2+\mu_X^2-\mu_X)(\mu_N)+(\mu_X)^2(\sigma_N^2+\mu_N^2-\mu_N) \\
\mathbb{E}(S_N^2) &=& \mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2 + \mu_X^2 \mu_N^2 \\
\operatorname{Var}(S_N) &=& \mathbb{E}(S_N^2)-\mathbb{E}(S_N)^2=\mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2 + \mu_X^2 \mu_N^2-(\mu_X \mu_N)^2 \\
\operatorname{Var}(S_N) &=& \mu_N \sigma_X^2 + \mu_X^2 \sigma_N^2
\end{eqnarray}
It's a little gruesome and there's no doubt the law of total variance is easier. But if the standard results are taken for granted, this is only a couple of lines of algebra and calculus, and I've given more detail than some of the other answers which makes it look worse than it is. If you wanted the higher moments, this is a viable approach.
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The law of total variance is the easiest way to do this. But there are several occasions when we don't know how many random variables we are dealing with (e.g. branching processes such as Galton-Watso
|
40,078
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The solution to calculating the mean:
$\mathbb{E}(S_N) = 0\cdot P(N=0) + \mathbb{E}(X_1)\cdot P(N=1) + \mathbb{E}(X_1+X_2)\cdot P(N=2) + \cdots$
$= 0 + \mu_2 P(N=1) + 2\mu_2 P(N=2) + \cdots = \mu_2\sum_{i=0}^\infty i\cdot P(N=i)$
and the infinite sum above is just equal to the expectation of $N$, hence:
$\mathbb{E}(S_N) = \mu_2\cdot\mathbb{E}(N) = \mu_2\mu_1$
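A simulation sketch (choosing, arbitrarily, $N$ uniform on $\{0,\dots,3\}$ and $X$ a fair die, independent of $N$) agrees with $\mu_2\mu_1$:

```python
import random
import statistics

random.seed(1)

# N uniform on {0,...,3} (N = 0 gives the 0 * P(N=0) term); X uniform on {1,...,6}
# So mu_1 = E(N) = 1.5 and mu_2 = E(X) = 3.5, giving E(S_N) = 5.25
draws = [sum(random.randint(1, 6) for _ in range(random.randint(0, 3)))
         for _ in range(200_000)]
m = statistics.mean(draws)
print(round(m, 2))
```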
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
|
The solution to calculating the mean:
$\mathbb{E}(S_N) = 0.P(N=0) + \mathbb{E}(X_1).P(N=1) + \mathbb{E}(X_1+X_2).P(N=2) + . . .$
$= 0 + \mu_2P(N=1) + 2\mu_2P(N=2) + . . . = \mu_2\sum_{i=0}^\infty i.P(
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The solution to calculating the mean:
$\mathbb{E}(S_N) = 0\cdot P(N=0) + \mathbb{E}(X_1)\cdot P(N=1) + \mathbb{E}(X_1+X_2)\cdot P(N=2) + \cdots$
$= 0 + \mu_2 P(N=1) + 2\mu_2 P(N=2) + \cdots = \mu_2\sum_{i=0}^\infty i\cdot P(N=i)$
and the infinite sum above is just equal to the expectation of $N$, hence:
$\mathbb{E}(S_N) = \mu_2\cdot\mathbb{E}(N) = \mu_2\mu_1$
|
Variance of sum of random number of random variables (Cambridge University Worksheet)
The solution to calculating the mean:
$\mathbb{E}(S_N) = 0.P(N=0) + \mathbb{E}(X_1).P(N=1) + \mathbb{E}(X_1+X_2).P(N=2) + . . .$
$= 0 + \mu_2P(N=1) + 2\mu_2P(N=2) + . . . = \mu_2\sum_{i=0}^\infty i.P(
|
40,079
|
Converting 2nd order Markov chain to the 1st order equivalent
|
Here's a way to do it:
(I may be writing my state vectors and transition matrices transposed relative to the way you might have learned them, or even the way they're usually done. If that's the case you'll need to translate back.)
The probability model gives you probabilities for 4 output states at time $t$ in terms of the 16 input states - the possible ordered pairs for $(x_{t-1},x_{t-2})$.
For speed of writing, let's write $AC$ for $(A,C)$ and so on.
\begin{array}{c|cccc|cccc|c}
& AA & AC &AT&AG& CA &CC &CT &CG& \ldots \\ \hline
A &p_{AA\to A}&p_{AC\to A}&p_{AT\to A}&p_{AG\to A}&p_{CA\to A}&p_{CC\to A}&p_{CT\to A}&p_{CG\to A}\\
C &p_{AA\to C}&p_{AC\to C}&p_{AT\to C}&p_{AG\to C}&p_{CA\to C}&p_{CC\to C}&p_{CT\to C}&p_{CG\to C}\\
T &p_{AA\to T}&p_{AC\to T}&p_{AT\to T}&p_{AG\to T}&p_{CA\to T}&p_{CC\to T}&p_{CT\to T}&p_{CG\to T}\\
G &p_{AA\to G}&p_{AC\to G}&p_{AT\to G}&p_{AG\to G}&p_{CA\to G}&p_{CC\to G}&p_{CT\to G}&p_{CG\to G}\\ \hline
\end{array}
We could label the partitions with boldface versions of the state $x_{t-1}$:
\begin{array}{c|cccc|cc|c|cc}
& AA & AC &AT&AG& CA & &\ldots && \ldots& GG \\ \hline
A & & & & & & & & & & \\
C & & \mathbf{A} & & & \quad\mathbf{C} & &\mathbf{T}& & \mathbf{G} & \\
T & & & & & & & & & & \\
G & & & & & & & & & & \\ \hline
\end{array}
As I mentioned in comments, you need to extend the state. Let $z_t$ consist of pairs of states $(x_t,x_{t-1})$ and now consider a Markov Chain in $z_t$; that is, you have a transition matrix $p(z_t|z_{t-1})$.
So the state at time $t$ will be one of the 16 pairs $(A,A), (A,C) \ldots (G,G)$, and the transition matrix will be a 16 $\times$ 16 matrix of transition probabilities that will be mostly zero (necessarily so, because any pair that doesn't have the second component of $z_t$ match with the first component of $z_{t-1}$ is impossible).
As above, for speed of writing, let's also write $AC$ for $(A,C)$ and so on.
For ease of display I am going to define $z_{t-1}^*$ which is simply a permuted $z_{t-1}$. We can write $p(z_t|z_{t-1}^*)$ and then arrive back at $p(z_t|z_{t-1})$ by simple permutation.
So the transition matrix for $p(z_t|z_{t-1}^*)$ is of the form:
\begin{array}{c|cccc|cc|c|cc}
& AA & AC &AT&AG& CA & &\ldots && \ldots& GG \\ \hline
AA & & & & & & & & & & \\
CA & & \mathbf{A} & & & \quad\mathbf{0} & &\mathbf{0}& & \mathbf{0} & \\
TA & & & & & & & & & & \\
GA & & & & & & & & & & \\ \hline
AC & & & & & & & & & & \\
\vdots & & \mathbf{0} & & & \quad\mathbf{C} & &\mathbf{0}& &\mathbf{0} & \\
\vdots & & & & & & & & & & \\\hline
\vdots & & \mathbf{0} & & &\quad\mathbf{0} & &\mathbf{T}& & \mathbf{0} &\\
\vdots & & & & & & & & & & \\ \hline
\vdots & & \mathbf{0} & & &\quad\mathbf{0} & &\mathbf{0}& & \mathbf{G} & \\
GG & & & & & & & & & & \\
\hline
\end{array}
We can then rearrange either the rows or columns so they're in the same order; the transition matrix no longer has that simple structure, but contains the same values.
Generally, you can use this procedure to transform any $k$-th order Markov chain to a first-order MC (also holds for Hidden Markov Models).
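As a sketch of that lifting (with made-up second-order probabilities, and with rows indexing the source pair, i.e. the transpose of the tables above), the $16 \times 16$ matrix can be built directly:

```python
from itertools import product
import random

random.seed(0)
states = "ACTG"

def rand_dist(n):
    # arbitrary probability vector, for illustration only
    w = [random.random() for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

# Hypothetical second-order model: p2[(x_{t-1}, x_{t-2})][x_t]
p2 = {pair: rand_dist(4) for pair in product(states, repeat=2)}

# Lift to a first-order chain on pairs z_t = (x_t, x_{t-1})
pairs = list(product(states, repeat=2))
T = [[0.0] * 16 for _ in range(16)]
for i, (a, b) in enumerate(pairs):       # z_{t-1} = (x_{t-1}, x_{t-2}) = (a, b)
    for j, (c, d) in enumerate(pairs):   # z_t = (x_t, x_{t-1}) = (c, d)
        if d == a:                       # the pairs must chain together
            T[i][j] = p2[(a, b)][states.index(c)]

# Each row is still a distribution; 12 of its 16 entries are forced to zero
assert all(abs(sum(row) - 1.0) < 1e-12 for row in T)
assert all(sum(1 for x in row if x == 0.0) == 12 for row in T)
```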
|
Converting 2nd order Markov chain to the 1st order equivalent
|
Here's a way to do it:
(I may be writing my state vectors and transition matrices transposed relative to the way you might have learned them, or even the way they're usually done. If that's the case y
|
Converting 2nd order Markov chain to the 1st order equivalent
Here's a way to do it:
(I may be writing my state vectors and transition matrices transposed relative to the way you might have learned them, or even the way they're usually done. If that's the case you'll need to translate back.)
The probability model gives you probabilities for 4 output states at time $t$ in terms of the 16 input states - the possible ordered pairs for $(x_{t-1},x_{t-2})$.
For speed of writing, let's write $AC$ for $(A,C)$ and so on.
\begin{array}{c|cccc|cccc|c}
& AA & AC &AT&AG& CA &CC &CT &CG& \ldots \\ \hline
A &p_{AA\to A}&p_{AC\to A}&p_{AT\to A}&p_{AG\to A}&p_{CA\to A}&p_{CC\to A}&p_{CT\to A}&p_{CG\to A}\\
C &p_{AA\to C}&p_{AC\to C}&p_{AT\to C}&p_{AG\to C}&p_{CA\to C}&p_{CC\to C}&p_{CT\to C}&p_{CG\to C}\\
T &p_{AA\to T}&p_{AC\to T}&p_{AT\to T}&p_{AG\to T}&p_{CA\to T}&p_{CC\to T}&p_{CT\to T}&p_{CG\to T}\\
G &p_{AA\to G}&p_{AC\to G}&p_{AT\to G}&p_{AG\to G}&p_{CA\to G}&p_{CC\to G}&p_{CT\to G}&p_{CG\to G}\\ \hline
\end{array}
We could label the partitions with boldface versions of the state $x_{t-1}$:
\begin{array}{c|cccc|cc|c|cc}
& AA & AC &AT&AG& CA & &\ldots && \ldots& GG \\ \hline
A & & & & & & & & & & \\
C & & \mathbf{A} & & & \quad\mathbf{C} & &\mathbf{T}& & \mathbf{G} & \\
T & & & & & & & & & & \\
G & & & & & & & & & & \\ \hline
\end{array}
As I mentioned in comments, you need to extend the state. Let $z_t$ consist of pairs of states $(x_t,x_{t-1})$ and now consider a Markov Chain in $z_t$; that is, you have a transition matrix $p(z_t|z_{t-1})$.
So the state at time $t$ will be one of the 16 pairs $(A,A), (A,C) \ldots (G,G)$, and the transition matrix will be a 16 $\times$ 16 matrix of transition probabilities that will be mostly zero (necessarily so, because any pair that doesn't have the second component of $z_t$ match with the first component of $z_{t-1}$ is impossible).
As above, for speed of writing, let's also write $AC$ for $(A,C)$ and so on.
For ease of display I am going to define $z_{t-1}^*$ which is simply a permuted $z_{t-1}$. We can write $p(z_t|z_{t-1}^*)$ and then arrive back at $p(z_t|z_{t-1})$ by simple permutation.
So the transition matrix for $p(z_t|z_{t-1}^*)$ is of the form:
$$\begin{array}{c|cccc|cc|c|cc}
 & AA & AC &AT&AG& CA & &\ldots && \ldots& GG \\ \hline
AA & & & & & & & & & & \\
CA & & \mathbf{A} & & & \quad\mathbf{0} & &\mathbf{0}& & \mathbf{0} & \\
TA & & & & & & & & & & \\
GA & & & & & & & & & & \\ \hline
AC & & & & & & & & & & \\
\vdots & & \mathbf{0} & & & \quad\mathbf{C} & &\mathbf{0}& &\mathbf{0} & \\
\vdots & & & & & & & & & & \\\hline
\vdots & & \mathbf{0} & & &\quad\mathbf{0} & &\mathbf{T}& & \mathbf{0} &\\
\vdots & & & & & & & & & & \\ \hline
\vdots & & \mathbf{0} & & &\quad\mathbf{0} & &\mathbf{0}& & \mathbf{G} & \\
GG & & & & & & & & & & \\
\hline
\end{array}$$
We can then rearrange either the rows or columns so they're in the same order; the transition matrix no longer has that simple structure, but contains the same values.
Generally, you can use this procedure to transform any $k$-th order Markov chain to a first-order MC (also holds for Hidden Markov Models).
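To make the lifting construction concrete, here is a minimal numerical sketch (in Python, and with a 2-symbol alphabet and made-up transition probabilities instead of the 4-letter one, so the lifted matrix is only 4 x 4):

```python
# Lift a 2nd-order Markov chain on symbols {0, 1} to a 1st-order chain
# on pairs z_t = (x_t, x_{t-1}).  The 2nd-order probabilities below are
# made up purely for illustration.

# p2[(a, b)][c] = P(x_t = c | x_{t-2} = a, x_{t-1} = b)
p2 = {
    (0, 0): {0: 0.9, 1: 0.1},
    (0, 1): {0: 0.4, 1: 0.6},
    (1, 0): {0: 0.7, 1: 0.3},
    (1, 1): {0: 0.2, 1: 0.8},
}

symbols = [0, 1]
states = [(b, a) for b in symbols for a in symbols]  # z = (newer, older)

T = {}  # T[(z_prev, z_next)] = transition probability of the lifted chain
for (b, a) in states:          # z_{t-1} = (x_{t-1} = b, x_{t-2} = a)
    for (c, b2) in states:     # z_t     = (x_t = c,     x_{t-1} = b2)
        # a transition is possible only when the shared symbol matches
        T[((b, a), (c, b2))] = p2[(a, b)][c] if b2 == b else 0.0

# each row of the lifted matrix sums to 1, and exactly two entries per
# row are nonzero (one per possible next symbol), as in the table above
for z_prev in states:
    row = [T[(z_prev, z_next)] for z_next in states]
    assert abs(sum(row) - 1.0) < 1e-12
    assert sum(v > 0 for v in row) == 2
print("lifted 4x4 chain is a valid first-order transition matrix")
```

The same loop with `symbols = ['A', 'C', 'T', 'G']` produces the mostly-zero 16 x 16 matrix described above.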
|
40,080
|
Converting 2nd order Markov chain to the 1st order equivalent
|
The first-order transition matrix $T^1$ is of size $k \times k$, and the second-order transition matrix $T^2$ is of size $k^2 \times k$. So you want to reduce the number of rows from $k^2$ to $k$ by merging.
An example is given on the Wikipedia link; you should be able to convert $T^2$ to $T^1$ simply by marginalising over the $t-2$ states (which are not needed for $T^1$) in each column.
My explanation is probably not crystal clear, but I think you will understand what I mean once you see the example on the link.
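One point worth making explicit: to marginalise over the $t-2$ states you need the weights $P(x_{t-2}|x_{t-1})$, which are not part of $T^2$ itself. A hedged sketch (Python, 2 symbols; both $T^2$ and the stationary pair probabilities are made up for illustration):

```python
# Collapse a 2nd-order chain T2[(a, b)][c] = P(x_t=c | x_{t-2}=a, x_{t-1}=b)
# to a 1st-order chain by marginalising over the t-2 state.  The weights
# P(x_{t-2}=a | x_{t-1}=b) come from stationary pair probabilities pi[(a, b)],
# which we assume are known here (made-up numbers).
T2 = {
    (0, 0): {0: 0.9, 1: 0.1},
    (0, 1): {0: 0.4, 1: 0.6},
    (1, 0): {0: 0.7, 1: 0.3},
    (1, 1): {0: 0.2, 1: 0.8},
}
pi = {(0, 0): 0.35, (0, 1): 0.15, (1, 0): 0.15, (1, 1): 0.35}

T1 = {}  # T1[(b, c)] = P(x_t = c | x_{t-1} = b)
for b in (0, 1):
    norm = sum(pi[(a, b)] for a in (0, 1))          # P(x_{t-1} = b)
    for c in (0, 1):
        T1[(b, c)] = sum(pi[(a, b)] * T2[(a, b)][c] for a in (0, 1)) / norm

# the collapsed matrix is a valid k x k transition matrix
for b in (0, 1):
    assert abs(T1[(b, 0)] + T1[(b, 1)] - 1.0) < 1e-12
print(T1)
```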
|
40,081
|
How to think of reduced dimensions in PCA on facial images (eigenfaces)?
|
Just a hint, after reading your comment. Each image (face) is represented as a stacked vector of length $N$. The different faces make up a dataset stored in a matrix $X$ of size $K\times N$. You might be confused about the fact that you use the PCA to obtain a set of eigenvectors (eigenfaces) $I = \{u_1, u_2, \ldots, u_D\}$ of the covariance matrix $X^TX$, where each $u_i \in \mathbb{R}^{N}$. You don't reduce the number of pixels used to represent a face, but rather you find a small number of eigenfaces that span a space which suitably represents your faces. The eigenfaces still live in the original space though (they have the same number of pixels as the original faces).
The idea is that you use the obtained eigenfaces as archetypes that can be used to perform face detection.
Also, purely in terms of storage costs, imagine you have to keep an album of $K$ faces, each composed of $N$ pixels. Instead of keeping all the $K$ faces, you just keep $D$ eigenfaces, where $D \ll K$, together with the component scores and you can recreate any face (with a certain loss in precision).
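To make the storage argument concrete, a toy sketch (Python; the "faces" are made up to lie exactly on one eigenface, so $D = 1$ reconstructs them with no loss):

```python
# Storage sketch: instead of keeping K faces of N pixels each, keep the
# mean, D eigenfaces, and D component scores per face.  Toy data: every
# face equals mean + score * u for a single unit "eigenface" u.
mean = [2.0, 2.0, 2.0, 2.0]                   # N = 4 pixels
u = [0.5, 0.5, 0.5, 0.5]                      # unit-norm eigenface
scores = [1.0, -2.0, 3.0]                     # one score per face (K = 3)
faces = [[m + s * ui for m, ui in zip(mean, u)] for s in scores]

def project(face):
    # component score = <face - mean, u>
    return sum((f - m) * ui for f, m, ui in zip(face, mean, u))

def reconstruct(score):
    return [m + score * ui for m, ui in zip(mean, u)]

for face in faces:
    rec = reconstruct(project(face))
    assert all(abs(f - r) < 1e-12 for f, r in zip(face, rec))

# storage: K*N = 12 numbers vs N (mean) + D*N (eigenface) + K*D (scores) = 11;
# the gap widens quickly as K grows with D fixed
print("reconstruction exact with D = 1")
```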
|
40,082
|
How to think of reduced dimensions in PCA on facial images (eigenfaces)?
|
To clarify a bit more: in your original high-dimensional space, pixels are the dimensions. In the new space, each image is represented as a linear combination of a relatively small number of basis images, the eigenfaces. So in the new space, the eigenfaces are the dimensions.
|
40,083
|
Asymptotic Theory in Economics
|
Ferguson's A Course in Large Sample Theory is the best concise introduction to the topic, and it is written in a nice didactic way: the equivalent of a week's lecture course material per chapter, followed by a strong set of exercises. (Ferguson introduced GMM in 1968 under the name of the minimum $\chi^2$, and it is tucked in as one of the exercises in that book.) Van der Vaart's Asymptotic Statistics, recommended by others, is great, too, but it goes off in weird directions (for an economist). Another relatively easy introduction to first-order asymptotics is Lehmann's Elements of Large Sample Theory. I would argue, though, that you would get better mileage out of a book like Young & Smith's Essentials of Statistical Inference, as it will teach you how statisticians think (sufficiency, UMPT, the Cramer-Rao bound, etc.).
Of course you won't find the odd econometric asymptotics such as unit roots or weak instruments. Few statisticians have heard of them, and these are wa-a-ay too exotic for them. However, you would definitely want to revisit these unusual papers to shake off the wrong belief that everything asymptotic is asymptotically normal at $\sqrt{n}$ rate (you can find disturbing counterexamples here and there, too).
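As a small numerical illustration of that last point: for $X \sim U(0,\theta)$, the MLE $\hat\theta = \max_i X_i$ converges at rate $n$, not $\sqrt{n}$, and its limiting law is exponential rather than normal. A sketch (Python):

```python
# For X ~ Uniform(0, theta), the MLE max(X) approaches theta at rate 1/n,
# so multiplying the error by n (not sqrt(n)) is what stabilises it;
# the scaled error n*(theta - max) has mean n*theta/(n+1) -> theta.
import random

random.seed(0)
theta = 1.0

def scaled_error(n, reps=2000):
    errs = [n * (theta - max(random.uniform(0, theta) for _ in range(n)))
            for _ in range(reps)]
    return sum(errs) / reps          # should hover near theta = 1

e_small, e_big = scaled_error(50), scaled_error(500)
# both scaled errors stay O(1); under sqrt(n) scaling they would collapse to 0
assert 0.7 < e_small < 1.3 and 0.7 < e_big < 1.3
print(e_small, e_big)
```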
|
40,084
|
Asymptotic Theory in Economics
|
Since you mention Greene's book, I assume you are interested in more in-depth understanding of asymptotic statistics. Then I can recommend A. van der Vaart's "Asymptotic statistics" and H. White's "Asymptotic theory for econometricians". Also J. Wooldridge's "Econometric Analysis of Cross Section and Panel Data" has nice chapters on asymptotic theory.
|
40,085
|
Asymptotic Theory in Economics
|
"Asymptotic theory for econometricians" by Halbert White. "Asymptotic Theory of Statistics and Probability", by Anirban DasGupta.
|
40,086
|
Fitting a Poisson distribution with lme4 and nlme [closed]
|
You are correct; there is no way to specify the family because the nlme package is only for linear mixed models or non-linear mixed models, which assume Gaussian errors. The range of models fitted by nlme does not include the generalised linear mixed model (GLMM).
That lmer() takes a family argument is unfortunate, and IIRC, this may have changed in the latest version on CRAN. You are supposed to explicitly call glmer() to fit a GLMM now when using the lme4 package to fit a GLMM. What used to happen is that if you called lmer() with argument family, it would call glmer() for you.
|
40,087
|
Which distributions are parameterization invariant when based on the Jeffreys prior?
|
1) Yes. And actually this is the interesting invariance property: it means that two Bayesians using a different parameterization of the model but both using the Jeffreys prior obtain the same posterior distribution (up to change-of-variables) to draw inference.
2) Conceptually, there's no prior predictive distribution based on the Jeffreys prior. The goal of the Jeffreys prior is to provide a posterior distribution which reflects as well as possible the information brought by the data. There's no prior belief about the parameters, hence no prior predictive distribution of the data.
3) It is not clear what you mean by invariance for a (prior or posterior) predictive distribution. But note that, from 1), two Bayesians using the Jeffreys prior but different parameterizations obtain the same posterior predictive distribution.
4) The MAP is the mode of the posterior distribution. It is not invariant, in the sense that if you use $\theta$ as the model parameter on one hand, and $\psi=f(\theta)$ on the other hand, with $f$ one-to-one, then the mode of the posterior distribution of $\psi$ is not the image under $f$ of the mode of the posterior distribution of $\theta$. That means that our two Bayesians, both using the Jeffreys prior but different parameterizations, will get incoherent results if they consider the MAP as the parameter estimate.
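The non-invariance of the MAP is easy to verify numerically. A sketch (Python; the Beta(3, 7) posterior and the log-odds reparameterization are chosen purely for illustration):

```python
# Posterior: theta ~ Beta(3, 7), mode = (3-1)/(3+7-2) = 0.25.
# Reparameterise psi = log(theta / (1 - theta)); the density of psi picks
# up a Jacobian factor theta*(1 - theta), which moves the mode.
import math

def beta_density(t):                 # unnormalised Beta(3, 7) density
    return t**2 * (1 - t)**6

def psi_density(psi):                # density of psi, including the Jacobian
    t = 1 / (1 + math.exp(-psi))
    return beta_density(t) * t * (1 - t)

# grid-search both modes
thetas = [i / 100000 for i in range(1, 100000)]
map_theta = max(thetas, key=beta_density)

psis = [-5 + i * 0.0001 for i in range(100000)]
map_psi = max(psis, key=psi_density)

psi_of_map_theta = math.log(map_theta / (1 - map_theta))
assert abs(map_theta - 0.25) < 1e-9            # mode of Beta(3, 7)
assert abs(map_psi - psi_of_map_theta) > 0.1   # the two MAPs disagree
print(map_psi, psi_of_map_theta)
```

Here `map_psi` sits near $\log(0.3/0.7)$ while the image of the $\theta$-MAP is $\log(0.25/0.75)$: the same posterior, two different "MAP estimates".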
|
40,088
|
Does $\sum_{i\neq j} \text{Cov}(X_i, X_j) = 0$ imply $\text{Cov}(X_i, X_j) = 0, \,\forall\,i \neq j$
|
Consider three random variables with covariance matrix
$$\left[\begin{matrix}1 & a & 0\\a & 1 & -a\\ 0 & -a & 1\end{matrix}\right]$$
which has leading principal minors $1$, $1-a^2$, and $1-2a^2$ and thus is positive definite
(as all covariance matrices must be) provided that $\vert a\vert < \frac{1}{\sqrt{2}}$.
Obviously this satisfies
$$\sum_{i\neq j} \text{Cov}(X_i, X_j) = 0 \tag{2}$$ but not
$$\text{Cov}(X_i, X_j) = 0, \,\forall\,i \neq j.\tag{1}$$
As noted in my (now-deleted) comment, correlation is a pairwise property and so $n$ random variables are uncorrelated means that $(1)$ holds: every pair of distinct random variables is uncorrelated. As far as I know, there is no name for random variables for which $(2)$
holds but $(1)$ does not. In the example provided, the random variables are not
significantly correlated in the sense that $Y$ "fails to explain" more than half
the variance of either $X$ or $Z$.
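A quick numerical check of the counterexample, taking $a = 0.5$ (Python sketch):

```python
# Check the counterexample for a = 0.5: the off-diagonal covariances sum
# to zero without all being zero, and the matrix is positive definite
# (leading principal minors 1, 1 - a^2, 1 - 2a^2 are all positive).
a = 0.5
C = [[1, a, 0],
     [a, 1, -a],
     [0, -a, 1]]

offdiag = [C[i][j] for i in range(3) for j in range(3) if i != j]
assert sum(offdiag) == 0             # condition (2) holds
assert any(v != 0 for v in offdiag)  # but condition (1) fails

m1 = C[0][0]
m2 = C[0][0]*C[1][1] - C[0][1]*C[1][0]
m3 = (C[0][0]*(C[1][1]*C[2][2] - C[1][2]*C[2][1])
      - C[0][1]*(C[1][0]*C[2][2] - C[1][2]*C[2][0])
      + C[0][2]*(C[1][0]*C[2][1] - C[1][1]*C[2][0]))
assert m1 > 0 and m2 > 0 and m3 > 0  # valid covariance matrix
print(m1, m2, m3)
```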
|
40,089
|
Chi-square independence test for A/B split testing
|
Here's one way to lay out the chi-squared test -- you don't necessarily need all this detail. The test doesn't care which way around you have rows and columns, so I'll do it
your way around.
Comparing proportions via Pearson's chi-squared test of independence
at the 5% level (for this example, at least - you choose your own level)
Null hypothesis - there is no difference in CTR between pages A & B.
Observed:
Clicks Nonclicks Impressions
A 10 55990 56000
B 21 77979 78000
Tot 31 133969 134000
Expected:
Clicks Nonclicks Impressions
A 12.96 55987.04 56000
B 18.04 77981.96 78000
Tot 31 133969 134000
(Entries in the body of the Expected table are row.total*column.total/overall.total)
Contribution to chi-square = (Observed - Expected)^2/Expected
Clicks Nonclicks
A 0.6741 0.0001560
B 0.4840 0.0001120
Chi-square = sum((Observed - Expected)^2/Expected)
= 1.158368
df = 1
p-value = 0.2818
At the 5% level we do not reject H0 - there's no evidence of a difference in
click through rate.
(You need tables or a program to find the p-value.)
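The same arithmetic in a few lines of code, just to make the expected-count formula concrete (a Python sketch):

```python
# Reproduce the worked table: expected = row.total * column.total / overall.total,
# chi-square = sum((Observed - Expected)^2 / Expected).
clicks = [10, 21]
imps   = [56000, 78000]
non    = [i - c for i, c in zip(imps, clicks)]      # 55990, 77979

tot_clicks, tot_non, tot = sum(clicks), sum(non), sum(imps)
chi2 = 0.0
for row in range(2):
    for obs, col_tot in ((clicks[row], tot_clicks), (non[row], tot_non)):
        exp = imps[row] * col_tot / tot             # expected count
        chi2 += (obs - exp) ** 2 / exp

assert abs(chi2 - 1.158368) < 1e-3   # matches the value in the table
print(round(chi2, 4))
```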
|
40,090
|
Chi-square independence test for A/B split testing
|
The chi-square test of independence tests whether there is a relationship between the variables, though, rather than stating its hypotheses directly in terms of a difference in CTR:
H0: Click-through and interface are independent.
Ha: Click-through and interface are not independent (that is, something interesting is going on; one of your interfaces is performing better)
You would prepare the test like this:
success <- c(10,21)
failure <- c(55990,77979)
my.table <- rbind(success,failure)
And then run the test:
> chisq.test(my.table)
Pearson's Chi-squared test with Yates' continuity correction
data: my.table
X-squared = 0.79955, df = 1, p-value = 0.3712
You get the same results as the previous respondent when you turn off the continuity correction (which many argue you don't need at all, unless you have fewer than 5 clicks on either A or B):
> chisq.test(my.table, correct=FALSE)
Pearson's Chi-squared test
data: my.table
X-squared = 1.1584, df = 1, p-value = 0.2818
You can easily access your expected value table and compute the chi-square test statistic manually if you want:
> chisq.test(my.table, correct=FALSE)$expected
[,1] [,2]
success 12.95522 18.04478
failure 55987.04478 77981.95522
An alternative would be the two-proportion z-test, which says:
H0: CTR of B - CTR of A = 0
Ha: CTR of B - CTR of A > 0 (B has a bigger CTR)
> source("https://raw.githubusercontent.com/NicoleRadziwill/R-Functions/master/z2test.R")
> z2.test(10,55990,21,77979)
$estimate
[1] -9.069995e-05
$ts.z
[1] -1.076516
$p.val
[1] 0.1408483
$cint
[1] -2.504372e-04 6.903728e-05
Notice that the p-value is large (0.14) and the confidence interval includes the value zero -- there's no difference between your A and B CTRs.
You could also use a chi-square test statistic to run the test and get a confidence interval:
> prop.test(c(10,21),c(55990,77979))
2-sample test for equality of proportions with continuity correction
data: c(10, 21) out of c(55990, 77979)
X-squared = 0.79999, df = 1, p-value = 0.3711
alternative hypothesis: two.sided
95 percent confidence interval:
-2.657756e-04 8.437566e-05
sample estimates:
prop 1 prop 2
0.0001786033 0.0002693033
Notice that the p-value is the same as when you ran the chi-square test of independence earlier, and the confidence interval also includes zero, indicating no significant difference between CTRs for your A and B.
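As a cross-check, the squared pooled two-proportion $z$ statistic equals the uncorrected chi-square statistic. A Python sketch using the full impression counts (56000, 78000) as denominators (the calls above pass the non-click counts as totals, which is why their decimals differ very slightly):

```python
# Pooled two-proportion z statistic; its square equals the (uncorrected)
# chi-square statistic for the same 2x2 table.
import math

x1, n1 = 10, 56000
x2, n2 = 21, 78000
p1, p2 = x1 / n1, x2 / n2
p_pool = (x1 + x2) / (n1 + n2)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
z = (p1 - p2) / se

assert abs(z) < 1.96                 # not significant at the 5% level
assert abs(z * z - 1.1584) < 1e-3    # z^2 matches the chi-square statistic
print(z, z * z)
```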
|
40,091
|
Value at $D_\max$ from Kolmogorov-Smirnov test in R
|
Something like this? Dmax occurs at the value max.at.
set.seed(12345)
x <- rnorm(10000, 5, 5)
y <- rnorm(10000, 7, 6.5)
# remove any missings from the data
x <- x[!is.na(x)]
y <- y[!is.na(y)]
ecdf.x <- ecdf(x)
ecdf.y <- ecdf(y)
plot(ecdf.x, xlim=c(min(c(x,y)), max(c(x,y))), verticals=T, cex.lab=1.2, cex.axis=1.3,
las=1, col="skyblue4", lwd=2, main="")
plot(ecdf.y, verticals=T, add=T, do.points=FALSE, cex.lab=1.2,
cex.axis=1.3, col="red", lwd=2)
n.x <- length(x)
n.y <- length(y)
n <- n.x * n.y/(n.x + n.y)
w <- c(x, y)
z <- cumsum(ifelse(order(w) <= n.x, 1/n.x, -1/n.y))
max(abs(z)) # Dmax
[1] 0.1664
ks.test(x,y)$statistic # the same
D
0.1664
(max.at <- sort(w)[which(abs(z) == max(abs(z)))])
[1] 9.082877
# Draw vertical line
abline(v=max.at, lty=2)
lines(abs(z)~sort(w), col="purple", lwd=2)
legend("topleft", legend=c("x", "y", "|Distance|"), col=c("skyblue4", "red", "purple"), lwd=c(2,2,2), bty="n")
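The same computation in language-neutral form, on a tiny made-up sample so that $D_{max}$ and its location can be checked by hand (Python sketch):

```python
# Two-sample KS statistic by brute force: evaluate both ECDFs on the
# pooled sample and take the largest absolute gap; record where it occurs.
x = [1.0, 2.0, 3.0, 4.0]
y = [3.0, 4.0, 5.0, 6.0]

def ecdf(sample, t):
    return sum(v <= t for v in sample) / len(sample)

grid = sorted(set(x + y))
gaps = [abs(ecdf(x, t) - ecdf(y, t)) for t in grid]
d_max = max(gaps)
max_at = grid[gaps.index(d_max)]     # first location achieving D_max

assert d_max == 0.5
assert max_at == 2.0                 # F_x has reached 0.5 while F_y is still 0
print(d_max, max_at)
```

Note that $D_{max}$ can be attained at several points (here at 2, 3, and 4); like the R code above, this picks one representative.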
|
40,092
|
Value at $D_\max$ from Kolmogorov-Smirnov test in R
|
You could also use @COOLSerdash's answer plus environments to make the ks.test function output the value directly, like this:
ks.test.2 <- function(x, y, ..., alternative = c("two.sided", "less", "greater"),
exact = NULL)
{
e <- new.env()
ks.test.2 <- ks.test
environment(ks.test.2) <- e
e$C_pkstwo <- stats:::C_pkstwo
e$C_psmirnov2x <- stats:::C_psmirnov2x
e$C_pkolmogorov2x <- stats:::C_pkolmogorov2x
e$return <- function(x){
w<-get("w", envir=parent.frame())
z<-get("z", envir=parent.frame())
x$max.at <- sort(w)[which(abs(z) == max(abs(z)))]
return(x)
}
ks.test.2(x, y, ..., alternative = c("two.sided", "less", "greater"),
exact = NULL)
}
The function ks.test.2 should behave exactly like ks.test except that now it also returns the desired max.at component.
set.seed(12345)
x <- rnorm(10000, 5, 5)
y <- rnorm(10000, 7, 6.5)
ks.test.2(x,y)$max.at
# [1] 9.082877
This is only for the two-sided alternative, but you could enhance it to deal with the one-sided alternative if desired.
|
40,093
|
How to explain how I divided a bimodal distribution based on kernel density estimation
|
You could fit a two-component mixture model using http://cran.r-project.org/web/packages/mixtools/index.html. Try using normalmixEM. You could then follow Erich Schubert's suggestions and find the region where Pr[data point was generated from the component with the smaller mean] >= 0.50.
Edit: example R code:
library(mixtools)
simulate <- function(lambda=0.3, mu=c(0, 4), sd=c(1, 1), n.obs=10^5) {
x1 <- rnorm(n.obs, mu[1], sd[1])
x2 <- rnorm(n.obs, mu[2], sd[2])
return(ifelse(runif(n.obs) < lambda, x1, x2))
}
x <- simulate()
model <- normalmixEM(x=x, k=2)
index.lower <- which.min(model$mu) # Index of component with lower mean
find.cutoff <- function(proba=0.5, i=index.lower) {
## Cutoff such that Pr[drawn from bad component] == proba
f <- function(x) {
proba - (model$lambda[i]*dnorm(x, model$mu[i], model$sigma[i]) /
(model$lambda[1]*dnorm(x, model$mu[1], model$sigma[1]) + model$lambda[2]*dnorm(x, model$mu[2], model$sigma[2])))
}
return(uniroot(f=f, lower=-10, upper=10)$root) # Careful with division by zero if changing lower and upper
}
cutoffs <- c(find.cutoff(proba=0.5), find.cutoff(proba=0.75)) # Around c(1.8, 1.5)
hist(x)
abline(v=cutoffs, col=c("red", "blue"), lty=2)
|
40,094
|
How to explain how I divided a bimodal distribution based on kernel density estimation
|
It would probably make more sense if you also estimated the "height" (actually, more of a weight) of both, and then set the threshold to the tipping point.
I.e. model the data as $$p_1 \cdot pdf(x, \mu_1, \sigma_1) + p_2 \cdot pdf(x, \mu_2, \sigma_2)$$
and set the threshold to $x$ where $$p_1 \cdot pdf(x, \mu_1, \sigma_1) = p_2 \cdot pdf(x, \mu_2, \sigma_2)$$
i.e. the point where an object has the same chance of belonging to either class.
You can still add a parameter to tune how conservative your method is, e.g. using
$$p_1 \cdot pdf(x, \mu_1, \sigma_1) = c\cdot p_2 \cdot pdf(x, \mu_2, \sigma_2)$$
where $c=2$ would put double weight on the second distribution.
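Numerically, that tipping point is a one-dimensional root-finding problem. Here is a hedged sketch in Python (pure standard library; it assumes $\mu_1 < \mu_2$ with a single crossing between the means, uses the same $c$ parameter as above, and all function names are made up for illustration):

```python
import math

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def tipping_point(p1, mu1, s1, p2, mu2, s2, c=1.0, tol=1e-9):
    """Bisect for the x between mu1 and mu2 where p1*pdf1(x) == c*p2*pdf2(x)."""
    f = lambda x: p1 * normal_pdf(x, mu1, s1) - c * p2 * normal_pdf(x, mu2, s2)
    lo, hi = mu1, mu2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# Equal weights and sds: the threshold lands halfway between the means.
print(tipping_point(0.5, 0.0, 1.0, 0.5, 4.0, 1.0))   # ~ 2.0
# c = 2 puts double weight on the second component, pulling the cutoff down.
print(tipping_point(0.5, 0.0, 1.0, 0.5, 4.0, 1.0, c=2.0))  # ~ 1.83
```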
|
40,095
|
How to explain how I divided a bimodal distribution based on kernel density estimation
|
I am using this example and I sometimes get this error:
Error in uniroot(f = f, lower = -10, upper = 10) :
f() values at end points not of opposite sign
So I changed lower to -1, which fixed it for some of the datasets, but it still errors out on others. Can the bounds be set dynamically based on the input vector (i.e., x)?
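One possible way to choose the bounds from the data (a hedged Python sketch of the general idea rather than a drop-in `uniroot` fix; `f` below is a stand-in for the mixture-posterior equation, and the data are made up): a bracketing root-finder needs f to change sign between its endpoints, so scan adjacent sorted data points for a sign change and use that pair as the bracket.

```python
def find_bracket(f, xs):
    """Scan adjacent sorted data points for a sign change of f and return the pair."""
    xs = sorted(xs)
    for a, b in zip(xs, xs[1:]):
        if f(a) * f(b) <= 0:
            return a, b
    raise ValueError("no sign change inside the data range")

f = lambda x: x - 1.8            # stand-in for the mixture-posterior equation
xs = [-3.0, 0.5, 1.0, 2.5, 6.0]  # made-up "input vector"
print(find_bracket(f, xs))       # -> (1.0, 2.5)
```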
|
40,096
|
Is it valid to use a difference score as an independent variable in a regression analysis
|
Difference scores as independent variables are fine, but they impose a functionally more restrictive form on the equation. Consider:
$y = \beta_{11}(X_2) - \beta_{21}(X_1) + e_1$
versus the equation
$y = \beta_{12}(\Delta X) + e_2$
where $\Delta X = X_2 - X_1$. You can see the second equation is a special case of the first when $\beta_{11} = \beta_{21}$. Only when you have very good reason to believe the more functionally restrictive form is reasonable should you use the change scores.
You could actually have situations in which $\beta_{11}$ and $\beta_{21}$ have countervailing effects (e.g. $\beta_{11} = -\beta_{21}$), and the change score would appear to be inconsequential when in reality the two individual components contribute to the outcome.
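A tiny numerical illustration of that countervailing case (Python, with made-up scores; the `pearson` helper exists only for the demo): the outcome is driven by the sum of the two scores — i.e. $\beta_{11} = -\beta_{21}$ in the notation above — so the difference score is exactly uncorrelated with it even though both components matter.

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb)

x1 = [1, 2, 3, 4]
x2 = [1, 3, 2, 4]
y = [a + b for a, b in zip(x1, x2)]   # beta_11 = -beta_21: both scores drive y...
d = [b - a for a, b in zip(x1, x2)]   # ...yet the difference score carries no signal
print(pearson(y, d))  # -> 0.0
```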
|
40,097
|
Is it valid to use a difference score as an independent variable in a regression analysis
|
As far as I know, there is no reason you can't use a difference score as an independent variable in a regression. It violates no assumptions.
Your second question is more complex. Your idea of using T1 measures as covariates is often done. People also sometimes use difference scores as a DV (as you probably know from your reading).
There are some problems with pre-/post-testing when the variables are measured with error (as all psychological variables inevitably are). If I recall correctly, there are details in Collins and Horn.
|
40,098
|
Is it valid to use a difference score as an independent variable in a regression analysis
|
Using a difference score as a predictor in multiple regression will usually lead to some loss of model fit, i.e. R-squared will be less than what it could be if you leave both variables in the difference score to be their own predictors with their own slopes. That is, if you have a model like this:
$$
y' = a + b_1 d, \text{where}~~~ d = (x_1 - x_2)
$$
It is the same as forcing the two slopes to be equal in magnitude but opposite in sign. That is, distributing $b_1$,
$$
y' = a + b_1 (x_1 - x_2)
$$
and thus $y' = a + b_1 x_1 - b_1 x_2$.
So, in a sense it is forcing the linear restriction that you use +1 and $-1$ coefficients, or at least equal but opposite in sign slopes.
To maximize the fit of the model, use this approach instead, allowing the slopes for both variables to just be estimated freely (and if they happen to be equal but opposite in sign, then the difference score is okay):
$$
y' = a + b_1 x_1 + b_2 x_2
$$
The loss of R-square or model fit depends on how different the freely estimated slopes would be from the linear restriction of the difference score. Running a simulation with different population slopes, we've found that the loss in R-square (predictable variance in the DV) ranges from zero to about .83, so it can be small or drastic.
Bottom line - just use the regular model with the two variables with their own estimated slopes as the last model above. If the best fit results from time1 - time2 (difference score) then it will be estimated as such, and if not, then your model fit will be much better.
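The loss of fit is easy to reproduce numerically. Here is a self-contained sketch (Python rather than R, with OLS coded by hand only to avoid dependencies; the data are made up): the outcome is generated with slopes 2 and 0.5 — not equal-and-opposite — so the difference-score model captures almost none of the variance that the free-slopes model does.

```python
def ols_r2(X, y):
    """R-squared of a least-squares fit y ~ X (X includes the intercept column)."""
    n, p = len(X), len(X[0])
    # Normal equations X'X b = X'y as an augmented matrix, solved by Gauss-Jordan.
    A = [[sum(X[i][j] * X[i][k] for i in range(n)) for k in range(p)] +
         [sum(X[i][j] * y[i] for i in range(n))] for j in range(p)]
    for j in range(p):
        piv = max(range(j, p), key=lambda r: abs(A[r][j]))
        A[j], A[piv] = A[piv], A[j]
        for r in range(p):
            if r != j:
                m = A[r][j] / A[j][j]
                A[r] = [a - m * b for a, b in zip(A[r], A[j])]
    b = [A[j][p] / A[j][j] for j in range(p)]
    yhat = [sum(bj * xj for bj, xj in zip(b, row)) for row in X]
    ybar = sum(y) / n
    ss_res = sum((yi - yh) ** 2 for yi, yh in zip(y, yhat))
    ss_tot = sum((yi - ybar) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

x1 = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
x2 = [2, 1, 4, 3, 6, 5, 8, 7, 10, 9]
y = [1 + 2 * a + 0.5 * b for a, b in zip(x1, x2)]  # slopes NOT equal-and-opposite

r2_full = ols_r2([[1, a, b] for a, b in zip(x1, x2)], y)   # free slopes
r2_diff = ols_r2([[1, a - b] for a, b in zip(x1, x2)], y)  # difference score only
print(r2_full, r2_diff)  # full model fits (essentially) perfectly; the restricted one does not
```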
References:
Edwards, J. R. (2001). Ten difference score myths. Organizational Research Methods, 4, 264-286.
Helford, M. C., Thomas, A. L., Montoya, M. M., Nguyen, L. H., Jamaspi, A. P., & Chung, A. Y. (Roosevelt University). Difference scores in linear regression: Model fit with correlated predictors. A statistical simulation was used to estimate the loss of model fit in linear regression when using difference scores with correlated predictors compared to non-difference-score models; differences in model fit ranged from 0 to 0.84 across 9 simulated populations.
|
40,099
|
What is coskewness and how can it be calculated?
|
In this paper coskewness is defined as
$$
coskew_{i,m} = \frac{COV(r_i,(r_m-\mu_m)^2) }{E[(r_m-\mu_m)^3]}.
$$
You can calculate it by using the standard moment estimators - that's what I would do.
Thus, given a sample for market returns $(r_m^j)_{j=1}^N$ and asset returns $(r_i^j)_{j=1}^N$ you calculate the quantities for each sample pair and do the calculation.
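As a sketch of those moment estimators (Python, pure standard library; the function name and sample data are made up for illustration), replace each expectation in the formula above with a sample average:

```python
def coskewness(ri, rm):
    """Sample analogue of cov(r_i, (r_m - mu_m)^2) / E[(r_m - mu_m)^3]."""
    n = len(rm)
    mu_m = sum(rm) / n
    mu_i = sum(ri) / n
    dm = [x - mu_m for x in rm]
    num = sum((a - mu_i) * d * d for a, d in zip(ri, dm)) / n  # cov(r_i, (r_m - mu_m)^2)
    den = sum(d * d * d for d in dm) / n                       # third central moment of r_m
    return num / den

# Sanity check: if the asset IS the market, numerator and denominator coincide,
# so the measure equals 1 by construction.
r = [0.01, -0.03, 0.02, 0.05, -0.04, 0.01]
print(coskewness(r, r))  # -> 1.0
```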
|
40,100
|
What is coskewness and how can it be calculated?
|
Is there a standard definition?
Yes, these quantities were defined in
Kraus, A., Litzenberger, R.H., 1976. Skewness preference and the valuation of risk assets.
Journal of Finance 31, 1085-1100.
How to calculate it?
Check page 6 of the following document
http://asianfa2012.mcu.edu.tw/fullpaper/10312.pdf
what are my alternative options?
Depends on what information you need.
Is it possible to have a normalized coskewness like correlation coefficient?
Yes, on page 6 of the aforementioned document the authors say
More recent studies use standardised measures of co-skewness and co-kurtosis (Harvey and Siddique, 2000; Monero and Rodríguez, 2009), which are better behaved, with less extreme observations and smaller variance ...
|