idx | question | answer
|---|---|---|
40,701
|
Do we have to fix splits before 10-fold cross-validation if we want to compare different algorithms?
|
The results will be sensitive to the splits, so you should compare models on the same partitioning of the data. Compare these two approaches:
Approach 1 will compare two models, but use the same CV partitioning.
Approach 2 will compare two models, but the first model will have a different CV partitioning than the second.
We'd like to select the best model. The problem with approach 2 is that the difference in performance between the two models will come from two different sources: (a) the differences between the two CV partitionings and (b) the differences between the algorithms themselves (say, random forest and logistic regression). If one model outperforms the other, we won't know if that difference in performance is entirely, partially, or not at all due to the differences in the two CV partitions. On the other hand, any difference in performance using approach 1 cannot be due to differences in how the data were partitioned, because the partitions are identical.
To fix the partitioning, use cvTools to create your (repeated) CV partitions and store the results.
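As an illustration of approach 1 outside R, here is a hedged sketch in Python using scikit-learn (the data and model choices are made up for the example): the fold indices are generated once, stored, and reused for every model.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, cross_val_score

# Toy data standing in for the real problem.
X, y = make_classification(n_samples=200, n_features=10, random_state=0)

# Approach 1: build the CV partitioning once and store it.
cv = KFold(n_splits=10, shuffle=True, random_state=42)
folds = list(cv.split(X, y))  # materialized, so both models see identical splits

scores_lr = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=folds)
scores_rf = cross_val_score(RandomForestClassifier(random_state=0), X, y, cv=folds)

# Any difference between these cannot be due to how the data were partitioned.
print(scores_lr.mean(), scores_rf.mean())
```

Because `folds` is a plain list of index pairs, it can also be saved to disk and reused across sessions.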
|
40,702
|
Do we have to fix splits before 10-fold cross-validation if we want to compare different algorithms?
|
It certainly helps, but isn't absolutely essential.
The choice of cross-validation splits introduces a source of (uninteresting) variability. Using the same set of splits removes this source of variance, which might increase your ability to detect variability in the performance of different classifiers (if it exists), which is typically much more interesting.
If you can control the splits, then you really ought to--it's an easy way to increase the power of your experiment without doing much additional work. On the other hand, if you already have some difficult-to-replicate results where you forgot to store the splits, you can certainly use that data; just be aware that comparisons based on those results will not be as powerful as they could be. People often assume that the partition-related variance is a lot smaller than the across-classifier variance, though that may not be true, especially if your classes are very unbalanced.
As for e1071 specifically, the docs for tune say
Cross-validation randomizes the data set before building the splits
which—once created—remain constant during the training process. The
splits can be recovered through the train.ind component of the
returned object.
You could let tune generate the folds itself for the first algorithm, then use them, via tune.control (with sampling=fixed), to evaluate subsequent ones. Alternatively, it looks like tune generates its partition using the global PRNG (via a call to sample) at the very beginning of the function (Line 71 of Tune.R in the source), so you may be able to generate the same folds by resetting the random number generator's seed before each call to tune. This seems a little brittle, though. Finally, this is fairly easy to program yourself.
|
40,703
|
Do we have to fix splits before 10-fold cross-validation if we want to compare different algorithms?
|
In addition to @Matt Krause's answer:
I'd approach the question from two different sides:
One of the basic assumptions underlying cross validation is that the models built on the different splits are equal (or at least equivalent). This allows pooling the results from all those splits. If that assumption is met, then the splitting doesn't matter.
However, in practice it does happen that the splitting introduces non-negligible variance, i.e., models are unstable with respect to slight changes in the training data, so this variation should be checked anyway (even if you are not optimizing anything).
Evaluating different classifiers on the same splits means that you can evaluate the comparison in a paired test, which is more powerful than the corresponding unpaired tests: this is why you can detect smaller changes in performance by keeping the splits constant.
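The power gain from pairing can be sketched in Python (the per-fold accuracies below are hypothetical, and scipy is assumed available): because both classifiers see the same splits, the shared split effect cancels in the paired differences.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-fold accuracies for two classifiers evaluated on the
# SAME 10 splits: fold difficulty varies a lot, classifier B is slightly better.
fold_effect = rng.normal(0.0, 0.05, size=10)   # split-related variability
acc_a = 0.80 + fold_effect + rng.normal(0.0, 0.01, size=10)
acc_b = 0.82 + fold_effect + rng.normal(0.0, 0.01, size=10)

# Paired test: the common fold effect cancels in the differences.
p_paired = stats.ttest_rel(acc_a, acc_b).pvalue
# Unpaired test: the fold-related variance stays in the denominator.
p_unpaired = stats.ttest_ind(acc_a, acc_b).pvalue

print(p_paired, p_unpaired)
```

With split-related variance this large, the paired p-value comes out far smaller than the unpaired one, which is exactly the point of keeping the splits constant.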
|
40,704
|
Do we have to fix splits before 10-fold cross-validation if we want to compare different algorithms?
|
Though this has already been answered, if you want some code that will allow you to use the same CV splits in caret for multiple model trainings, you can use the following:
tune_control <- trainControl(
method = "repeatedcv",
repeats = 2,
number = 5,
index = createMultiFolds(df$y, k=5, times=2) # assuming your object is df and you are modeling y
)
You can manually check this worked by training two models and comparing the output of:
model$control$index # replace model w/ name of your model
which should print out something like:
List of 10
Fold1.Rep1: int [1:2400] 1 2 3 4 5 7 9 10 11 12 ...
Fold2.Rep1: int [1:2400] 1 2 3 4 5 6 7 8 9 10 ...
Fold3.Rep1: int [1:2400] 2 3 4 6 8 9 10 11 13 14 ...
Fold4.Rep1: int [1:2400] 1 2 5 6 7 8 11 12 16 18 ...
Fold5.Rep1: int [1:2400] 1 3 4 5 6 7 8 9 10 11 ...
Fold1.Rep2: int [1:2400] 1 3 4 5 6 8 10 11 12 14 ...
Fold2.Rep2: int [1:2400] 1 2 3 4 5 6 7 8 9 10 ...
Fold3.Rep2: int [1:2400] 2 3 4 5 7 8 9 10 11 12 ...
Fold4.Rep2: int [1:2400] 1 2 3 5 6 7 8 9 11 12 ...
Fold5.Rep2: int [1:2400] 1 2 4 6 7 9 10 13 16 17 ...
|
40,705
|
Why does the p-value double when using a two-tailed test compared to a one-tailed one? [duplicate]
|
A p-value is the probability of obtaining a result at least as extreme as the one observed.
In the case of a two-tailed z-test, "more extreme" means having a z-value at least as great in magnitude (at least as far from zero) as the observed z-value.
So if your sample gives a z-value of say 1.3 (just for an example), then the p-value will be the area to the right of 1.3 plus the area to the left of -1.3.
Similarly if your sample gives a z-value of -2.1, then the p-value will be the area to the left of -2.1 plus the area to the right of 2.1.
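The two tail areas can be checked numerically; a small Python sketch (scipy assumed available):

```python
from scipy.stats import norm

z = 1.3
right_tail = 1 - norm.cdf(z)    # area to the right of 1.3
left_tail = norm.cdf(-z)        # area to the left of -1.3
p_two_sided = right_tail + left_tail

# By the symmetry of the normal curve the two tails are equal,
# so the two-sided p-value is exactly double the one-sided one.
print(p_two_sided, 2 * right_tail)
```

For z = 1.3 this gives a two-sided p-value of about 0.194, twice the one-sided 0.097.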
|
40,706
|
Why does the p-value double when using a two-tailed test compared to a one-tailed one? [duplicate]
|
When you do a two-tailed test you are in fact obtaining both the positive and the negative of the statistic. Remember, a two-tailed test means that you are testing whether your alternative hypothesis is different from the null, which could mean either greater or less than. The "greater than" part gives you the critical region on the positive side of the curve, whereas the "less than" part gives you the other critical region on the negative side of the curve.
Since both regions are the same, you will usually only use either of the statistics (positive or negative) and divide your level of significance in half to account for the other side.
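Halving the significance level moves the critical value outward; a quick numerical check in Python (scipy assumed available):

```python
from scipy.stats import norm

alpha = 0.05

# One-tailed: all of alpha sits in a single tail.
z_one_tailed = norm.ppf(1 - alpha)        # about 1.645

# Two-tailed: alpha is split in half between the two tails.
z_two_tailed = norm.ppf(1 - alpha / 2)    # about 1.960

print(z_one_tailed, z_two_tailed)
```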
|
40,707
|
Is re-randomization a valid approach to estimate statistical significance
|
The idea of a randomization test is that if a given treatment has no effect on an outcome, then the assignment of that treatment is just a kind of arbitrary labeling. (Fisher's exact test was the first method to be based on this concept.) Now if we have some statistic and we want to know its distribution under the null hypothesis of no treatment effect, we can estimate this null distribution through simulation by randomly relabeling the observations and looking at the behavior of our statistic in this setting, because then the null hypothesis is effectively true.
The example you give is an interesting one, but notice that it isn't the size of the difference in average time that we'd take as evidence that bus A is faster, but the fact that bus A is always faster. So a more sensible test statistic would be something that measures this more directly, like the statistic used in Wilcoxon's rank sum test. If you did a randomization test using a rank sum statistic instead then you would get a highly "significant" result.
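A minimal randomization test with a rank-sum statistic might look like this in Python (the bus times below are made up so that A is always faster, but only by a little):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical times: bus A is ALWAYS faster, though never by much.
a = np.array([9.0, 9.1, 9.2, 9.3, 9.4, 9.5])
b = np.array([9.6, 9.7, 9.8, 9.9, 10.0, 10.1])

def rank_sum(x, y):
    # Wilcoxon-style statistic: sum of the ranks of x in the pooled sample.
    pooled = np.concatenate([x, y])
    ranks = pooled.argsort().argsort() + 1
    return ranks[: len(x)].sum()

observed = rank_sum(a, b)          # minimal possible value here: 1+2+...+6 = 21
pooled = np.concatenate([a, b])
n = len(a)

# Randomly relabel the observations to simulate the null distribution.
null_stats = np.array([
    rank_sum(p[:n], p[n:])
    for p in (rng.permutation(pooled) for _ in range(10000))
])

# One-sided p-value: how often a random labeling is at least this extreme.
p_value = (null_stats <= observed).mean()
print(p_value)
```

Because "A is always faster" forces the smallest possible rank sum, the randomization p-value is tiny even though the difference in averages is small.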
|
40,708
|
Is re-randomization a valid approach to estimate statistical significance
|
Okay, I'm a bit late to this party, but while I agree with what dsaxton says in the first paragraph, I think the second paragraph gets lost.
Re-randomization works very well to specify the null distribution for a large variety of statistics. However, you've managed to cause a problem by combining two pathological distributions (point masses at 9 and 10, respectively) with the median -- a statistic which is perhaps at its least useful when there are only two possible values, because it can become very unstable.
I'm going to try to walk through comparisons for several sample sizes to show what is happening here. It should help explain dsaxton's insight that the consistency is where the real statistical power lies.
Imagine we took one ride on each bus. We get one 9 and one 10.
We randomize 10,000 times to conduct inference. In half of them, the positions switch, in half they don't. Thus if we measured medians, half the time the difference in medians will be -1, and half the time it will be 1. Similarly for means, half the time the difference in means will be -1 and half the time it will be 1.
Now imagine we took 10 rides on each bus, resulting in ten 10s and ten 9s.
We re-randomize.
This time, most of the randomizations result in about five 10s and five 9s in each sample. The means will form normal (really shifted binomial) distributions around 9.5 for each sample, giving a difference centered on 0.
The difference in medians can occasionally be 0 -- if we actually get five of each value in each sample -- giving medians of 9.5 in each sample, but it's more likely to have a slight imbalance. That slight imbalance makes the medians 9 and 10 or 10 and 9. Thus most of the time the difference of medians will be either -1 or 1, which is similar to our real result, giving the extra high p-value.
It may seem like continuing to raise the number of bus rides should fix this problem, but while that makes the mean more stable -- fixing the null firmly around 0 -- it actually destabilizes the median. It becomes less and less likely to get that exact match, and so the middle ground disappears.
Okay. Maybe that made sense. I'm going to include some R code to make this concrete.
n = 10
a = rep(10,n) #initial samples
b = rep(9,n)
joint.sample = c(a,b) #Combining samples for ease
bootstraps = 10000 #Number of replications
est.mean = mean(a) - mean(b) #Estimate of treatment
boot.mean = replicate(bootstraps, {
new.sample = sample(joint.sample)
mean(new.sample[1:n]) - mean(new.sample[(n + 1):(2 * n)])
}) #Simply resamples and takes means of the two groups
CI.mean = quantile(boot.mean, prob = c(0.025, 0.975)) #Calculates a CI
pval.mean = mean(boot.mean >= est.mean)*2 #Two-sided p-value
#Same things but with median
est.median = median(a)-median(b)
boot.median = replicate(bootstraps, {
new.sample = sample(joint.sample)
median(new.sample[1:n]) - median(new.sample[(n + 1):(2 * n)])
})
CI.median = quantile(boot.median, prob = c(0.025, 0.975))
pval.median = mean(boot.median >= est.median)*2
That should give results for you that show that randomization with a mean would strongly reject that these were the same. Feel free to fiddle with the sample size n to see how that affects things, but mostly, for such a clear-cut case as this, it doesn't take a large sample to spot the difference. You should also be able to reject using a median -- but you would need a different pair of distributions such that the medians moved around a bit more. Anything continuous should do, I think, and then it's a matter of sample size.
One note of caution. I used the defaults for the sample function here, which samples without replacement (i.e., a permutation). In general you want to think really hard about which sampling type you're using, because that can and will affect results.
|
40,709
|
Gradient for hinge loss multiclass
|
Let's use the example of the SVM loss function for a single datapoint:
$L_i = \sum_{j\neq y_i} \left[ \max(0, w_j^Tx_i - w_{y_i}^Tx_i + \Delta) \right]$
Where $\Delta$ is the desired margin.
We can differentiate the function with respect to the weights. For example, taking the gradient with respect to $w_{y_i}$ we obtain:
$\nabla_{w_{y_i}} L_i = - \left( \sum_{j\neq y_i} \mathbb{1}(w_j^Tx_i - w_{y_i}^Tx_i + \Delta > 0) \right) x_i$
where $\mathbb{1}$ is the indicator function, which is one if the condition inside is true and zero otherwise. While the expression may look scary when it is written out, when you're implementing this in code you'd simply count the number of classes that didn't meet the desired margin (and hence contributed to the loss function); the data vector $x_i$ scaled by this number is the gradient. Notice that this is the gradient only with respect to the row of $W$ that corresponds to the correct class. For the other rows, where $j \neq y_i$, the gradient is:
$\nabla_{w_j} L_i = \mathbb{1}(w_j^Tx_i - w_{y_i}^Tx_i + \Delta > 0) x_i$
Once you derive the expression for the gradient it is straightforward to implement the expressions and use them to perform the gradient update.
Taken from the Stanford CS231n optimization notes posted on GitHub.
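A quick NumPy sketch (toy sizes and random values, not from the notes) that implements both gradient expressions and verifies them against a numerical gradient:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: 4 classes, 5 features, one datapoint.
W = rng.normal(size=(4, 5))    # one row of weights per class
x = rng.normal(size=5)
y = 2                          # index of the correct class
delta = 1.0                    # desired margin

def loss(W):
    scores = W @ x
    margins = np.maximum(0.0, scores - scores[y] + delta)
    margins[y] = 0.0           # the j != y_i restriction
    return margins.sum()

def grad(W):
    scores = W @ x
    active = (scores - scores[y] + delta > 0).astype(float)
    active[y] = 0.0
    dW = np.outer(active, x)       # rows j != y_i: 1(margin > 0) * x_i
    dW[y] = -active.sum() * x      # correct-class row: -(# active) * x_i
    return dW

# Central-difference check (valid away from the kinks of the max).
num = np.zeros_like(W)
eps = 1e-6
for i in range(W.shape[0]):
    for j in range(W.shape[1]):
        Wp, Wm = W.copy(), W.copy()
        Wp[i, j] += eps
        Wm[i, j] -= eps
        num[i, j] = (loss(Wp) - loss(Wm)) / (2 * eps)

print(np.abs(num - grad(W)).max())
```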
|
40,710
|
Gradient for hinge loss multiclass
|
First of all, note that the multi-class hinge loss is a function of the weight vectors $W_r$.
\begin{equation}
l(W_r) = \max( 0, 1 + \underset{r \neq y_i}{ \max } W_r \cdot x_i - W_{y_i} \cdot x_i)
\end{equation}
Next, the max function is non-differentiable at $0$, so we need to calculate its subgradient.
\begin{equation}
\frac{\partial l(W_r)}{\partial W_r} =
\begin{cases}
\{0\}, & W_{y_i}\cdot x_i > 1 + \underset{r \neq y_i}{ \max } W_r \cdot x_i \\
\{x_i\}, & W_{y_i}\cdot x_i < 1 + \underset{r \neq y_i}{ \max } W_r \cdot x_i\\
\{\alpha x_i\}, & \alpha \in [0,1], W_{y_i}\cdot x_i = 1 + \underset{r \neq y_i}{ \max } W_r \cdot x_i
\end{cases}
\end{equation}
In the second case, the term $W_{y_i} \cdot x_i$ does not depend on $W_r$, so it contributes nothing to the derivative. The above definition of the subgradient of the multi-class hinge loss is analogous to the subgradient of the binary hinge loss.
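The first two cases can be sanity-checked numerically; a hedged NumPy sketch (random toy values, assumed sizes) differentiating with respect to the maximizing row $W_r$:

```python
import numpy as np

rng = np.random.default_rng(3)

W = rng.normal(size=(4, 5))    # hypothetical weight rows W_r
x = rng.normal(size=5)
y = 1                          # correct class index

def loss(W):
    scores = W @ x
    competing = np.delete(scores, y)            # scores for r != y_i
    return max(0.0, 1.0 + competing.max() - scores[y])

scores = W @ x
others = [r for r in range(4) if r != y]
r = others[int(np.argmax(scores[others]))]      # the maximizing row

# Numerical gradient of the loss with respect to W[r].
eps = 1e-6
num = np.zeros(5)
for k in range(5):
    Wp, Wm = W.copy(), W.copy()
    Wp[r, k] += eps
    Wm[r, k] -= eps
    num[k] = (loss(Wp) - loss(Wm)) / (2 * eps)

# Case 2 predicts x_i when the loss is active, case 1 predicts 0 otherwise.
expected = x if loss(W) > 0 else np.zeros(5)
print(np.abs(num - expected).max())
```

The check is only valid away from the kink (the third, $\alpha x_i$ case), which a random draw almost surely avoids.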
|
40,711
|
R: glm(...,family=poisson) plot confidence and prediction intervals [closed]
|
R's predict ought to be able to do a confidence interval for a GLM but definitely won't do a prediction interval -- there's an underlying statistical issue here, which I'll discuss in this answer.
i) For some GLMs it doesn't make sense to even try to do a PI - consider a logistic regression with 0/1 responses, and imagine you want say a 95% PI. Anywhere that E(Y) is not very close to 0 or 1, a prediction interval will have to include all of 0 to 1, and when E(Y) is very close to 0 or 1, the interval degenerates to just a point.
ii) for many other GLMs there's no ready analytic prediction interval. For example, there's generally no pivotal quantity for the prediction like there is in the normal case. The Poisson is among those.
A number of papers have looked at ways to get at approximate prediction intervals for some cases, and there's also options like bootstrapping, but because of issues like those I've mentioned, there's no prediction interval in R's GLM.
[To produce a CI for the mean, see ?predict.glm; predict(fit, type="response", se.fit=TRUE) will give the mean and standard error, which can be used to get an approximate (asymptotic) interval. Alternatively, you could use the default type="link" and transform an interval for that. ]
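The transform-the-link-interval route in that last sentence works like this; `eta_hat` and `se` below are hypothetical stand-ins for the fit and se.fit values returned on the link (log) scale:

```python
import math

# Hypothetical link-scale (log) prediction and standard error for a
# Poisson GLM, standing in for predict(fit, type="link", se.fit=TRUE).
eta_hat, se = 1.0, 0.2
z = 1.96                                # approximate 95% normal quantile

# Wald interval on the link scale, then back-transformed to the mean scale.
lo = math.exp(eta_hat - z * se)
hi = math.exp(eta_hat + z * se)
mean = math.exp(eta_hat)
print(lo, mean, hi)                     # interval is asymmetric around the mean
```

Unlike adding ±1.96·se directly on the response scale, an interval built this way can never dip below zero, which is one reason the link-scale route is usually preferred for a Poisson mean.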
|
40,712
|
GridSearchCV Regression vs Linear Regression vs Stats.model OLS
|
The difference between the scores can be explained as follows:
In your first model, you are performing cross-validation. When cv=None, or when it is not passed as an argument, GridSearchCV will default to cv=3. With three folds, each model will train using two-thirds of the data and test using the remaining third. Since you already split the data 70%/30% before this, each model built using GridSearchCV trains on about 0.7 × 2/3 ≈ 0.47 (47%) of the original data.
In your second model, there is no k-fold cross-validation. You have a single model that is trained on 70% of the original data, and tested on the remaining 30%. Since the model has been given much more data, a higher score is as expected.
In your last model, you train another single model on 70% of the data. However, this time you do not test it using the 30% of the data you saved for testing. As you suspected, you are looking at the training error, not the testing error. It is almost always the case that the training error is better than the test error, so the higher score is, again, as expected.
When and how can we use GridSearchCV on a regression model?
GridSearchCV should be used to find the optimal parameters with which to train your final model. Typically, you should run GridSearchCV and then look at the parameters that gave the model with the best score. You should then take these parameters and train your final model on all of the data. It is important to note that if you have trained your final model on all of your data, you cannot test it. For any valid test, you must reserve some of the data.
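The workflow in that last paragraph — use CV only to pick parameters, then refit on everything — can be sketched without scikit-learn; here a hand-rolled 3-fold search over a ridge penalty stands in for GridSearchCV (all names and data are illustrative):

```python
import numpy as np

def ridge_fit(X, y, alpha):
    # Closed-form ridge solution: (X'X + alpha I)^(-1) X'y
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ y)

def cv_mse(X, y, alpha, k=3):
    # Plain k-fold CV: each fit trains on (k-1)/k of the rows.
    idx = np.arange(len(y))
    folds = np.array_split(idx, k)
    errs = []
    for f in folds:
        train = np.setdiff1d(idx, f)
        w = ridge_fit(X[train], y[train], alpha)
        errs.append(np.mean((X[f] @ w - y[f]) ** 2))
    return float(np.mean(errs))

rng = np.random.default_rng(1)
X = rng.normal(size=(90, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.1, size=90)

# Step 1: grid search -- pick the penalty with the best CV score.
alphas = [0.01, 0.1, 1.0, 10.0]
best_alpha = min(alphas, key=lambda a: cv_mse(X, y, a))
# Step 2: refit the final model with that penalty on ALL the data.
w_final = ridge_fit(X, y, best_alpha)
```

Note that `w_final` is never scored here: once it has seen all the data there is nothing left to test it on, which is exactly the caveat in the answer.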
|
40,713
|
jenks natural breaks vs k-means
|
The Jenks natural breaks algorithm, just like K-means, assigns data to one of K groups such that the within-group distances are minimized. Also just like K-means, one must select K prior to running the algorithm.
However, Jenks and K-means differ in how they minimize within-group distances. Jenks takes advantage of the fact that 1-dimensional data can be sorted, which makes it a faster algorithm for 1-dimensional data. K-means is more general in that it can handle data of any dimension, including dimensions greater than 1, where the data are not sortable.
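The sortability point is easy to see in a sketch: in one dimension with two groups, every candidate clustering is just a cut point in the sorted data, so an exact search is a single linear scan (a toy illustration, not the actual Jenks implementation):

```python
def best_split(values):
    """Exact 2-group 1-D clustering: scan cut points in sorted order,
    minimizing the total within-group sum of squared deviations."""
    xs = sorted(values)

    def sse(group):
        m = sum(group) / len(group)
        return sum((v - m) ** 2 for v in group)

    # Every 2-group clustering of sorted 1-D data is a cut at some index i.
    cuts = ((sse(xs[:i]) + sse(xs[i:]), i) for i in range(1, len(xs)))
    _, i = min(cuts)
    return xs[:i], xs[i:]

left, right = best_split([10, 1, 12, 3, 2, 11])
print(left, right)   # [1, 2, 3] [10, 11, 12]: the break falls between 3 and 10
```

In higher dimensions there is no such ordering to scan, which is why K-means falls back on iterative refinement.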
|
40,714
|
jenks natural breaks vs k-means
|
Previous answers essentially present Jenks as a special case of K-means. However, this source makes an important distinction: K-means solely "searches for minimum distance between data points and the centers of clusters they belong to". Jenks takes this objective and adds a penalty for the proximity between the centers of clusters, and thus it also searches "for maximum difference between cluster centers themselves".
The logic is that, even if two clusters are internally very compact, they may be hard to distinguish when their centers are very close.
Thus, for $n$ data points and $k$ clusters, K-means would minimize $C$:
$$ C = \sum_{j=1}^k \sum_{x \in S_j} dist(x, c_j) $$
where $x$ is a data point in cluster $S_j$ and $c_j$ is the cluster center of cluster $S_j$.
In contrast, the Jenks algorithm would minimize $J$:
$$ J = C - \sum_{j=1}^{k-1} dist(c_{j+1}, c_j)$$
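To make the difference concrete, here are both objectives evaluated on a toy sorted 1-D partition, taking $dist()$ to be the squared Euclidean distance (see the caveats on that choice below):

```python
def centers_and_C(clusters):
    # K-means objective C: summed squared distance of each point
    # to the center of the cluster it belongs to.
    cs = [sum(c) / len(c) for c in clusters]
    C = sum((x - m) ** 2 for c, m in zip(clusters, cs) for x in c)
    return cs, C

clusters = [[1, 2, 3], [10, 11, 12]]   # a sorted 1-D partition
cs, C = centers_and_C(clusters)

# Jenks-style objective J: same C, minus the separation between
# adjacent cluster centers (well-separated centers lower J).
J = C - sum((cs[j + 1] - cs[j]) ** 2 for j in range(len(cs) - 1))
print(C, J)   # C = 4.0, J = 4.0 - 81.0 = -77.0
```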
Two things to note, however:
I am really no expert in clustering algorithms, so confirmations, comments, corrections and edits are welcome.
The source I reference states that $dist()$ computes the Euclidean distance (so, $\sqrt{(d_i - c_j)^2}$), but from everything else I read on K-means it seems that the squared Euclidean distance ($(d_i - c_j)^2$) is what is actually minimized.
Full reference:
Khan, F. (2012). An initial seed selection algorithm for k-means clustering of georeferenced data to improve replicability of cluster assignments for mapping application. Applied Soft Computing Journal, 12(11), 3698–3700. https://doi.org/10.1016/j.asoc.2012.07.021
|
40,715
|
Two broad categories of dimensionality reduction: with and without an explicit mapping function
|
These two categories are sometimes referred to as parametric and non-parametric dimensionality reduction.
Parametric dimensionality reduction yields an explicit mapping $f(x)$, and is called "parametric" because it considers only a specific restricted class of mappings. E.g. PCA can only yield a linear function $f(x)$.
Note that e.g. kernel PCA is a parametric method as well (the choice of kernel defines a class of mappings), even though the function $f(x)$ is "less explicit" than for PCA and can only be written as a sum over all training data points $f(x)=\sum_\mathrm{training\:set} f_i(x)$, thanks to the kernel trick.
In contrast, non-parametric dimensionality reduction is entirely "data-driven", meaning that the mapping $f$ depends on all the data. Consequently, as you say, test data cannot be directly mapped with the mapping learnt on the training data.
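The practical consequence can be sketched with a bare-bones NumPy PCA: the learned mapping $f(x) = (x-\mu)V$ is just a few stored numbers, so it applies directly to points never seen during training — exactly what a non-parametric embedding does not provide:

```python
import numpy as np

rng = np.random.default_rng(0)
X_train = rng.normal(size=(100, 5))
X_test = rng.normal(size=(10, 5))

# "Training" the parametric method: the mapping is fully
# described by the pair (mu, V).
mu = X_train.mean(axis=0)
_, _, Vt = np.linalg.svd(X_train - mu, full_matrices=False)
V = Vt[:2].T                        # top-2 principal directions, shape (5, 2)

f = lambda X: (X - mu) @ V          # the explicit mapping f(x)

Z_train = f(X_train)                # embeds the training set...
Z_test = f(X_test)                  # ...and out-of-sample points alike
print(Z_train.shape, Z_test.shape)  # (100, 2) (10, 2)
```

A method like t-SNE, by contrast, only outputs coordinates for the points it was fit on; there is no `f` to apply to `X_test`, which is what the out-of-sample extensions below try to remedy.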
In recent years, there have been some developments on extending non-parametric dimensionality reduction methods so that they can handle test (also called "out-of-sample") data. I am not at all familiar with this literature, but I will give a couple of links that seem relevant. The first paper explicitly discusses $k$-nearest-neighbours classifiers used with [supervised analogues of] Isomap/t-SNE, as requested in your bonus question.
Bunte et al. 2011, A general framework for dimensionality reducing
data visualization mapping
In recent years a wealth of dimension reduction techniques for data visualization and preprocessing has been established. Non-parametric methods require
additional effort for out-of-sample extensions, because they just provide a mapping of a given finite set of points. In this contribution we propose a general view
on non-parametric dimension reduction based on the concept of cost functions and
properties of the data. Based on this general principle we transfer non-parametric
dimension reduction to explicit mappings of the data manifold such that direct out-of-sample extensions become possible.
Gisbrecht et al. 2012, Out-of-Sample Kernel Extensions for
Nonparametric Dimensionality Reduction
Nonparametric dimensionality reduction (DR) techniques
such as locally linear embedding or t-distributed stochastic neighbor (t-SNE) embedding constitute standard tools to visualize high dimensional
and complex data in the Euclidean plane. With increasing data volumes
and streaming applications, it is often no longer possible to project all data
points at once. Rather, out-of-sample extensions (OOS) derived from a
small subset of all data points are used. In this contribution, we propose
a kernel mapping for OOS in contrast to direct techniques based on the
DR method. This can be trained based on a given example set, or it
can be trained indirectly based on the cost function of the DR technique.
Considering t-SNE as an example and several benchmarks, we show that
a kernel mapping outperforms direct OOS as provided by t-SNE.
There is also an older paper, Bengio et al. 2004, Out-of-Sample Extensions for LLE, Isomap, MDS, Eigenmaps, and Spectral Clustering -- which is apparently less of a general framework, but I cannot comment on what specifically is the difference between it and the 2011-2012 papers linked above. Any comments on that are very welcome.
Here is one figure from Bunte et al. to attract attention:
|
40,716
|
Interpretation of coefficients of glmnet - LASSO/Cox model?
|
The LASSO fit does not carry information on statistical significance.
The coefficients should have a roughly similar interpretation as in a standard Cox model, that is, as log hazard ratios. Positive coefficients indicate that a variable is associated with higher risk of an event, and vice versa for negative coefficients. How important the effects shown are depends on what the variables stand for and on subject knowledge.
Depending on the distribution of these variables you could also consider scaling them to unit variance before fitting the LASSO, which would produce standardised coefficients as a measure of relative variable importance.
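The scaling step is just dividing each column by its standard deviation; the coefficients then measure per-standard-deviation effects and are comparable across variables. A sketch with plain NumPy and an unpenalized least-squares fit (Cox/LASSO fitting is not reproduced here; for an unpenalized fit the rescaling is exactly invertible, whereas with the LASSO penalty it changes which variables get selected, which is why it must be done before fitting):

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3)) * np.array([1.0, 10.0, 0.1])  # very different scales
y = X @ np.array([0.5, 0.02, 3.0]) + rng.normal(size=200)

# Scale each covariate to unit (sample) variance before fitting.
sd = X.std(axis=0, ddof=1)
X_std = X / sd

b_raw, *_ = np.linalg.lstsq(X, y, rcond=None)
b_std, *_ = np.linalg.lstsq(X_std, y, rcond=None)

# Standardised coefficients equal the raw ones times each covariate's SD,
# i.e. they give the effect of a one-SD change in the covariate.
print(b_std, b_raw * sd)
```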
|
40,717
|
Simulate rare event data using logistic regression with correlated covariates in R
|
Let the variables be $X_1, \ldots, X_4$. Due to the flexibility in choosing coefficients in the regression model, we will lose no generality by assuming they all have zero means. Write the covariance matrix as $\Sigma$. Let the logistic regression coefficients be $\beta_0, \beta_1, \ldots, \beta_4$. This means the model is
$$\text{logit}(\Pr(Y=1)) = \beta_0 + \beta_1 X_1 + \cdots + \beta_4 X_4.$$
The right hand side, as an affine combination $X$ of components of a multivariate Normal vector, thereby has a Normal distribution. All we need to know are its mean, which equals $\beta_0$, and its variance, which is
$$\text{Var}(\beta_0 + \beta_1 X_1 + \cdots + \beta_4 X_4) = \beta\Sigma \beta^\prime,$$
with $\beta=(\beta_1, \beta_2, \beta_3, \beta_4)$. For future reference let's call the right hand side $\sigma^2$.
The unconditional probability $\Pr(Y=1)$ is the expectation of $\Pr(Y=1|X)$. To find this, re-express the preceding result in terms of the probability directly:
$$\Pr(Y=1) = \frac{1}{1 + \exp(-\beta_0 - \sigma Z)}$$
where $Z$ has a standard Normal distribution. Obtain the expectation by integrating against the density $\exp(-z^2/2)/\sqrt{2\pi}$:
$$\mathbb{E}(\Pr(Y=1|X)) = \frac{1}{\sqrt{2\pi}}\int_\mathbb{R} \left(\frac{\exp(-z^2/2)}{1 + \exp(-\beta_0 - \sigma z)}\right)dz.$$
This has to be solved numerically for $\sigma$ and $\beta_0$, giving a one-dimensional manifold of solutions. A simple way to obtain a solution is to set $\sigma=1$ and solve numerically for $\beta_0$. Alternatively, choose $\beta_1, \ldots, \beta_4$ any way you wish, thereby determining $\sigma$, and then solve for $\beta_0$. (The advantage of setting $\sigma$ to a standard value, such as unity, is that you can tabulate values of $\beta_0$ for a wide range of possible values of $\Pr(Y=1)$, once and for all, obviating any need to incorporate the numerical integration and root-finding code in every application.)
This figure plots $\Pr(Y=1)$ as a function of $\beta_0$, with $\sigma=1$. Because it rises continuously and monotonically from $0$ (when $\beta_0\to-\infty$) to $1$ (when $\beta_0\to+\infty$), every possible value of $\Pr(Y=1)$ can be realized by a unique corresponding value of $\beta_0$.
The solution for $\Pr(Y=1)=0.05$ is approximately $\beta_0=-3.37$, where the graph (in the figure) attains a height of $0.05$ (as shown with a dotted line).
This provides infinitely many solutions, because $\beta_1, \ldots, \beta_4$ can be anything you like: selecting them determines $\sigma$, from which $\beta_0$ can be computed.
The correctness and accuracy of this approach are supported by an R-based simulation. It generates $\beta_1,\ldots,\beta_4$ randomly and uses the previously computed value of $\beta_0$ to construct a dataset of 100,000 records. At the end it displays the correlation matrix of the independent variables (to check it has the desired coefficients) and it outputs the mean of the dependent variable to see whether it exhibits $5\%$ positive results overall (at least up to chance variation).
Repeated runs of this simulation (which thereby involve all new variable values as well as a new model in each iteration) consistently produce proportions of positive results between $4.8\%$ and $5.2\%$. In 100 iterations (starting with a seed of $17$), the average proportion was $4.99\%$.
library(MASS) # exports mvrnorm()
#
# Describe the independent variables.
#
mu <- c(0,0,0,0)
Sigma <- cbind(c(10,6,5,6), c(6,10,5,3.5), c(5,5,10,3), c(6,3.5,3,10))/10
#
# Simulate a dataset in which Pr(Y=1) is 5%.
#
beta.0 <- -3.37154 # Corresponds to 5%.
n.obs <- 1e5
x <- mvrnorm(n.obs, mu, Sigma)
beta <- rnorm(4) # Can be anything (nonzero)!
sigma2 <- c(beta %*% Sigma %*% beta) # Drop the 1x1 matrix to a scalar
beta <- beta / sqrt(sigma2)          # Rescale so beta' Sigma beta = 1, i.e. sigma = 1
y <- runif(n.obs) < 1 / (1 + exp(-beta.0 - x %*% beta))
#
# Confirm that the independent variables have the desired correlation
# and the dependent variable has the desired proportion of true responses.
#
round(cor(x), 2)
mean(y)
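The numerical integrate-and-root-find step described above (with $\sigma=1$) can be reproduced in a few lines; a trapezoid rule plus bisection recovers the $\beta_0 \approx -3.3715$ used in the R code:

```python
import numpy as np

def p_positive(beta0, sigma=1.0):
    # E[Pr(Y=1|X)]: integrate the inverse logit against the standard
    # normal density on a fine grid (simple trapezoid rule).
    z = np.linspace(-8.0, 8.0, 4001)
    f = np.exp(-z**2 / 2) / np.sqrt(2 * np.pi) / (1 + np.exp(-beta0 - sigma * z))
    return float(np.sum((f[:-1] + f[1:]) / 2) * (z[1] - z[0]))

# Pr(Y=1) is monotone increasing in beta0, so bisection suffices.
target, lo, hi = 0.05, -10.0, 0.0
for _ in range(60):
    mid = (lo + hi) / 2
    if p_positive(mid) < target:
        lo = mid
    else:
        hi = mid
print(mid)   # ~ -3.3715, matching beta.0 above
```

As the answer suggests, this root-finding only has to be done once per target proportion; the resulting $\beta_0$ can then be reused for any choice of $\beta_1,\ldots,\beta_4$ after rescaling them so that $\sigma=1$.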
|
40,718
|
Why logistic regression cannot be solved by OLS
|
I think this essentially boils down to what cost function you want to minimize in order to estimate your parameter $w$. Typically, the negative log-likelihood is minimized for parameter estimation; what you have suggested looks like minimizing the Brier score. I think they would give very similar estimates for $w$ (edit: see comments).
edit: I should say, it is not an incorrect approach.
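A minimal sketch of that comparison (simulated data; variable names are mine): both criteria are fitted to the same data, and because the Brier score is a proper scoring rule, the two sets of estimates land close together in this correctly specified setting:

```r
# Sketch: fit one simulated logistic dataset by maximum likelihood (glm)
# and by minimizing the Brier score (optim), then compare the estimates.
set.seed(1)
n <- 5000
x <- rnorm(n)
y <- rbinom(n, 1, plogis(0.5 + 1.2 * x))   # true intercept 0.5, slope 1.2

fit.ml <- glm(y ~ x, family = binomial)    # minimizes negative log-likelihood

brier <- function(w) mean((y - plogis(w[1] + w[2] * x))^2)
fit.brier <- optim(c(0, 0), brier)         # minimizes the Brier score

round(coef(fit.ml), 3)
round(fit.brier$par, 3)   # close to the ML estimates here
```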
|
40,719
|
Why logistic regression cannot be solved by OLS
|
What you are proposing is a linear probability model, i.e. an OLS regression for a binary dependent variable. The difference is that logit is a non-linear model whereas the linear probability model (as the name says) is linear. The difference is perhaps best understood graphically.
If you calculate the marginal effect of your logistic regression coefficients at the mean, you will likely get estimates very similar to those from the OLS regression (in the graph that would be where the blue and the red line intersect, or at least close to it). The picture also shows nicely the problems of OLS in this case: you can see that it predicts outside the theoretical range, so it can give you predicted probabilities that are larger than one or smaller than zero. There are other advantages and disadvantages of either model (see for example these lecture notes for a summary).
In this sense there is nothing "wrong" with your approach. It just really depends on what you want to do with your model. If you are interested in estimating the marginal effect of your explanatory variables on the outcome probability, then either is fine. If you want to do predictions, then the linear probability model is not a good choice, given that its predicted probabilities are not bound to lie between zero and one.
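A small simulated sketch (variable names are mine) of the out-of-range problem:

```r
# Sketch: OLS on a binary outcome (linear probability model) vs logit;
# the OLS fitted values can escape the [0, 1] range.
set.seed(42)
x <- rnorm(500)
y <- rbinom(500, 1, plogis(-1 + 2 * x))

lpm   <- lm(y ~ x)                      # linear probability model
logit <- glm(y ~ x, family = binomial)  # logistic regression

range(fitted(lpm))    # typically extends below 0 and/or above 1 here
range(fitted(logit))  # always strictly inside (0, 1)
```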
|
40,720
|
Calculate the F1 score of precision and recall in R
|
The F1 score ranges from 0 to 1; you're right.
Keep in mind how precision and recall are calculated:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
with TP = true positives, FP = false positives and FN = false negatives. Therefore, precision is the fraction of retrieved instances that are relevant, while recall is the fraction of relevant instances that are retrieved (as Wikipedia puts it).
Given that definition, precision and recall are both proportions (ranging from 0 to 1, i.e. 0% to 100%). Therefore you should go with your XXX1 versions, because your Precision (without the 1) does not satisfy that criterion. But please note that your recall, with values lower than 0.02, is extremely low.
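A tiny sketch of those formulas with made-up confusion-matrix counts:

```r
# Sketch: precision, recall and F1 from (made-up) confusion-matrix counts
TP <- 40; FP <- 10; FN <- 25

precision <- TP / (TP + FP)                          # 0.8
recall    <- TP / (TP + FN)                          # ~0.615
f1 <- 2 * precision * recall / (precision + recall)  # harmonic mean, ~0.696

round(c(precision = precision, recall = recall, F1 = f1), 3)
```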
|
40,721
|
Calculate the F1 score of precision and recall in R
|
You need to take the numbers between 0 and 1, not the percent values. Please check the syntax, however, as I think there is an error hidden. Precision and Recall are two vectors. You are computing sum(Precision, Recall) where I think you should compute Precision + Recall. Note that these are not the same in R. The sum function will add all the values in both vectors into one large number, whilst the + will add element-wise:
> a <- c(1, 1, 1, 1)
> b <- c(1, 1, 1, 1)
> sum(a,b)
[1] 8
> a+b
[1] 2 2 2 2
The more Precision/Recall pairs you have, the smaller the result of your function (using sum) will get, as the denominator keeps growing.
To come back to your example data, that would be:
Precision1 <- c(0.5454, 0.6000, 0.9130, 0.9523)
Recall1 <- c(0.0002, 0.0210, 0.0018, 0.0530)
Fscore_rev <- 2 * Precision1 * Recall1 / (Precision1 + Recall1)
and yield
> round(Fscore_rev, 4)
[1] 0.0004 0.0406 0.0036 0.1004
|
40,722
|
Calculate the F1 score of precision and recall in R
|
By definition, Precision and Recall should range from 0 to 1.
Use the decimal representation of Precision and Recall:
Precision <- c(0.5454, 0.6000, 0.9130, 0.9523)
Recall <- c(0.0002, 0.0210, 0.0018, 0.0530)
numerator <- 2*Precision*Recall
print(numerator)
[1] 0.00021816 0.02520000 0.00328680 0.10094380
denominator <- (Precision + Recall)
print(denominator)
[1] 0.5456 0.6210 0.9148 1.0053
Fscore <- numerator/denominator
The answer is:
print( Fscore)
[1] 0.0003998534 0.0405797101 0.0035929165 0.1004116184
|
40,723
|
Reporting Results of Mann-Whitney U Test - Means vs Medians
|
The location-difference measure that the Mann-Whitney 'sees' is neither the difference in means nor the difference in medians -- it's the median of cross-group pairwise differences (the between-samples quantity is the relevant estimate of the corresponding measure between populations).
See the end of this section of the wikipedia article on the Mann-Whitney (just above the section headed "Calculations").
The additional assumptions most typically required to make either the difference of means or the difference of medians reasonable (identity of distribution shapes is sufficient and is a commonly added assumption) immediately make the other equally reasonable (at least assuming means are finite). So either neither will be correct, or both should be good.
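As a small R sketch (simulated data; names are mine): `wilcox.test()` with `conf.int = TRUE` reports exactly this quantity, the Hodges-Lehmann estimate, which can be checked by hand:

```r
# Sketch: the "difference in location" reported by wilcox.test() is the
# median of all cross-group pairwise differences (Hodges-Lehmann estimate).
set.seed(7)
x <- rexp(30)          # skewed sample 1
y <- rexp(40) + 0.5    # skewed sample 2, shifted

wt <- wilcox.test(x, y, conf.int = TRUE)
hl <- median(outer(x, y, "-"))    # the same quantity, computed by hand

c(reported = unname(wt$estimate), by.hand = hl)
```

The reported estimate matches the by-hand median of pairwise differences, and in skewed samples like these it will generally equal neither `mean(x) - mean(y)` nor `median(x) - median(y)`.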
|
40,724
|
Reporting Results of Mann-Whitney U Test - Means vs Medians
|
Eric, I don't know if you solved your problem, but I think the asterisk approach is OK. If you think the means represent the data's behavior better, then just use them. Check the CV% as well; it can give a perspective on the dispersion, which can be important.
|
40,725
|
How to simulate effectiveness of treatment in R?
|
If you want to simulate one random cell (under independence) with fixed margins, that's effectively hypergeometric sampling, which we can apply recursively, so one approach is:
pick one cell;
given the margins, that cell has a hypergeometric distribution, so simulate from that hypergeometric;
once you have that value, it constrains the possible values of the other cells, which can be generated in turn, each conditional on all previous values.
In the case of $3\times 2$ tables such as yours (or $k\times 2$ tables more generally), you need only simulate two ($k-1$) values, and the rest are determined. If you look at the (1,1) cell you can treat the situation as $2\times 2$ (by combining the remaining row categories) and so generate the (1,1) cell; then (1,2) is determined. After subtraction of the first row from the column totals you're then left with a $2\times 2$ (more generally $(k-1)\times 2$) table for the lower rows which is then done in the same fashion.
[Note: gung suggests a simpler-to-understand and (in some cases), perhaps faster approach to simulation with fixed margins in the comments, and gives some code in his answer.]
In R, you can just use r2dtable; it uses Patefield's algorithm[1].
[1]: Patefield, W. M. (1981),
"Algorithm AS159. An efficient method of generating r x c tables with given row and column totals,"
Applied Statistics 30, 91–97.
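A quick sketch of r2dtable, using the margins of the table in question (row totals 178, 179, 176; column totals 416, 117):

```r
# Sketch: simulate 3 x 2 tables under independence with both margins fixed,
# via r2dtable() (Patefield's algorithm).
set.seed(1)
tabs <- r2dtable(2, r = c(178, 179, 176), c = c(416, 117))
tabs                   # a list of two simulated tables
sapply(tabs, rowSums)  # each column reproduces the fixed row totals
```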
|
40,726
|
How to simulate effectiveness of treatment in R?
|
(It isn't clear if you want neither, only a particular one, or both marginal totals fixed. @Glen_b has provided a simulation algorithm that is based on having both marginal totals fixed; below, I provide algorithms for all three possibilities.)
The assumption of independence means that the cell probabilities are equal to the product of the probability an observation will occur within a given row times the probability that the observation will occur within the given column.
Neither margin is fixed:
Using the values from your contingency table, the following code will simulate the null. Note, however, that the exact number of, say, "yes" observations in each iteration will not necessarily equal 117. Nonetheless, the probability that an observation will be in the Placebo Gum row AND in the "yes" column is equal to the product of the row probability times the column probability, which is the definition of independence. (Note further that to get a simple, single simulated table, just set B = 1.)
N = 533 # this is the total number of observations in your table
r.ns = c(178, 179, 176) # these are the row counts
c.ns = c(416, 117) # these are the column counts
r.ps = r.ns/N # these are the row probabilities
c.ps = c.ns/N # these are the column probabilities
probs = r.ps%o%c.ps # these are the cell probabilities under independence
probs
# [,1] [,2]
# [1,] 0.2606507 0.07330801
# [2,] 0.2621150 0.07371986
# [3,] 0.2577221 0.07248433
probs.v = as.vector(probs) # notice that the probabilities read column-wise
probs.v
# [1] 0.26065071 0.26211504 0.25772205 0.07330801 0.07371986 0.07248433
cuts = c(0, cumsum(probs.v)) # notice that I add a 0 on the front
cuts
# [1] 0.0000000 0.2606507 0.5227658 0.7804878 0.8537958 0.9275157 1.0000000
set.seed(4847) # this makes the example exactly reproducible
B = 10000 # number of iterations in simulation
vals = runif(N*B) # generate random values / probabilities
# cut the random uniform values into cell categories:
cats = cut(vals, breaks=cuts, labels=c("11","21","31","12","22","32"))
# this reforms the single N*B vector into a matrix of N obs by B iterations:
cats = matrix(cats, nrow=N, ncol=B, byrow=F)
# here we get the number of observations w/i each cell for each iteration:
counts = apply(cats, 2, function(x){ as.vector(table(x)) })
From here, if you only made a single table (by having set B = 1), and just wanted to see it, you could use:
matrix(counts, nrow=3, ncol=2, byrow=T) # if B had been 1
# [,1] [,2]
# [1,] 137 36
# [2,] 125 47
# [3,] 148 40
To perform a full simulation of the null, make sure B was some large number and use:
# some clean up of the workspace:
rm(N, r.ns, c.ns, r.ps, c.ps, vals, probs, probs.v, cuts, cats)
p.vals = vector(length=B) # this will store the outputs
for(i in 1:B){
# put the counts into the form that chisq.test() needs:
mat = matrix(counts[,i], nrow=3, ncol=2, byrow=T)
p.vals[i] = chisq.test(mat)$p.value # here we store the p values
}
mean(p.vals<.05) # we have 5% type I errors, as appropriate:
# [1] 0.0475
Only the row totals are fixed:
In this case, you will always have, say, exactly 178 observations in the Placebo Gum treatment. Nonetheless, the probability of being in the "yes" column will always be the same:
prob.yes = 117/533 # this is the probability of 'yes' under all 3 treatments
set.seed(192) # this makes the example exactly reproducible
PG = rbinom(n=178, size=1, prob=prob.yes) # these each generate a vector of 'yes'es &
XG = rbinom(n=179, size=1, prob=prob.yes) # 'no's with the fixed row totals & the
XL = rbinom(n=176, size=1, prob=prob.yes) # constant probability of success
raw.observations = rbind(cbind("PG", PG), # here I just make the dataset
cbind("XG", XG),
cbind("XL", XL) )
table(raw.observations[,1], raw.observations[,2])
# 0 1
# PG 142 36
# XG 133 46
# XL 140 36
Both margins are fixed:
In this case, you will always have, say, exactly 178 observations in the Placebo Gum treatment and, say, exactly 117 observations in the "yes" column. Nonetheless, the probability of being in the Placebo Gum row AND in the "yes" column is equal to the product of the row probability times the column probability:
X = rbind(cbind(rep("PG",129), rep("no", 129)), # this just re-creates your table
cbind(rep("XG",150), rep("no", 150)),
cbind(rep("XL",137), rep("no", 137)),
cbind(rep("PG", 49), rep("yes", 49)),
cbind(rep("XG", 29), rep("yes", 29)),
cbind(rep("XL", 39), rep("yes", 39)) )
table(X[,1],X[,2])
# no yes
# PG 129 49
# XG 150 29
# XL 137 39
set.seed(6875) # this makes the simulation exactly reproducible
# the sample() call is the key element:
X.perm = cbind(X[,1], sample(X[,2], nrow(X), replace=F))
table(X.perm[,1], X.perm[,2])
# no yes
# PG 140 38
# XG 140 39
# XL 136 40
|
How to simulate effectiveness of treatment in R?
|
(It isn't clear if you want neither, only a particular one, or both marginal totals fixed. @Glen_b has provided a simulation algorithm that is based on having both marginal totals fixed; below, I pro
|
How to simulate effectiveness of treatment in R?
(It isn't clear if you want neither, only a particular one, or both marginal totals fixed. @Glen_b has provided a simulation algorithm that is based on having both marginal totals fixed; below, I provide algorithms for all three possibilities.)
The assumption of independence means that the cell probabilities are equal to the product of the probability an observation will occur within a given row times the probability that the observation will occur within the given column.
Neither margin is fixed:
Using the values from your contingency table, the following code will simulate the null. Note, however, that the exact number of, say, "yes" observations in each iteration will not necessarily equal 117. Nonetheless, the probability that an observation will be in the Placebo Gum row AND being in the "yes" column is equal to the product of the row probability times the column probability, which is the definition of independence. (Note further that to get a simple, single simulated table, just set B = 1.)
N = 533 # this is the total number of observations in your table
r.ns = c(178, 179, 176) # these are the row counts
c.ns = c(416, 117) # these are the column counts
r.ps = r.ns/N # these are the row probabilities
c.ps = c.ns/N # these are the column probabilities
probs = r.ps%o%c.ps # these are the cell probabilities under independence
probs
# [,1] [,2]
# [1,] 0.2606507 0.07330801
# [2,] 0.2621150 0.07371986
# [3,] 0.2577221 0.07248433
probs.v = as.vector(probs) # notice that the probabilities read column-wise
probs.v
# [1] 0.26065071 0.26211504 0.25772205 0.07330801 0.07371986 0.07248433
cuts = c(0, cumsum(probs.v)) # notice that I add a 0 on the front
cuts
# [1] 0.0000000 0.2606507 0.5227658 0.7804878 0.8537958 0.9275157 1.0000000
set.seed(4847) # this makes the example exactly reproducible
B = 10000 # number of iterations in simulation
vals = runif(N*B) # generate random values / probabilities
# cut the random uniform values into cell categories:
cats = cut(vals, breaks=cuts, labels=c("11","21","31","12","22","32"))
# this reforms the single N*B vector into a matrix of N obs by B iterations:
cats = matrix(cats, nrow=N, ncol=B, byrow=F)
# here we get the number of observations w/i each cell for each iteration:
counts = apply(cats, 2, function(x){ as.vector(table(x)) })
From here, if you only made a single table (by having set B = 1), and just wanted to see it, you could use:
matrix(counts, nrow=3, ncol=2, byrow=T) # if B had been 1
# [,1] [,2]
# [1,] 137 36
# [2,] 125 47
# [3,] 148 40
To perform a full simulation of the null, make sure B was some large number and use:
# some clean up of the workspace:
rm(N, r.ns, c.ns, r.ps, c.ps, vals, probs, probs.v, cuts, cats)
p.vals = vector(length=B) # this will store the outputs
for(i in 1:B){
# put the counts into the form that chisq.test() needs:
mat = matrix(counts[,i], nrow=3, ncol=2, byrow=T)
p.vals[i] = chisq.test(mat)$p.value # here we store the p values
}
mean(p.vals<.05) # we have 5% type I errors, as appropriate:
# [1] 0.0475
Only the row totals are fixed:
In this case, you will always have, say, exactly 179 observations in the Placebo Gum treatment. Nonetheless, the probability of being in the "yes" column will always be the same:
prob.yes = 117/533 # this is the probability of 'yes' under all 3 treatments
set.seed(192) # this makes the example exactly reproducible
PG = rbinom(n=178, size=1, prob=prob.yes) # these each generate a vector of 'yes'es &
XG = rbinom(n=179, size=1, prob=prob.yes) # 'no's with the fixed row totals & the
XL = rbinom(n=176, size=1, prob=prob.yes) # constant probability of success
raw.observations = rbind(cbind("PG", PG), # here I just make the dataset
cbind("XG", XG),
cbind("XL", XL) )
table(raw.observations[,1], raw.observations[,2])
# 0 1
# PG 142 36
# XG 133 46
# XL 140 36
Both margins are fixed:
In this case, you will always have, say, exactly 179 observations in the Placebo Gum treatment, and, say, exactly 117 observations in the "yes" column. Nonetheless, the probability of being in the Placebo Gum row AND being in the "yes" column is equal to the product of the row probability times the column probability:
X = rbind(cbind(rep("PG",129), rep("no", 129)), # this just re-creates your table
cbind(rep("XG",150), rep("no", 150)),
cbind(rep("XL",137), rep("no", 137)),
cbind(rep("PG", 49), rep("yes", 49)),
cbind(rep("XG", 29), rep("yes", 29)),
cbind(rep("XL", 39), rep("yes", 39)) )
table(X[,1],X[,2])
# no yes
# PG 129 49
# XG 150 29
# XL 137 39
set.seed(6875) # this makes the simulation exactly reproducible
# the sample() call is the key element:
X.perm = cbind(X[,1], sample(X[,2], nrow(X), replace=F))
table(X.perm[,1], X.perm[,2])
# no yes
# PG 140 38
# XG 140 39
# XL 136 40
|
How to simulate effectiveness of treatment in R?
(It isn't clear if you want neither, only a particular one, or both marginal totals fixed. @Glen_b has provided a simulation algorithm that is based on having both marginal totals fixed; below, I pro
|
40,727
|
How to simulate a system where "the failure probability per week is 3.5%"?
|
First consider the case we have just one machine. We will have to make some assumptions, and one common and simple one is to model the failure time as exponentially distributed. This means that the failure rate is constant (the probability of failure between time t and t+1, given survival up to time t is constant for all t) (see wiki for more info).
The time to failure, let's denote it by $T \sim \mathrm{Exp}(\lambda)$, where time is measured in days. We first need to find $\lambda$, and since we know the probability of failure in one week is 3.5%, we get:
$P(T < 7) = 1-e^{-7 \lambda} = 0.035$.
Working this out, we get $\lambda = -\ln(0.965)/7 \approx 0.00509$ per day.
Now to simulate the failures of $N$ machines as time progresses, we can draw $N$ samples from an exponential distribution with rate $\lambda \approx 0.00509$. This will give you the failure times of the $N$ machines in days.
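A short sketch of that recipe in R (the fleet size N is made up); deriving the rate directly from the weekly failure probability avoids any rounding:

```r
# Sketch: simulate failure times (in days) for N machines, with the daily
# rate lambda derived from the 3.5% weekly failure probability.
set.seed(123)
lambda <- -log(1 - 0.035) / 7      # solves 1 - exp(-7 * lambda) = 0.035
N <- 10000
failure.days <- rexp(N, rate = lambda)

mean(failure.days < 7)             # close to 0.035, as calibrated
```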
|
40,728
|
How to simulate a system where "the failure probability per week is 3.5%"?
|
It seems like you don't need information more frequently than daily, so the most obvious approach would be to compute the distribution of the number of failures per day, and then simulate from that.
Alternatively, you can simulate the exponential inter-event times and go from that. This gives intra-day precision if you need information at that level.
"how many failure will happen in 13 days?"
You can work this out without simulation. To do it with the daily simulation, you could simulate sets of 13 days many times and keep the simulated distribution of values.
This is very easy in R.
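One way to sketch the 13-day question in R (parameter names and the fleet size are mine; machines assumed independent with a constant hazard calibrated to 3.5% failures per week):

```r
# Sketch: number of machines (out of N) failing within 13 days, assuming
# independence and a constant hazard matching 3.5% failures per week.
set.seed(1)
N <- 300                                  # made-up fleet size
p.13d <- 1 - (1 - 0.035)^(13/7)           # P(a machine fails within 13 days)
B <- 10000                                # simulation runs
fails <- rbinom(B, size = N, prob = p.13d)

mean(fails)                               # simulated mean count
N * p.13d                                 # exact expectation, for comparison
quantile(fails, c(0.025, 0.975))          # a 95% interval for the count
```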
|
40,729
|
Understanding multiple KS tests
|
The Kolmogorov-Smirnov statistic uses a fairly generic measure of nonuniformity - it's not particularly sensitive to every way in which a distribution may be non-uniform. In particular, it's not especially sensitive to the particular kind of nonuniformity you're looking at.
The KS-test statistic looks at the maximum distance between cdf and ecdf.
to acknowledge that the p-value distribution is not uniform with 0.99 confidence
That's not how hypothesis tests work. You don't have "0.99 confidence". I presume you mean you're doing your test at $\alpha=0.01$.
At $n=100$, the $1\%$ critical value is $0.163$.
Each small value you put in moves the ecdf near 0 up by about 0.01 (and by about half that distance near 0.5 if the distribution is close to uniform). If the ecdf was previously very close to uniform you might expect it to take about 16 values to reach that critical value.
However, in practice it takes less than 16 because of the natural random variation in the rest of a typical sample; it wiggles about a uniform:
The left side is an ECDF of a sample of 100 values from an actual uniform. There's some deviation in the center due to random variation, but nowhere near large enough to reach the 1% significance level. The right side is an ECDF of the same sample where in addition the first 11 values (not the smallest 11, just 11 values from the start of the sample) were replaced by exactly 0*. In this case that's more than enough to pass the 1% critical value of the statistic. (In this case, fewer than 11 would be sufficient, but typically it takes a little more than 11.)
*(which, given even a single instance of such a value, some other tests would identify as non-uniformity without difficulty)
So if you want to make something that is close to uniform look non-uniform to a KS-test by inserting small values, you would need to insert a lot of them. If you want a test specifically sensitive to "too many very small values", there are a number of better choices than the KS test for that. The Anderson-Darling test would be an example of a test that's more sensitive to the specific kind of deviation you're constructing here.
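As a rough illustration of the back-of-envelope count above, here is a pure-Python sketch (the sample and the number of inserted zeros are made up for illustration): it computes the one-sample KS statistic against Uniform(0,1) for a clean sample of 100 and for the same sample with its first 16 values replaced by exactly 0.

```python
import random

def ks_stat_uniform(sample):
    """One-sample KS statistic D_n against the Uniform(0,1) CDF F(x) = x."""
    xs = sorted(sample)
    n = len(xs)
    return max(max((i + 1) / n - x, x - i / n) for i, x in enumerate(xs))

random.seed(7)
n, k = 100, 16
sample = [random.random() for _ in range(n)]

d_clean = ks_stat_uniform(sample)

# Replace the first k values (not the k smallest) by exactly 0:
# the ECDF now jumps to k/n at zero, so D is at least k/n = 0.16,
# already near the 1% critical value of about 0.163 quoted above.
contaminated = [0.0] * k + sample[k:]
d_contam = ks_stat_uniform(contaminated)
print(d_clean, d_contam)
```

The contaminated statistic is guaranteed to be at least k/n from the jump at zero alone; whether it crosses the critical value also depends on the random wiggle of the rest of the sample, which is exactly the point made above.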
|
Understanding multiple KS tests
|
The Kolmogorov Smirnov statistic uses a fairly generic measure of nonuniformity - it's not particularly sensitive to every way in which a distribution may be non-uniform. In particular, it's not espec
|
Understanding multiple KS tests
The Kolmogorov Smirnov statistic uses a fairly generic measure of nonuniformity - it's not particularly sensitive to every way in which a distribution may be non-uniform. In particular, it's not especially sensitive to the particular kind of nonuniformity you're looking at.
The KS-test statistic looks at the maximum distance between cdf and ecdf.
to acknowledge that the p-value distribution is not uniform with 0.99 confidence
That's not how hypothesis tests work. You don't have "0.99 confidence". I presume you mean you're doing your test at $\alpha=0.01$.
At $n=100$, the $1\%$ critical value is $0.163$.
Each small value you put in moves the ecdf near 0 up by about 0.01 (and by about half that distance near 0.5 if the distribution is close to uniform). If the ecdf was previously very close to uniform you might expect it to take about 16 values to reach that critical value.
However, in practice it takes less than 16 because of the natural random variation in the rest of a typical sample; it wiggles about a uniform:
The left side is an ECDF of a sample of 100 values from an actual uniform. There's some deviation in the center due to random variation, but nowhere near large enough to reach the 1% significance level. The right side is an ECDF of the same sample where in addition the first 11 values (not the smallest 11, just 11 values from the start of the sample) were replaced by exactly 0*. In this case that's more than enough to pass the 1% critical value of the statistic. (In this case, fewer than 11 would be sufficient, but typically it takes a little more than 11.)
*(which, given even a single instance of such a value, some other tests would identify as non-uniformity without difficulty)
So if you want to make something that is close to uniform look non-uniform to a KS-test by inserting small values, you would need to insert a lot of them. If you want a test specifically sensitive to "too many very small values", there are a number of better choices than the KS test for that. The Anderson-Darling test would be an example of a test that's more sensitive to the specific kind of deviation you're constructing here.
|
Understanding multiple KS tests
The Kolmogorov Smirnov statistic uses a fairly generic measure of nonuniformity - it's not particularly sensitive to every way in which a distribution may be non-uniform. In particular, it's not espec
|
40,730
|
What is the best visualization for Cramér's V?
|
My default for this situation is to use a mosaic plot. I'll admit this is in part because they are convenient to make in R. One possible drawback of mosaic plots is that they are not symmetrical. It is clearly the case that one variable is the 'independent-ish' variable and the other is the 'dependent-esque' variable. So mosaic plots are a great choice for data that might be analyzed with logistic regression, for example. But if you are thinking about Cramer's V purely as a measure of association, it isn't quite as good. Another option would be a sieve plot, but I find them ugly. I think the nicest option is what seems to be called a dynamic pressure plot. I have an example in my question here: Alternative to sieve / mosaic plots for contingency tables, and @Glen_b works up a couple examples in his answer here: Graph for relationship between two ordinal variables.
|
What is the best visualization for Cramér's V?
|
My default for this situation is to use a mosaic plot. I'll admit this is in part because they are convenient to make in R. One possible drawback of mosaic plots is that they are not symmetrical. I
|
What is the best visualization for Cramér's V?
My default for this situation is to use a mosaic plot. I'll admit this is in part because they are convenient to make in R. One possible drawback of mosaic plots is that they are not symmetrical. It is clearly the case that one variable is the 'independent-ish' variable and the other is the 'dependent-esque' variable. So mosaic plots are a great choice for data that might be analyzed with logistic regression, for example. But if you are thinking about Cramer's V purely as a measure of association, it isn't quite as good. Another option would be a sieve plot, but I find them ugly. I think the nicest option is what seems to be called a dynamic pressure plot. I have an example in my question here: Alternative to sieve / mosaic plots for contingency tables, and @Glen_b works up a couple examples in his answer here: Graph for relationship between two ordinal variables.
|
What is the best visualization for Cramér's V?
My default for this situation is to use a mosaic plot. I'll admit this is in part because they are convenient to make in R. One possible drawback of mosaic plots is that they are not symmetrical. I
|
40,731
|
Expected value of a product of two compound Poisson processes
|
Use the tower property of conditional expectations.
\begin{eqnarray}
\mathbb{E}\left[Y^a Y^b\right]
&=& \mathbb{E}\left[\mathbb{E}\left[Y^a Y^b\ |\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\mathbb{E}\left[\left(\sum_{i=1}^{N_a}X_i^a\right)\left(\sum_{i=1}^{N_b}X_i^b\right) \Bigg|\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b}X_i^aX_j^b\ \Bigg|\ N_a, N_b \right]\right]
\\
&\overset{1}{=}& \mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b} \mathbb{E}\left[ X_i^aX_j^b\ \right]\right]
\\
&\overset{2}{=}& \mathbb{E}\left[ N_aN_b\left(\text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)\right]
\\
&=&\left(\text{Cov}(N^a, N^b) + \mathbb{E}\left[N^a\right]\mathbb{E}[N^b]\right)\left(\text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)
\end{eqnarray}
In step 1 the (finite) sums are moved outside the expectation and the conditioning is dropped, using that the jump sizes are independent of the counts. In step 2 we use $\mathbb{E}\left[X_i^aX_j^b\right] = \text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]$ for every pair $(i,j)$, so the double sum contributes $N_aN_b$ identical terms; the final line applies the same identity to $\mathbb{E}\left[N_aN_b\right]$.
|
Expected value of a product of two compound Poisson processes
|
Use the tower property of conditional expectations.
\begin{eqnarray}
\mathbb{E}\left[Y^a Y^b\right]
&=& \mathbb{E}\left[\mathbb{E}\left[Y^a Y^b\ |\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\ma
|
Expected value of a product of two compound Poisson processes
Use the tower property of conditional expectations.
\begin{eqnarray}
\mathbb{E}\left[Y^a Y^b\right]
&=& \mathbb{E}\left[\mathbb{E}\left[Y^a Y^b\ |\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\mathbb{E}\left[\left(\sum_{i=1}^{N_a}X_i^a\right)\left(\sum_{i=1}^{N_b}X_i^b\right) \Bigg|\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b}X_i^aX_j^b\ \Bigg|\ N_a, N_b \right]\right]
\\
&\overset{1}{=}& \mathbb{E}\left[\sum_{i=1}^{N_a}\sum_{j=1}^{N_b} \mathbb{E}\left[ X_i^aX_j^b\ \right]\right]
\\
&\overset{2}{=}& \mathbb{E}\left[ N_aN_b\left(\text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)\right]
\\
&=&\left(\text{Cov}(N^a, N^b) + \mathbb{E}\left[N^a\right]\mathbb{E}[N^b]\right)\left(\text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]\right)
\end{eqnarray}
In step 1 the (finite) sums are moved outside the expectation and the conditioning is dropped, using that the jump sizes are independent of the counts. In step 2 we use $\mathbb{E}\left[X_i^aX_j^b\right] = \text{Cov}(X^a, X^b) + \mathbb{E}\left[X^a\right]\mathbb{E}[X^b]$ for every pair $(i,j)$, so the double sum contributes $N_aN_b$ identical terms; the final line applies the same identity to $\mathbb{E}\left[N_aN_b\right]$.
|
Expected value of a product of two compound Poisson processes
Use the tower property of conditional expectations.
\begin{eqnarray}
\mathbb{E}\left[Y^a Y^b\right]
&=& \mathbb{E}\left[\mathbb{E}\left[Y^a Y^b\ |\ N_a, N_b \right]\right]
\\
&=& \mathbb{E}\left[\ma
|
40,732
|
Expected value of a product of two compound Poisson processes
|
I guess we start like this:
\begin{align}
Y^aY^b &= \left(\sum_{i=1}^{N^a}X^a_i\right)\left(\sum_{i=1}^{N^b}X^b_i\right)\\
&= \sum_{i=1}^{N^a}\sum_{j=1}^{N^b}X^a_iX^b_j
\end{align}
The expectations of the terms of the sum are all the same at $E\{X^aX^b\}$, which you
say we may assume known (or estimated?). Also, the terms of the sum are independent of
how many terms there are (that is $X^aX^b$ is independent of $N^aN^b$). So, the
expectation of the sum (by an argument you have probably seen many times if you
work with stochastic processes) is $E\{X^aX^b\}E\{N^aN^b\}$. Now, we are done. If you
know $Cov(N^a,N^b)$, then you know $E\{N^aN^b\}$.
Did I misunderstand the question somehow? Or maybe I have made an error? That seemed too easy.
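The identity $E\{X^aX^b\}E\{N^aN^b\}$ is easy to sanity-check by Monte Carlo. Here is a Python sketch (all parameters invented for illustration) with a single shared count $N^a = N^b = N \sim \text{Poisson}(\lambda)$ and jump sequences independent of each other and of $N$, so that $E\{X^aX^b\} = \mu_a\mu_b$ and $E\{N^aN^b\} = E\{N^2\} = \lambda + \lambda^2$:

```python
import math
import random

random.seed(42)

def poisson(lam):
    """Poisson sampler (Knuth's method; fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

lam = 2.0              # shared intensity: N^a = N^b = N ~ Poisson(lam)
mu_a, mu_b = 0.5, 1.0  # jump means: X^a ~ U(0, 1), X^b ~ U(0, 2)

reps = 200_000
acc = 0.0
for _ in range(reps):
    n = poisson(lam)
    ya = sum(random.uniform(0, 1) for _ in range(n))
    yb = sum(random.uniform(0, 2) for _ in range(n))
    acc += ya * yb
mc = acc / reps

theory = mu_a * mu_b * (lam + lam ** 2)  # = E{X^a X^b} E{N^a N^b}
print(mc, theory)
```

With these (made-up) parameters the Monte Carlo average should land close to the closed-form value, which is a quick check on the argument above.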
|
Expected value of a product of two compound Poisson processes
|
I guess we start like this:
\begin{align}
Y^aY^b &= \left(\sum_{i=1}^{N^a}X^a_i\right)\left(\sum_{i=1}^{N^b}X^b_i\right)\\
&= \sum_{i=1}^{N^a}\sum_{j=1}^{N^b}X^a_iX^b_j
\end{align}
The expectat
|
Expected value of a product of two compound Poisson processes
I guess we start like this:
\begin{align}
Y^aY^b &= \left(\sum_{i=1}^{N^a}X^a_i\right)\left(\sum_{i=1}^{N^b}X^b_i\right)\\
&= \sum_{i=1}^{N^a}\sum_{j=1}^{N^b}X^a_iX^b_j
\end{align}
The expectations of the terms of the sum are all the same at $E\{X^aX^b\}$, which you
say we may assume known (or estimated?). Also, the terms of the sum are independent of
how many terms there are (that is $X^aX^b$ is independent of $N^aN^b$). So, the
expectation of the sum (by an argument you have probably seen many times if you
work with stochastic processes) is $E\{X^aX^b\}E\{N^aN^b\}$. Now, we are done. If you
know $Cov(N^a,N^b)$, then you know $E\{N^aN^b\}$.
Did I misunderstand the question somehow? Or maybe I have made an error? That seemed too easy.
|
Expected value of a product of two compound Poisson processes
I guess we start like this:
\begin{align}
Y^aY^b &= \left(\sum_{i=1}^{N^a}X^a_i\right)\left(\sum_{i=1}^{N^b}X^b_i\right)\\
&= \sum_{i=1}^{N^a}\sum_{j=1}^{N^b}X^a_iX^b_j
\end{align}
The expectat
|
40,733
|
Sampling from conditional copula
|
How to sample from a given univariate CDF is a huge subject, so I will assume that part of the answer is known and will address how to find the conditional CDF from the copula.
By definition, any copula assigns probabilities to rectangular regions (within the unit square) delimited on the right by its first argument and above by its second argument. In particular, when $U$ and $V$ are uniformly distributed with $C$ as the copula for $(U,V)$ and $0 \lt \epsilon \le 1 - u$ is sufficiently small,
$$\eqalign{
\Pr(U\in (u, u+\epsilon]\text{ and }V \le v) &= \Pr(U\le u+\epsilon, V \le v) - \Pr(U\le u, V \le v) \\
&=C(u+\epsilon, v) - C(u, v).
}$$
Therefore, the conditional cumulative distribution function ought to arise as the (right-hand) limiting value of
$$\Pr(U\in (u, u+\epsilon]\text{ and }V \le v\,\Big|\,U\in (u, u+\epsilon]) = \frac{C(u+\epsilon, v) - C(u, v)}{\epsilon}.$$
Provided this limit exists (which it will almost everywhere for $u$), by definition it is the first partial derivative, $\partial C(u,v)/\partial u$. This, therefore, gives the conditional CDF for $V\,\Big|\, U=u$ evaluated at $v$.
The left figure shows a contour plot of the copula (representing a surface) $C(u,v)=uv/(u+v-uv)$. The right figure is the graph of the conditional distribution of $V$ for $u\approx 0.23$. It is a cross section of the rightward slope of the surface.
Reference
Roger B. Nelsen, An Introduction to Copulas, Second Edition. Springer 2006: Section 2.9, Random Variate Generation.
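For this particular copula the recipe can be carried out in closed form. A small Python sketch (translated directly from the formulas above, not from any package): the partial derivative simplifies to $\partial C/\partial u = \left(v/(u+v-uv)\right)^2$, which can be inverted analytically for inverse-transform sampling of $V \mid U = u$.

```python
import math
import random

def cond_cdf(v, u):
    """Conditional CDF of V given U = u for C(u, v) = uv / (u + v - uv)."""
    return (v / (u + v - u * v)) ** 2

def sample_v_given_u(u, rng):
    """Inverse transform: solve cond_cdf(v, u) = p for v (closed form here)."""
    s = math.sqrt(rng.random())
    return s * u / (1 - s * (1 - u))

rng = random.Random(0)
u = 0.23
vs = [sample_v_given_u(u, rng) for _ in range(10_000)]

# Sanity check: empirical vs. theoretical conditional CDF at v = 0.5
emp = sum(v <= 0.5 for v in vs) / len(vs)
theo = cond_cdf(0.5, u)
print(emp, theo)
```

For copulas where the partial derivative cannot be inverted in closed form, the same scheme works with a numerical root-finder (e.g. bisection) in place of the analytic inverse.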
|
Sampling from conditional copula
|
How to sample from a given univariate CDF is a huge subject, so I will assume that part of the answer is known and will address how to find the conditional CDF from the copula.
By definition, any cop
|
Sampling from conditional copula
How to sample from a given univariate CDF is a huge subject, so I will assume that part of the answer is known and will address how to find the conditional CDF from the copula.
By definition, any copula assigns probabilities to rectangular regions (within the unit square) delimited on the right by its first argument and above by its second argument. In particular, when $U$ and $V$ are uniformly distributed with $C$ as the copula for $(U,V)$ and $0 \lt \epsilon \le 1 - u$ is sufficiently small,
$$\eqalign{
\Pr(U\in (u, u+\epsilon]\text{ and }V \le v) &= \Pr(U\le u+\epsilon, V \le v) - \Pr(U\le u, V \le v) \\
&=C(u+\epsilon, v) - C(u, v).
}$$
Therefore, the conditional cumulative distribution function ought to arise as the (right-hand) limiting value of
$$\Pr(U\in (u, u+\epsilon]\text{ and }V \le v\,\Big|\,U\in (u, u+\epsilon]) = \frac{C(u+\epsilon, v) - C(u, v)}{\epsilon}.$$
Provided this limit exists (which it will almost everywhere for $u$), by definition it is the first partial derivative, $\partial C(u,v)/\partial u$. This, therefore, gives the conditional CDF for $V\,\Big|\, U=u$ evaluated at $v$.
The left figure shows a contour plot of the copula (representing a surface) $C(u,v)=uv/(u+v-uv)$. The right figure is the graph of the conditional distribution of $V$ for $u\approx 0.23$. It is a cross section of the rightward slope of the surface.
Reference
Roger B. Nelsen, An Introduction to Copulas, Second Edition. Springer 2006: Section 2.9, Random Variate Generation.
|
Sampling from conditional copula
How to sample from a given univariate CDF is a huge subject, so I will assume that part of the answer is known and will address how to find the conditional CDF from the copula.
By definition, any cop
|
40,734
|
Determine where hazards starts to increase for a continuous variable
|
I think you are on the right track with rcs from the rms package. In fact, rms comes with its own version of coxph; it is called cph.
You might wish to try the following
fit <- cph (Surv(survival, event) ~ statin + rcs(bloodpressure, 3), x=TRUE, y=TRUE, data=data)
# x, y for predict, validate and calibrate;
plot(Predict(fit), data=data)
You may read more about this in the lecture notes of Professor Harrell (author of the rms package and a book of the same name): http://biostat.mc.vanderbilt.edu/wiki/pub/Main/RmS/rms.pdf Scroll to Section 18.4
Here is however another side to consider - have you included all the relevant variables in your model? It is not necessary to have only one variable in the model to visualize its effect. An under-specified model produces biased estimates because it does not control for all the potential confounding variables. (See for example: http://en.wikipedia.org/wiki/Omitted-variable_bias)
EDITED: For the plot.Predict method to work correctly, you would need the following two lines before the cph line. (I have to confess I don't know the exact meaning of this, but I see this in 18.1 of Harrell's notes, and it helps resolve the error message for me) Hope this helps~
dd <- datadist(data)
options(datadist = 'dd')
|
Determine where hazards starts to increase for a continuous variable
|
I think you are on the right track with rcs from the rms package. In fact, rms comes with its own version of coxph; it is called cph.
You might wish to try the following
fit <- cph (Surv(survival, even
|
Determine where hazards starts to increase for a continuous variable
I think you are on the right track with rcs from the rms package. In fact, rms comes with its own version of coxph; it is called cph.
You might wish to try the following
fit <- cph (Surv(survival, event) ~ statin + rcs(bloodpressure, 3), x=TRUE, y=TRUE, data=data)
# x, y for predict, validate and calibrate;
plot(Predict(fit), data=data)
You may read more about this in the lecture notes of Professor Harrell (author of the rms package and a book of the same name): http://biostat.mc.vanderbilt.edu/wiki/pub/Main/RmS/rms.pdf Scroll to Section 18.4
Here is however another side to consider - have you included all the relevant variables in your model? It is not necessary to have only one variable in the model to visualize its effect. An under-specified model produces biased estimates because it does not control for all the potential confounding variables. (See for example: http://en.wikipedia.org/wiki/Omitted-variable_bias)
EDITED: For the plot.Predict method to work correctly, you would need the following two lines before the cph line. (I have to confess I don't know the exact meaning of this, but I see this in 18.1 of Harrell's notes, and it helps resolve the error message for me) Hope this helps~
dd <- datadist(data)
options(datadist = 'dd')
|
Determine where hazards starts to increase for a continuous variable
I think you are on the right track with rcs from the rms package. In fact, rms comes with its own version of coxph; it is called cph.
You might wish to try the following
fit <- cph (Surv(survival, even
|
40,735
|
Determine where hazards starts to increase for a continuous variable
|
Don't accept this answer, it's only for variety:
In oncology, there is a different approach, closer in spirit to proportional hazards. It works as follows: one splits the blood pressure scale into small intervals and estimates the hazard ratios "locally". You'll get a plot of blood pressure vs. hazard ratio. It is called STEPP, by Bonetti and Gelber (2004). There is also an R package. But you have to keep in mind the downsides of this approach: it needs rather large sample sizes, and the results depend on the (arbitrary) length of the intervals. After all, it is more useful for exploratory than confirmatory analysis. Also, you will not get the optimal blood pressure but only an interval. The only pro is that you don't have to specify the shape of the hazard function.
(Which is also a downside, because it tempts you to think less in advance about your analysis.)
|
Determine where hazards starts to increase for a continuous variable
|
Don't accept this answer, it's only for variety:
In oncology, there is a different approach, closer in spirit to proportional hazards. It works as follows: one splits the blood pressure scale into small
|
Determine where hazards starts to increase for a continuous variable
Don't accept this answer, it's only for variety:
In oncology, there is a different approach, closer in spirit to proportional hazards. It works as follows: one splits the blood pressure scale into small intervals and estimates the hazard ratios "locally". You'll get a plot of blood pressure vs. hazard ratio. It is called STEPP, by Bonetti and Gelber (2004). There is also an R package. But you have to keep in mind the downsides of this approach: it needs rather large sample sizes, and the results depend on the (arbitrary) length of the intervals. After all, it is more useful for exploratory than confirmatory analysis. Also, you will not get the optimal blood pressure but only an interval. The only pro is that you don't have to specify the shape of the hazard function.
(Which is also a downside, because it tempts you to think less in advance about your analysis.)
|
Determine where hazards starts to increase for a continuous variable
Don't accept this answer, it's only for variety:
In oncology, there is a different approach more closely to proportional hazards. It would work as follows: One splits the blood pressure scale in small
|
40,736
|
Determine where hazards starts to increase for a continuous variable
|
I would try the cox.ph family in the mgcv package, which is available for the generalized additive models implemented by the gam function. Generalized additive models (GAMs) are sort of the multiple linear regression analogue for spline models. If you have a large dataset, worry not, because mgcv is also quite fast. After fitting the flexible, semi-parametric GAM, you can eyeball the curve, and then start thinking about a parametric generalized non-linear model that captures its shape well.
|
Determine where hazards starts to increase for a continuous variable
|
I would try the cox.ph family in the mgcv package, which is available for the generalized additive models implemented by the gam function. Generalized additive models (GAMs) are sort of the multiple l
|
Determine where hazards starts to increase for a continuous variable
I would try the cox.ph family in the mgcv package, which is available for the generalized additive models implemented by the gam function. Generalized additive models (GAMs) are sort of the multiple linear regression analogue for spline models. If you have a large dataset, worry not, because mgcv is also quite fast. After fitting the flexible, semi-parametric GAM, you can eyeball the curve, and then start thinking about a parametric generalized non-linear model that captures its shape well.
|
Determine where hazards starts to increase for a continuous variable
I would try the cox.ph family in the mgcv package, which is available for the generalized additive models implemented by the gam function. Generalized additive models (GAMs) are sort of the multiple l
|
40,737
|
Determine where hazards starts to increase for a continuous variable
|
For a graphic display of the relation between BP and events, you could do the following using ggplot2 in R. Here is an example using your own sample data:
library(ggplot2)
ggplot(data, aes(x=bloodpressure, y=event))+stat_smooth()
The shaded zones indicate 95% confidence intervals.
Two curves can be obtained based on statin use:
ggplot(data, aes(x=bloodpressure, y=event, group=factor(statin), color=factor(statin)))+stat_smooth()
Here the patients taking statins are very few, hence the curve is not complete.
Above code uses 'loess' as the method. Since you are expecting it to be a curve, the code can be changed as follows:
ggplot(data, aes(x=bloodpressure, y=event))+stat_smooth(method = "lm", formula = y ~ poly(x, 2), size = 1)
You can also show the actual event rate in groups of bloodpressure. Say you divide all patients into 10 bloodpressure groups. For a large sample size, the number of groups can be increased to get a more accurate curve. The y axis in the following curves gives the proportion of patients in each group who had events.
data$grp = cut(data$bloodpressure, 10)
aa = aggregate(event~grp, data=data, mean)
ggplot(aa, aes(x=grp, y=event))+geom_line(aes(group=1))
Errorbars can be added:
library(data.table)  # data.table() is not in base R
se <- function(x) sd(x)/sqrt(length(x))  # standard-error helper; not in base R either
ddt = data.table(data)
aaa = ddt[,list(meanevent=mean(event), se_event=se(event)),by=grp]
ggplot(aaa, aes(x=grp, y=meanevent))+geom_line(aes(group=1)) + geom_errorbar(aes(ymin = meanevent-se_event, ymax = meanevent+se_event), width = 0.2)
|
Determine where hazards starts to increase for a continuous variable
|
For a graphic display of the relation between BP and events, you could do the following using ggplot2 in R. Here is an example using your own sample data:
library(ggplot2)
ggplot(data, aes(x=bloodpressure, y=ev
|
Determine where hazards starts to increase for a continuous variable
For a graphic display of the relation between BP and events, you could do the following using ggplot2 in R. Here is an example using your own sample data:
library(ggplot2)
ggplot(data, aes(x=bloodpressure, y=event))+stat_smooth()
The shaded zones indicate 95% confidence intervals.
Two curves can be obtained based on statin use:
ggplot(data, aes(x=bloodpressure, y=event, group=factor(statin), color=factor(statin)))+stat_smooth()
Here the patients taking statins are very few, hence the curve is not complete.
Above code uses 'loess' as the method. Since you are expecting it to be a curve, the code can be changed as follows:
ggplot(data, aes(x=bloodpressure, y=event))+stat_smooth(method = "lm", formula = y ~ poly(x, 2), size = 1)
You can also show the actual event rate in groups of bloodpressure. Say you divide all patients into 10 bloodpressure groups. For a large sample size, the number of groups can be increased to get a more accurate curve. The y axis in the following curves gives the proportion of patients in each group who had events.
data$grp = cut(data$bloodpressure, 10)
aa = aggregate(event~grp, data=data, mean)
ggplot(aa, aes(x=grp, y=event))+geom_line(aes(group=1))
Errorbars can be added:
library(data.table)  # data.table() is not in base R
se <- function(x) sd(x)/sqrt(length(x))  # standard-error helper; not in base R either
ddt = data.table(data)
aaa = ddt[,list(meanevent=mean(event), se_event=se(event)),by=grp]
ggplot(aaa, aes(x=grp, y=meanevent))+geom_line(aes(group=1)) + geom_errorbar(aes(ymin = meanevent-se_event, ymax = meanevent+se_event), width = 0.2)
|
Determine where hazards starts to increase for a continuous variable
For a graphic display of relation between BP and events, you could following using ggplot2 and R. Here is an example using your own sample data:
library(ggplot2)
ggplot(data, aes(x=bloodpressure, y=ev
|
40,738
|
Is it correct to use Precision-Recall AUC in a balanced dataset situation?
|
You can't compare PR-AUC values based on differently balanced data. You can use ROC-AUC for that, though, since that does not depend on class balance.
The larger the fraction of positives in the data set, the larger the area under the PR curve will be for a given model. By increasing the fraction of positives in the data, you artificially inflate PR-AUC (and you cannot tell how much of an observed gain is due to a better model rather than to the changed balance).
A random model has PR-AUC equal to the fraction of positives, since its precision is always equal to the fraction of positives regardless of the recall. For ROC curves the AUC of a random model is 50%, independent of class balance. If you want to assess models under varying levels of class balance, I suggest using ROC-AUC instead of PR-AUC.
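These two baselines are easy to demonstrate with a small simulation. A pure-Python sketch (not tied to any particular library; "average precision" is used here as the PR-AUC estimate): with uninformative random scores, ROC-AUC lands near 0.5 while PR-AUC lands near the fraction of positives.

```python
import random

def roc_auc(y, s):
    """ROC AUC via the rank (Mann-Whitney U) formulation; assumes no tied scores."""
    order = sorted(range(len(s)), key=lambda i: s[i])
    rank = {i: r + 1 for r, i in enumerate(order)}
    n_pos = sum(y)
    n_neg = len(y) - n_pos
    rank_sum = sum(rank[i] for i, yi in enumerate(y) if yi == 1)
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def average_precision(y, s):
    """PR AUC estimated as the mean precision at the rank of each positive."""
    hits, precisions = 0, []
    for k, (_, yi) in enumerate(sorted(zip(s, y), reverse=True), start=1):
        if yi == 1:
            hits += 1
            precisions.append(hits / k)
    return sum(precisions) / len(precisions)

random.seed(3)
prevalence, n = 0.1, 20_000
y = [1 if random.random() < prevalence else 0 for _ in range(n)]
s = [random.random() for _ in range(n)]  # a "random model": scores carry no signal

print(roc_auc(y, s), average_precision(y, s))
```

Rerunning with a different prevalence leaves the ROC-AUC near 0.5 but moves the PR-AUC baseline, which is exactly why PR-AUC values are not comparable across differently balanced data sets.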
|
Is it correct to use Precision-Recall AUC in a balanced dataset situation?
|
You can't compare PR-AUC values based on differently balanced data. You can use ROC-AUC for that, though, since that does not depend on class balance.
The larger the fraction of positives in the data
|
Is it correct to use Precision-Recall AUC in a balanced dataset situation?
You can't compare PR-AUC values based on differently balanced data. You can use ROC-AUC for that, though, since that does not depend on class balance.
The larger the fraction of positives in the data set, the larger the area under the PR curve will be for a given model. By increasing the fraction of positives in the data, you artificially inflate PR-AUC (and you cannot tell how much of an observed gain is due to a better model rather than to the changed balance).
A random model has PR-AUC equal to the fraction of positives, since its precision is always equal to the fraction of positives regardless of the recall. For ROC curves the AUC of a random model is 50%, independent of class balance. If you want to assess models under varying levels of class balance, I suggest using ROC-AUC instead of PR-AUC.
|
Is it correct to use Precision-Recall AUC in a balanced dataset situation?
You can't compare PR-AUC values based on differently balanced data. You can use ROC-AUC for that, though, since that does not depend on class balance.
The larger the fraction of positives in the data
|
40,739
|
Using the same variable as a fixed and random effect in Mixed Effect Models
|
Note that your model rm1 includes a factor that is used both as a fixed and a random effect -- namely the intercept. So yes, it is OK to do what you propose. The only issue is that there are a whole lot more random effects to estimate. There is always a trade-off between parsimony and fit, so look at the AIC and BIC statistics, among others, to make sure you're not getting too fancy. For example, you might consider at least not having the interactions in there as random effects.
|
Using the same variable as a fixed and random effect in Mixed Effect Models
|
Note that your model rm1 includes a factor that is used both as a fixed and a random effect -- namely the intercept. So yes, it is OK to do what you propose. The only issue is that there are a whole lot
|
Using the same variable as a fixed and random effect in Mixed Effect Models
Note that your model rm1 includes a factor that is used both as a fixed and a random effect -- namely the intercept. So yes, it is OK to do what you propose. The only issue is that there are a whole lot more random effects to estimate. There is always a trade-off between parsimony and fit, so look at the AIC and BIC statistics, among others, to make sure you're not getting too fancy. For example, you might consider at least not having the interactions in there as random effects.
|
Using the same variable as a fixed and random effect in Mixed Effect Models
Note that your model rm1 includes a factor that is used both as a fixed and a random effect -- namely the intercept. So yes, it is OK to do what you propose. The only issue is that there are a whole lot
|
40,740
|
How does the 45 degree banking rule apply when plotting multiple data series in one chart?
|
The Visweek 2012 paper, An Empirical Model of Slope Ratio Comparisons [PDF] by
Justin Talbot, John Gerth, and Pat Hanrahan, attempts to generalize the question of optimal banking. Excerpt from introduction:
Despite the practical success of this guideline, its perceptual underpinnings
remain unclear. Cleveland et al. justified the guideline with
an experiment that showed that placing the mid-angle of two lines (the
angle halfway between them) at 45° minimizes errors made in judging
the ratio of their slopes. However, examination of their experimental
design suggests that this conclusion might not be generally applicable.
...
This paper seeks to improve our understanding of slope ratio estimation
in line plots through empirical modeling and experimentation.
The experiments suggest the optimal median angle may be around 30° and that flatter is better for perceiving angle differences. Excerpt of a graph supporting the former point (observed error in black; predicted error in red):
However, like seemingly every other research paper, the bottom line is:
It is still unclear if the results derived in our studies for pairwise discrete comparisons will transfer to real plots. ...
there remains substantial work to be done to
build a solid understanding of aspect ratio selection.
Related papers:
Arc Length-Based Aspect Ratio Selection [PDF], Justin Talbot, John Gerth, and Pat Hanrahan
Crowdsourcing Graphical Perception: Using Mechanical Turk to Assess Visualization Design [PDF], Jeffrey Heer and Michael Bostock
|
How does the 45 degree banking rule apply when plotting multiple data series in one chart?
|
40,741
|
Why don't we use the unbiased sample variance to calculate the standard error?
|
The $n$ in $\sigma/\sqrt{n}$ has nothing to do with how you estimate $\sigma$. It has to do with the fact that the average of $n$ iid random variables $X_i$ has variance $\sigma^2/n$ when $\mbox{Var}(X_i) = \sigma^2$.
If $\sigma$ is unknown, you estimate it using $s = \sqrt{\frac1{n-1}\sum (X_i-\bar X)^2}$, so that your estimate of the standard error is
$$
\hat{SE}(\bar X) = \sqrt{\frac{\sum(X_i-\bar X)^2}{n(n-1)}}
$$
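A quick numerical check that the two forms agree (stdlib Python; the data are made up for illustration):

```python
import math
import random

random.seed(0)
n = 50
x = [random.gauss(10, 3) for _ in range(n)]

xbar = sum(x) / n
# Unbiased sample sd: sum of squares divided by n - 1
s = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n - 1))

# SE of the mean: the sqrt(n) comes from Var(mean) = sigma^2 / n,
# regardless of how sigma itself is estimated
se = s / math.sqrt(n)

# The single-expression form of the same estimate
se2 = math.sqrt(sum((xi - xbar) ** 2 for xi in x) / (n * (n - 1)))
print(se, se2)  # identical
```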
|
40,742
|
lm weights and the standard error
|
I think you are looking for:
fit2 = lm(y ~ x, data = d.f, weights = 1/u^2)
parameter.covariance.matrix = vcov(fit2)/summary(fit2)$sigma^2
there is probably a better way to get this information from the fit2 object. This works even if it isn't elegant.
|
40,743
|
lm weights and the standard error
|
It seems that the u values represent known standard errors of the y values. However, as pointed out by Glen_b, with lm(y ~ x, data = d.f, weights = 1/u^2), the variances (u^2) are treated as if they are known only up to a proportionality constant. If you want to fit a model with known standard errors, you can approach this from a meta-analytic perspective (where we are in the same situation where the standard errors of the effect size estimates are known). Hence:
library(metafor)
rma(y ~ x, u^2, method="FE", data=d.f, digits=5)
will yield an estimate of the slope (not intercept!) equal to $1.9$ with standard error equal to $0.04472$, which is what the OP was expecting.
|
40,744
|
lm weights and the standard error
|
Indeed, the reported standard error for fit2 is not correct. The reason for this is that the residual standard error for the inverse-variance weighted regression should be 1. Hence, to obtain the correct value of the uncertainty in the slope for fit2, you have to divide the reported standard error by the reported residual standard error.
fit2 <- lm(y ~ x, data = d.f, weights = 1/u^2)
SE <- summary(fit2)$coef[2,2]/summary(fit2)$sigma
SE = 0.04472136, the correct SE of the slope.
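To see that $\sqrt{1/\sum_i w_i (x_i-\bar x_w)^2}$ (the reported SE divided by the residual sigma) really is the right uncertainty when the $u_i$ are known standard deviations, here is a Monte Carlo sketch in stdlib Python (hypothetical data; the intercept 1, slope 2, and the $u_i$ are made up, not the OP's d.f):

```python
import math
import random

random.seed(1)

# Hypothetical setup: y = 1 + 2 x + e_i, with KNOWN per-point sd u_i
x = [float(i) for i in range(1, 11)]
u = [0.5 + 0.1 * i for i in range(1, 11)]
w = [1.0 / ui ** 2 for ui in u]

sw = sum(w)
xw = sum(wi * xi for wi, xi in zip(w, x)) / sw        # weighted mean of x
sxx = sum(wi * (xi - xw) ** 2 for wi, xi in zip(w, x))

# Analytic SE of the WLS slope when the u_i are the true sds
se_known = math.sqrt(1.0 / sxx)

def wls_slope(y):
    yw = sum(wi * yi for wi, yi in zip(w, y)) / sw
    return sum(wi * (xi - xw) * (yi - yw) for wi, xi, yi in zip(w, x, y)) / sxx

# Monte Carlo: the empirical sd of the slope estimates should match se_known
reps = 20000
slopes = []
for _ in range(reps):
    y = [1.0 + 2.0 * xi + random.gauss(0, ui) for xi, ui in zip(x, u)]
    slopes.append(wls_slope(y))
mean_b = sum(slopes) / reps
sd_b = math.sqrt(sum((b - mean_b) ** 2 for b in slopes) / (reps - 1))
print(sd_b, se_known)  # should agree to within Monte Carlo error (~1%)
```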
|
40,745
|
Derivative of order statistics
|
It seems to me that $\phi$ and $\Phi$ are the PDF and CDF of the standard Gaussian; that is,
\begin{equation}
\phi(x) = \Phi^\prime (x).
\end{equation}
To obtain $g_{\Phi}(x) \equiv {\rm med}_{Y} |x-Y|$ (the CDF of $Y$ is $\Phi$), one should find a distance $a$ such that for exactly half of the members of the Gaussian distribution, the distance from $x$ is greater than $a$ (and of course smaller than $a$ for the other half). This condition amounts to
\begin{equation}
F(x,a) \equiv \Phi(x+a) - \Phi(x-a) = \frac{1}{2}.
\end{equation}
Solving this for $a$ gives $g_{\Phi}(x)$. Therefore obtaining $g_{\Phi}^{\prime}(x)$ boils down to differentiating the implicit function given above. That is,
\begin{equation}
\begin{split}
g_{\Phi}^{\prime}(x) &= - \frac{\partial F/\partial x}{\partial F/\partial a} = \frac{\phi(x-a)-\phi(x+a)}{\phi(x+a)+\phi(x-a)}\\
&=\frac{\phi(x-g_{\Phi}(x))-\phi(x+g_{\Phi}(x))}{\phi(x+g_{\Phi}(x))+\phi(x-g_{\Phi}(x))}.
\end{split}
\end{equation}
Then, using the fact that $g_{\Phi}(q) = 1/c = {\rm med}_{X}g_{\Phi}(X)$, where $q$ is defined by $\Phi(q) = 3/4$, gives
\begin{equation}
g_{\Phi}^{\prime}(q) = \frac{\phi(q - c^{-1})-\phi(q + c^{-1})}{\phi(q + c^{-1})+\phi(q - c^{-1})}.
\end{equation}
Update: How do we know that ${\rm med}_{X}g_{\Phi}(X) = g_{\Phi}(q)$?
Claim: $g_{\Phi}(x)$ is an even function that monotonically increases as a function of $|x|$.
That this function is even follows from its definition (as an implicit function) given above and the fact that $\Phi(-x) = 1 - \Phi(x)$. It monotonically increases in $|x|$ because
\begin{equation}
g_{\Phi}^{\prime}(x) \ \Bigg\{\begin{array}{ccc} >0 \ (x>0)\\=0\ (x=0)\\<0\ (x<0)\end{array}.
\end{equation}
One can see this from the expression for $g_{\Phi}^{\prime}(x)$ and the shape of the Gaussian function $\phi$. More rigorously, one should be able to deduce this fact from the following considerations:
$\phi(x) = \phi(-x)$.
$\phi(x)$ monotonically decreases in $|x|$.
$a = g_{\Phi}(x)>0$.
Then the value of $x$ giving the median of $g_{\Phi}(X)$, which we denote $q$, is where exactly half of the Gaussian mass falls within $[-|q|, |q|]$. Hence $\Phi(q) = 1/4$ or $3/4$. Evaluating $g_{\Phi}$ at either of the two values (they differ only in sign, and $g_{\Phi}$ is even) gives ${\rm med}_{X}g_{\Phi}(X)$.
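A numerical sanity check of the implicit-function derivative (stdlib Python sketch: $\Phi$ built from math.erf, $g_\Phi$ solved by bisection, derivative checked against a central difference):

```python
import math

def Phi(x):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal PDF."""
    return math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)

def g(x, tol=1e-12):
    """Solve Phi(x + a) - Phi(x - a) = 1/2 for a by bisection."""
    lo, hi = 0.0, 10.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if Phi(x + mid) - Phi(x - mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def g_prime_closed(x):
    """Closed form from implicit differentiation."""
    a = g(x)
    return (phi(x - a) - phi(x + a)) / (phi(x + a) + phi(x - a))

q = 0.6744897501960817        # Phi(q) = 3/4
h = 1e-6
g_prime_numeric = (g(q + h) - g(q - h)) / (2.0 * h)
print(g_prime_numeric, g_prime_closed(q))  # agree to ~1e-6
```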
|
40,746
|
Is there a difference between semipartial correlation and regression coefficient in multiple regression?
|
While thorough and ultimately correct, the comment of @ttnphns given to the question is slightly misleading in the sense that it focuses on the similarities between the standardized regression coefficient and the partial correlation, while the more obvious comparison would be between standardized regression coefficient and the more closely related semipartial correlation [but see the thoughtful answer of @ttnphns in response to my post, clarifying his point about partial correlations].
Indeed, the only difference is that the semipartial takes the square root of the denominator. The result is that the semipartial is bounded between -1 and +1, while Beta is not.
Aside from the algebraic similarities, semipartial correlations are also conceptually closest to regression coefficients. In a regression analysis, we try to measure the unique explanatory power of predictors, i.e. the unique part of the total variance of Y that can be explained by X1, controlled for the other X-variables. That is, we residualize each X on other predictors to get its unique effect, but we do not residualize Y, as in the partial correlation.
For an excellent Powerpoint presentation on this topic, see these slides by Michael Brannick of the University of South Florida.
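The residualization description can be checked numerically: partial Z out of X by OLS, correlate the residual with the raw Y, and compare with the closed-form semipartial formula. A stdlib-Python sketch (simulated data with made-up coefficients):

```python
import math
import random

random.seed(2)
n = 500
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.6 * zi + random.gauss(0, 1) for zi in z]
y = [0.5 * xi + 0.4 * zi + random.gauss(0, 1) for xi, zi in zip(x, z)]

def pearson(a, b):
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    cov = sum((ai - ma) * (bi - mb) for ai, bi in zip(a, b))
    return cov / math.sqrt(sum((ai - ma) ** 2 for ai in a)
                           * sum((bi - mb) ** 2 for bi in b))

def residualize(v, on):
    """OLS residuals of v regressed on 'on' (with intercept)."""
    mv, mo = sum(v) / len(v), sum(on) / len(on)
    b = (sum((oi - mo) * (vi - mv) for oi, vi in zip(on, v))
         / sum((oi - mo) ** 2 for oi in on))
    return [vi - (mv + b * (oi - mo)) for vi, oi in zip(v, on)]

# Semipartial: residualize X on Z, but leave Y as it is
sp_empirical = pearson(y, residualize(x, z))

# Closed form from the pairwise correlations
rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
sp_formula = (rxy - rxz * ryz) / math.sqrt(1 - rxz ** 2)
print(sp_empirical, sp_formula)  # identical
```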
|
40,747
|
Is there a difference between semipartial correlation and regression coefficient in multiple regression?
|
In their answer @Soporiferous correctly describes the relation between the semipartial correlation and the standardized beta coefficient, and hastens to label my old comment on the question (about there being, rather, a relation between the beta and the partial correlation) "slightly misleading". But in my comment I implied the Beta of another regression (another dependent variable) than @Soporiferous seemingly implied.
Let us have 3 variables X, Y, Z.
Partial correlation between X and Y (Z partialled out from both) is
$r_{xy.z} = \frac{r_{xy} - r_{xz}r_{yz} }{\sqrt{ (1-r_{xz}^2)(1-r_{yz}^2) }}$.
While semipartial or part correlation between X and Y (with Z partialled out from Y) is
$r_{x(y.z)} = \frac{r_{xy} - r_{xz}r_{yz} }{\sqrt{1-r_{yz}^2 }}$.
@Soporiferous correctly notices (by linking to an outer source) that the last formula is very similar to the formula of a beta regression coefficient:
$\beta = \frac{r_{xy} - r_{xz}r_{yz} }{{1-r_{yz}^2 }}$, with the only difference being the square root taken in the denominator.
A true observation; note, however, that this beta is $\beta_y$ of the regression where X is the dependent variable while Y and Z are the predictors. The squared semipartial $r_{x(y.z)}$ equals the rise in R-square upon adding Y to a model that contained only Z.
So the semipartial is structurally similar to $\beta_y$, not to $\beta_x$ (of the regression where Y is the dependent, which is the version that would probably come to mind first, "by default").
But partial $r_{xy.z}$ is related to both the $\beta_y$ (where X is the dependent) and to the $\beta_x$ (where Y is the dependent):
$\beta_x = \frac{r_{xy} - r_{xz}r_{yz} }{{1-r_{xz}^2 }}$.
When I said (in my comment to the question and in my answer linked just now) that the regression coefficient beta is directly related to the partial correlation, I meant the regression where X is a regressor, with the partial correlation considered, likewise, between the regressor X and the dependent Y. The very title of the question invites that interpretation.
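A quick numerical check of these relations with hypothetical correlation values (stdlib Python):

```python
import math

# Hypothetical pairwise correlations among X, Y, Z
rxy, rxz, ryz = 0.50, 0.30, 0.40
num = rxy - rxz * ryz          # shared numerator of all four quantities

# Partial correlation of X and Y (Z partialled out of both)
r_partial = num / math.sqrt((1 - rxz**2) * (1 - ryz**2))
# Semipartial (part) correlation (Z partialled out of Y only)
r_semipartial = num / math.sqrt(1 - ryz**2)

# beta_y: regression of X on Y and Z; beta_x: regression of Y on X and Z
beta_y = num / (1 - ryz**2)
beta_x = num / (1 - rxz**2)

print(r_partial, r_semipartial, beta_y, beta_x)
```

From these, `r_semipartial == beta_y * sqrt(1 - ryz**2)` and `r_partial**2 == beta_x * beta_y`, which is exactly the structural point made above.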
|
40,748
|
How many answers to memorize for a test?
|
Let $N = 15$ be the total population of questions, $n = 7$ the number that will appear on the test, and $m = 5$ the number of questions you need to answer. There are $\binom{N}{n}$ possible tests; I'll assume the test is selected uniformly at random without replacement.
Say you memorize $K$ of the questions.
Then, the number of questions on the test that you know the answer to, call it $k$, is hypergeometric, with population size $N$, "success" population size $K$, and number of draws $n$ (like the names in the Wikipedia article).
How many answers should I memorize such that 75% of the time I will get 100% on the test?
You get 100% on the test if $k \ge m$, so the probability of doing so is the complement of the CDF at $m-1$. Unfortunately, the expression for this is not pretty:
$$
{{{n \choose {m}}{{N-n} \choose {K-m}}}\over {N \choose K}} \,_3F_2\!\!\left[\begin{array}{c}1,\ m-K,\ m-n \\ m+1,\ N+m+1-K-n\end{array};1\right]
\ge .75
$$
where $_3F_2$ is the generalized hypergeometric function.
This probably needs to be solved numerically.
How many answers should I memorize such that on average I will get at least an 80% on the exam?
Your score on the exam can be written as $S := \min\left(1, \frac{k}{m}\right)$.
To get rid of the min, you can do:
$$
\begin{align*}
\mathbb{E}[S]
&= \Pr[k \le m] \, \mathbb{E}[S \mid k \le m]
+ \Pr[k > m] \, \mathbb{E}[S \mid k > m]
\\ &= \tfrac{1}{m}\Pr[k \le m] \, \mathbb{E}[k \mid k \le m] + \Pr[k > m]
\end{align*}
$$
where the two probability terms are again gross hypergeometric cdfs,
and I don't know if there's a nice form for $\mathbb{E}[k \mid k \le m]$.
Maybe if you derive it the way you derive $\mathbb{E}[S]$ it'll mostly cancel out with $\Pr[k \le m]$ or something; not sure.
For smallish numbers like the ones given, you could compute $\mathbb{E} S$ exactly based on the hypergeometric pmf, but you'd probably have to do a binary search or something to find the exact cutoff.
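For numbers this small the hypergeometric pmf can be summed directly rather than wrestling with the ${}_3F_2$ form; a stdlib-Python sketch (the score is taken as capped at 100%, i.e. $S = \min(1, k/m)$):

```python
from math import comb

N, n, m = 15, 7, 5          # pool size, questions on test, questions to answer

def pmf(k, K):
    """P(exactly k of the K memorized questions appear) -- hypergeometric."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

def p_full_marks(K):
    """P(score is 100%) = P(k >= m)."""
    return sum(pmf(k, K) for k in range(m, n + 1))

def expected_score(K):
    """E[S] with S = min(1, k/m): capped once you know m of the n questions."""
    return sum(min(1, k / m) * pmf(k, K) for k in range(0, n + 1))

# Smallest K giving 100% at least 75% of the time, and giving E[S] >= 80%
K_100 = min(K for K in range(m, N + 1) if p_full_marks(K) >= 0.75)
K_avg = min(K for K in range(0, N + 1) if expected_score(K) >= 0.80)
print(K_100, K_avg)   # -> 11 9
```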
|
40,749
|
How many answers to memorize for a test?
|
The objective of a test is to encourage you to learn and understand the material. If you do that, you shouldn't have to memorize anything.
|
40,750
|
Interpreting significance of Cragg-Donald F-Statistic for weak instruments
|
If you have a weak instrument then the bias of the IV estimator can be large and in some cases it can even be bigger than the bias of the OLS estimator. With their tabulated values Stock and Yogo first fix the largest relative bias of the two stage least squares estimator (2SLS) relative to OLS that is acceptable. In this sense the test answers the question: can we reject the null hypothesis that the maximum relative bias due to weak instruments is at least 10% (or 5%, etc.)?
The critical values then depend on this acceptable bias (a lower acceptable bias means that your instrument has to achieve a higher first stage F-statistic), the number of endogenous regressors and the number of exclusion restrictions. As an example, if you set the maximum acceptable bias to 0.05 (i.e. we tolerate a bias of 5% relative to OLS), and you have one endogenous variable and three instruments, the critical value is 13.91, so your instrument is not considered weak if its first stage F-statistic is larger than that. The problem is that these critical values only work if you have at least two overidentifying restrictions. In your case with one endogenous variable you need at least three instruments.
With one endogenous variable the Cragg-Donald test should give you a result similar to that of Stock and Yogo. This test differs from the previous one if there are several endogenous variables, for which you will have multiple first stages. Anderson's canonical correlation test works similarly to Cragg-Donald, with the difference that Anderson's CC is a likelihood ratio test whilst Cragg-Donald is a Wald statistic, but both tests are applicable with one endogenous variable and one instrument. However, in the end Stock-Yogo, Cragg-Donald, and Anderson all rely on an iid assumption on the errors. If you used heteroscedasticity-robust standard errors, for instance, these tests will not work, but the Kleibergen-Paap test is robust against violations of the iid assumption. It also works with one endogenous variable and one instrument as long as the model is identified. There is a nice discussion on these tests in these notes by Baum (2007). Other robust tests for weak instruments are also offered in Stata's rivtest package.
If you end up with a weak instrument you can use the conditional likelihood ratio test by Moreira (2003) in order to perform weak instrument robust inference. A paper by Andrews et al. (2008) shows that the CLR test is approximately optimal. Weak robust instruments regressions are available for instance in Stata's condivreg package.
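The first-stage F-statistic that all of these critical values are compared against is straightforward to compute directly. A stdlib-Python sketch on simulated data (one endogenous regressor, one instrument, iid homoskedastic errors; the setup is entirely hypothetical, not any real dataset):

```python
import math
import random

random.seed(3)
n = 2000
# Simulated first stage: x = pi * z + v, with instrument strength pi = 0.3
z = [random.gauss(0, 1) for _ in range(n)]
x = [0.3 * zi + random.gauss(0, 1) for zi in z]

# OLS of x on z (with intercept); with one instrument, F for H0: pi = 0 is t^2
mz = sum(z) / n
mx = sum(x) / n
szz = sum((zi - mz) ** 2 for zi in z)
pi_hat = sum((zi - mz) * (xi - mx) for zi, xi in zip(z, x)) / szz
resid = [xi - (mx + pi_hat * (zi - mz)) for zi, xi in zip(z, x)]
s2 = sum(e * e for e in resid) / (n - 2)
se_pi = math.sqrt(s2 / szz)
F = (pi_hat / se_pi) ** 2
print(F)   # well above the common rule-of-thumb threshold of 10 here
```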
|
40,751
|
Proof of Point-Biserial Correlation being a special case of Pearson Correlation
Let the $n$ data consist of $n_0\gt 0$ $(x, 0)$ pairs and $n_1\gt 0$ $(x, 1)$ pairs. Their Pearson correlation coefficient will be the same as the reversed data consisting of corresponding $(0,x)$ and $(1,x)$ pairs. Because there are exactly two distinct values of the first coordinates, the regression line of the reversed data must pass through the mean points $(0,M_0)$ and $(1,M_1)$, whence it has slope $(M_1-M_0)/(1-0) = M_1-M_0$. The correlation coefficient is obtained by standardizing this: it must be multiplied by the standard deviation of the first coordinates and divided by the standard deviation of the second coordinates (the original $x$ values), written $s_n$. The standard deviation of the first coordinates is readily computed from the fact that they consist of $n_0$ zeros and $n_1$ ones; it equals
$$\sqrt{\frac{n_1}{n}\left(1-\frac{n_1}{n}\right)} = \sqrt{\frac{n_0n_1}{n^2}}.$$
Consequently the Pearson correlation coefficient is
$$r = \frac{M_1-M_0}{s_n}\sqrt{\frac{n_0n_1}{n^2}},$$
which is precisely the Wikipedia formula for the point-biserial coefficient.
The heights of the red dots depict the mean values $M_0$ and $M_1$ of each vertical strip of points. The dashed gray line is the regression line.
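The identity can be checked numerically. Here is a quick sketch (in Python; the `x0`/`x1` samples are made-up illustrations) that computes the Pearson correlation of the $(x, y)$ pairs directly and compares it with the point-biserial formula above, using the same $n$-convention standard deviation $s_n$:

```python
# Numeric check: Pearson r on (x, 0/1) pairs equals
# (M1 - M0)/s_n * sqrt(n0*n1/n^2). Samples are arbitrary illustrations.
import math
import random

random.seed(0)
x0 = [random.gauss(10, 2) for _ in range(40)]   # n0 x-values paired with y = 0
x1 = [random.gauss(13, 2) for _ in range(60)]   # n1 x-values paired with y = 1
x = x0 + x1
y = [0] * len(x0) + [1] * len(x1)
n, n0, n1 = len(x), len(x0), len(x1)

mean = lambda v: sum(v) / len(v)
mx, my = mean(x), mean(y)
# Pearson correlation using the population ("n") convention, matching s_n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
sx = math.sqrt(sum((a - mx) ** 2 for a in x) / n)
sy = math.sqrt(sum((b - my) ** 2 for b in y) / n)
r_pearson = cov / (sx * sy)

# Point-biserial formula from the answer
M0, M1 = mean(x0), mean(x1)
r_pb = (M1 - M0) / sx * math.sqrt(n0 * n1 / n ** 2)

assert abs(r_pearson - r_pb) < 1e-9
```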
Difference in PCA loadings between R and SPSS
The difference is in how R and SPSS interpret the word "loading". Loadings in PCA should be defined as eigenvectors of the covariance matrix scaled by the square roots of the respective eigenvalues. Please see e.g. my answer here for motivation:
How does "Fundamental Theorem of Factor Analysis" apply to PCA, or how are PCA loadings defined?
This is the definition followed by SPSS. However, what R (unfortunately) calls "loadings" are non-scaled eigenvectors of the covariance matrix. Therefore, your two plots should differ in scaling by a square root of the first eigenvalue. As the scaling factor seems to be $\approx 2.5$, the first eigenvalue should be approximately equal to $2.5^2=6.25$, as @whuber hinted in the comments above.
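To make the scaling concrete, here is a minimal numeric sketch (Python rather than R/SPSS, with an arbitrarily chosen toy 2×2 covariance matrix) of the two conventions: the unit-norm eigenvector that R reports as a "loading" versus that eigenvector scaled by the square root of its eigenvalue, which is what SPSS reports:

```python
# R's "loadings" are unit-norm eigenvectors of the covariance matrix;
# SPSS loadings are those eigenvectors times sqrt(eigenvalue).
# Toy covariance matrix [[a, b], [b, c]], solved analytically.
import math

a, b, c = 4.0, 1.2, 2.0
lam1 = (a + c + math.sqrt((a - c) ** 2 + 4 * b * b)) / 2   # largest eigenvalue
v = (b, lam1 - a)                        # unnormalized eigenvector for lam1
norm = math.hypot(*v)
eigvec = (v[0] / norm, v[1] / norm)      # what R calls the "loading"
loading_spss = tuple(math.sqrt(lam1) * e for e in eigvec)  # SPSS-style loading

# The two differ only by the scalar sqrt(lam1):
ratio = loading_spss[0] / eigvec[0]
assert abs(ratio - math.sqrt(lam1)) < 1e-9
# sanity check: eigvec really is an eigenvector of the covariance matrix
assert abs(a * eigvec[0] + b * eigvec[1] - lam1 * eigvec[0]) < 1e-9
```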
Difference in PCA loadings between R and SPSS
The simple answer, by an R guy.
About the sign-shifting: each loading vector has an "evil twin" vector pointing in the opposite direction (180 degrees away) in feature space. The two loading vectors yield scores of opposite signs.
Just as (-1)(+1) = (+1)(-1), the solutions are identical.
Also, the length of the loading vector does not really matter, as long as it is reflected in the scores: X = loadings x scores + e, and loadings x scores = X(fit)
Thus loadings(R) x scores(R) = loadings(SPSS) x scores(SPSS)
The two solutions are identical
Difference in PCA loadings between R and SPSS
Full details on the SPSS PCA algorithm would be found in the Algorithms doc included with the product. But it appears that the results are actually identical up to an arbitrary linear transformation.
Top scoring students across a series of tests
I agree with the comments above that this would be easier if the question were a little more precise.
Here are some things to think about.
set.seed(1)
## simulate data: refine to reflect attributes of interest
n <- 30 ## test count
m <- 100 ## student count
## student "ability": student rows, test columns
sa <- matrix(round(runif(m*n,30,80)), nrow=m)
te <- 5 ## test error
## test scores:
ts <- matrix(round(rnorm(m*n, sa, te)),nrow=m)
rownames(ts) <- seq(m)
Method A:
Use a summary measure that collapses student variability across tests.
## test score summary
tss <- apply(ts,1,mean)
qqnorm(tss); qqline(tss, col=2)
Taking tss to be a sample from the normal distribution with matching parameters:
ps <-pnorm(tss, mean(tss), sd(tss))
## top 10% students
(a <- rownames(ts[ps>0.9,]))
[1] "4" "14" "23" "30" "39" "43" "45" "49" "50" "68" "69" "80"
Method B:
Take z-scores within tests, report p-values from one-sided t-tests within students.
z.ps <- function(ts) {
zs <- scale(ts)
st <- apply(zs, 1, function(s) t.test(s, alternative='greater'))
unlist(lapply(st, function(s) s$p.value))
}
ps <- z.ps(ts)
## top 10% students
(b <- rownames(ts[ps<0.1,]))
[1] "3" "4" "8" "14" "23" "30" "39" "43" "45" "49" "50" "68" "69" "80" "83"
[16] "96"
This results in a more "permissive" threshold. It does try to account for variability across tests within students but the normal assumption is sometimes questionable, even for larger n.
See here (graph not shown), and repeat methods with differing m,n:
op <- par(mfrow=c(3,4))
for (i in sample(n,12)) { qqnorm(ts[i,]); qqline(ts[i,],col=2) }
par(op)
This is in part because of the way these data are simulated. It is perhaps not very realistic to model "student" abilities as uniform on an interval and uncorrelated within human subjects.
The point is not the defence, or otherwise, of the naivety of the simulation. But rather to reinforce that if your "students" are performance metrics from arbitrary processes, the attributes of these will influence what constitutes a sensible approach.
Hope that helps.
Top scoring students across a series of tests
The method that you describe seems quite convoluted (and perhaps fairly inaccurate). Presumably, the tests measure some sort of trait (e.g., intelligence), ability (e.g., math ability), or experience (e.g., learning), and you want the top 10% of students who score highest on this construct. You also mention having a series of tests, which suggests you have several scores. If these tests are related, and you believe they measure the same ability, then why not simply sum up the scores across all tests? You could then just take the top 10% on these raw totals to identify the top 10%.
If you want to go a bit more complex, you could use exploratory/confirmatory factor analyses to compute weighted scores based on the associations between each of the scales and the ability you wish to assess.
You could also use aspects of item response theory to identify the strongest test takers.
One issue is your mention of "statistical significance". I don't know what you are referring to here. It seems like you want to identify individuals, and this seems more descriptive than relevant to statistical significance and null-hypothesis testing. If you want some sort of index of how reliable your estimates are, then estimates or error/information will be valuable, not statistical significance per se.
Top scoring students across a series of tests
Personally, I don't like the premise. You asked and I will work with it, but I do not like the fundamental here.
Personal aside on teaching
I personally think that a teacher has 2 jobs, and neither is improved by ambiguity:
to teach and certify the student has all the fundamentals down. If something isn't a fundamental, then they don't need it. If they need it, then it is a fundamental. Every student passing a class should have 100% of the fundamentals down. No student passing a class should be lacking in any of the fundamentals - if they lack they should fail the class.
to teach the student how to teach themselves. By this I mean that all relevant learning methods are part of the fundamentals. The student who has this is capable of building in all relevant ways upon his fundamentals to arbitrary levels of exceptionalism. No student passing a class should be unable to teach himself in any relevant way. Every student passing a class should be able to teach themselves in each and all of the relevant ways for that subject.
How I win at "Pick-em"
I play a version of fantasy football called "pick-em". My results are at least as good as the Las Vegas "spread" values.
The problem can be stated as:
given 16 teams, and some recent history of how they play (I only need points per game)
given two of the 16 teams who will contend in the next game
determine which is most likely to win
determine a scale indicating how much more likely to win one is than the other
My approach:
Set the metric: For all past games the team played, if they won indicate point differential as positive, otherwise indicate it as negative
Establish the coordinate system: For the game of interest, put one team at coordinate x=1 and the other at x=-1. Each of the differential scores becomes one y value. That means at x=1 there will be as many points as games considered relevant history. If the last 5 games are considered relevant history then there are 5 points at x=1 indicating the relevant scoring of team x=1, and 5 points at x=-1 indicating the relevant scoring of team x=-1.
Use the Theil-Sen estimator (or its relatives) to determine the median slope between all pairs of points. If the slope is positive then the team at x=1 will win; otherwise the team at x=-1 will win. The slope is a measure of how likely a team is to win: if the slope is shallow the win-loss call is uncertain, but if the slope is steep then it is more reliable.
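The median-slope step can be sketched in a few lines. A bare-bones version (Python rather than any particular stats package; the point totals below are invented, and only cross-column pairs are used, since pairs within the same column have zero x-spread and an undefined slope):

```python
# Median pairwise slope (textbook Theil-Sen) for the two-column setup above:
# one team's signed point differentials sit at x = +1, the other's at x = -1.
import statistics

def ts_slope(y_plus, y_minus):
    """Median slope over all pairs taking one point from each column.

    With x fixed at +1 and -1, each cross-pair slope is (y_plus - y_minus)/2.
    """
    slopes = [(yp - ym) / 2.0 for yp in y_plus for ym in y_minus]
    return statistics.median(slopes)

team_a = [7, -3, 10, 4, 1]    # won 4 of the last 5 (invented numbers)
team_b = [-6, 2, -1, -8, 3]   # lost 3 of the last 5 (invented numbers)

s = ts_slope(team_a, team_b)
print(s)   # → 2.5: positive, so pick team_a
```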
I should disclaim that all standard disclaimers apply so nobody writes me hate mail saying they lost money betting using this method. If you bet then it is your decision, not mine.
Applying it to kids
You can make a matrix (aka graph) comparing all kids. At the cell (i,j), which compares kid "i" versus kid "j", put the slope of the T-S estimator. If kid "i" typically wins, the slope is positive; otherwise it is negative.
Make row sums. That is going to give you a 1-axis robust estimate of performance. A higher value for the row sum means that the student overall is higher performing. A lower value for the row sum means that the student overall is lower performing. It will be the best overall estimator of comparative performance. It will tell you nearly nothing about absolute performance. It will tell you nothing about meeting the criteria that I gave. It will, however, be an excellent estimator of comparative performance. It will allow clustering.
There is a textbook "reality check" for univariate data called the 4-plot. (link) If you want any sense of what is going on then use this. It is going to show you trends, outliers, something about the dependence of the grade of kid "i" versus "i+1", and the nature of the distribution. There is hand-waving saying in the limit of infinite samples everything is gaussian. In reality nothing is gaussian, not even pseudorandom numbers meant to look gaussian. The true distribution can tell you about clusters, clumps, (probability modes), and outliers.
Best of luck
PS: If you want example and source code for textbook TS (not my variant) then I will provide it. Please request.
Books for learning non parametric Bayesian model
Part V of Bayesian Data Analysis is on non-linear and non-parametric methods, which as I recall has chapters on each of basis function methods, Gaussian processes, and Dirichlet processes. (Don't have my copy handy.)
Gaussian Processes for Machine Learning is comprehensive, covering both theory and implementation, and is freely available online.
If you're interested in resources outside of full books, the tutorials by Chris Fonnesbeck on Dirichlet and Gaussian processes were very valuable to me. (Sections 5.1 and 5.2 in the "Notebooks" folder.)
Last, the Machine Learning Summer School 2009 lectures include two talks on non-parametric Bayesian methods. I haven't seen those two yet, but every other lecture I've watched in the series gave a top-notch introduction to its topic.
Books for learning non parametric Bayesian model
A fantastic reference is Fundamentals of Nonparametric Bayesian Inference by Ghosal and van der Vaart.
Books for learning non parametric Bayesian model
Here is a good collection to buy. I like the "Bundle of algorithms in Java", it gives straight out implementations/examples as does "Machine learning, practical tools and techniques" which is also a great book with practical examples. Hope that helps
Limiting distribution of the first order statistic of a general distribution
(The answer has been reworked to respond to OP's and whuber's comments).
The complementary cdf of $X$ is
$$G_n(x) = \left[1-F_Z\left(x/n\right)\right]^{n}$$
To prove that asymptotically $X$ follows an exponential distribution, we need to show that $$\lim_{n\rightarrow \infty}G_n(x)= e^{-\lambda x}$$
Consider
$$F_Z\left(x/n\right) = \int_0^{x/n}f(t)dt $$
By the properties of the integral, we have
$$\int_0^{x/n}f(t)dt = \frac 1n\int_0^{x}f(t/n)dt$$
Define
$$h_n(w) = \left(1+\frac {w}{n}\right)^{n}, \qquad \lim_{n\rightarrow \infty}h_n(w) = e^w=h(w), \;\; w \in \mathbb R$$
and
$$g_n(x) = -\int_0^{x}f(t/n)dt,\;\;\; \lim_{n\rightarrow \infty}g_n(x) = -\int_0^{x}f(0)dt = -\lambda x = g(x), \;\;x \in \mathbb R_+$$
(To respond to a question by the OP, we can take the limit inside the integral. First note that $n\geq 1$, and we do not send $x$ to infinity. So the argument of $f$ does not explode. So even if it were the case that $f(\infty) \rightarrow \infty$, we do not need to consider this case here. Then, since also $f(0)$ is finite by assumption, $f$ is bounded and dominated convergence holds).
With these definitions we can write
$$G_n(x) = h_n(g_n(x))$$
and the question is
$$ \lim_{n\rightarrow \infty}h_n(g_n(x)) =?\;\; h(g(x)) = e^{-\lambda x},\;\;x \in \mathbb R_+$$
The limit of a composition of function-sequences does not in general equal the composition of their limits (which is what whuber has essentially pointed out in his comment). But this equality will hold if
$(i)$ $h_n$ converges uniformly to $h$ (it does-convergence to $e^w$ is uniform)
$(ii)$ the limit of $h_n$ is a continuous function (it is)
$(iii)$ the functions $g_n(x)$ map $\mathbb R_+$ to $\mathbb R$ (namely, they map their domain into the set where $h_n$ converges -they do).
So the above equality holds and we have proven what we needed to prove.
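The limit can also be eyeballed by simulation. A rough Monte Carlo sketch (Python; the half-normal choice $Z = |N(0,1)|$, for which $f(0^+) = \sqrt{2/\pi}$, and the sample sizes are arbitrary) compares the empirical mean of $X = n\min(Z_1,\dots,Z_n)$ with the exponential mean $1/\lambda$:

```python
# If Z has density f with f(0+) = lambda, then X = n * min(Z_1, ..., Z_n)
# is approximately Exponential(lambda) for large n.
# Here Z = |N(0,1)| (half-normal), so lambda = sqrt(2/pi).
import math
import random

random.seed(1)
lam = math.sqrt(2 / math.pi)   # f(0+) for the half-normal density
n, reps = 100, 5000

xs = [n * min(abs(random.gauss(0, 1)) for _ in range(n)) for _ in range(reps)]
emp_mean = sum(xs) / reps
print(round(emp_mean, 2), round(1 / lam, 2))   # empirical mean vs 1/lambda
```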
Limiting distribution of the first order statistic of a general distribution
|
To prove convergence in distribution we need to show that the complementary distribution of $X_n$, written $G_n$ where $G_n(x)=\Pr(X_n\gt x)$, gets close to an exponential function for $n$ sufficiently large. To this end, let $t\gt 0$ be an arbitrary point at which to evaluate $G_n(t)$. Note that the independence of the $Z_i$ implies
$$G_n(t) = \left(1 - F\left(\frac{t}{n}\right)\right)^n = \left(1 - \lambda\frac{t}{n} + \left[\lambda\frac{t}{n} - F\left(\frac{t}{n}\right)\right]\right)^n.$$
The term in square brackets is the problem--if it weren't there the limit would obviously be exponential--so we will use the only information available to us to estimate it and hope that it's very small for large $n$. The existence of the limit
$$\lambda = {\lim}_{x\to 0^{+}} f\left(x\right)$$
implies
$$\left|\lambda\frac{t}{n} - F\left(\frac{t}{n}\right)\right| = \left|\int_0^{t/n} (\lambda - f(u)) du\right| \le \frac{t}{n}\sup_{0\le u\le t/n}\left(|\lambda - f(u)|\right) = \frac{t}{n}\varepsilon(n)$$
for some function $\varepsilon$ that approaches $0$ for large arguments. Substitute this into the foregoing and assume $n$ is so large that $F\left(\frac{t}{n}\right)\lt 1$, so that we may take logarithms, and use the Taylor series of the logarithm near $1$ to estimate
$$\eqalign{
\log(G_n(t))=n\log\left(1 - F\left(\frac{t}{n}\right)\right) &= n\log\left(1 - \lambda\frac{t}{n} + \left[\lambda\frac{t}{n} - F\left(\frac{t}{n}\right)\right]\right) \\
&= n\log\left(1 - \left(\lambda-\varepsilon(n)\right)\frac{t}{n}\right) \\
&= -\left(\lambda-\varepsilon(n)\right)t + \left[(\lambda - \varepsilon(n))t\right]^2O\left(\frac{1}{n}\right).
}$$
Clearly (applying theorems about the limits of products and sums of continuous functions) this has a limit as $n\to \infty$ and it equals $-\lambda t$, showing that $G_n(t)=\exp(\log(G_n(t)))$ has the limiting value $\exp(-\lambda t)$, QED.
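A quick numerical sanity check (my addition, not part of the original argument): take the hypothetical concrete case $Z \sim \text{Uniform}(0,1)$, so that $f$ is continuous at $0$ with $\lambda = f(0) = 1$, and $X = n\,Z_{(1)}$ should be approximately Exp(1) for large $n$:

```python
import random

random.seed(7)

# Z ~ Uniform(0, 1), lambda = f(0) = 1, X = n * min(Z_1, ..., Z_n)
n, reps = 100, 10000
draws = [n * min(random.random() for _ in range(n)) for _ in range(reps)]

mean_x = sum(draws) / reps                   # Exp(1) has mean 1
tail_1 = sum(d > 1.0 for d in draws) / reps  # P(X > 1) should be near exp(-1) ~ 0.368
print(round(mean_x, 3), round(tail_1, 3))
```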
|
40,763
|
Methods for testing a Bayesian method's software implementation
|
Bayesians don't lose the relative frequency-based interpretation of probability. In particular, if you define this procedure:
simulate from the prior,
then simulate from the model using those values from the prior, and
estimate the parameters using the same prior.
Then your credible intervals should have the appropriate frequentist coverage, i.e. 95% intervals should include the true parameter in 95% of your analyses, over repeated replicates of the procedure.
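A minimal sketch of this procedure (my Python illustration; the conjugate Beta-Binomial model and all the constants are assumptions chosen for the example, and the credible interval is approximated by Monte Carlo quantiles of posterior draws):

```python
import random

random.seed(42)

A, B = 2.0, 2.0      # Beta(2, 2) prior on theta (assumed for illustration)
N = 30               # binomial sample size per replicate
REPS = 400           # prior -> data -> posterior replicates
POST_DRAWS = 500     # Monte Carlo draws from each conjugate posterior

covered = 0
for _ in range(REPS):
    theta = random.betavariate(A, B)                    # 1. simulate from the prior
    k = sum(random.random() < theta for _ in range(N))  # 2. simulate data from the model
    # 3. posterior under the same prior is Beta(A + k, B + N - k); approximate
    #    its central 95% credible interval via sorted posterior draws.
    post = sorted(random.betavariate(A + k, B + N - k) for _ in range(POST_DRAWS))
    lo, hi = post[int(0.025 * POST_DRAWS)], post[int(0.975 * POST_DRAWS)]
    covered += (lo <= theta <= hi)

coverage = covered / REPS  # should be close to 0.95 over repeated replicates
print(coverage)
```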
|
40,764
|
Missing observations in a linear mixed model
|
You don't need to omit an individual if there is only missingness for a few observations. In fact, you want to include participants with missingness to increase your power and avoid biasing your results. The nice thing about mixed-effects is that they handle missing data pretty well with maximum likelihood estimation, especially in the context of longitudinal designs.
After taking a look at the syntax below, you'll notice that the estimates from the full model and the missingness model are fairly similar, given the context of the extremely small sample size. Additionally, if you specify a random slope you can also extract Empirical Bayes estimates using the ranef() function, which gives you an estimated slope for each participant.
These are calculated using both information from the individual and information from the rest of the sample. In the case of more extreme observations or individuals with smaller sample sizes (due to missing data), estimates will be adjusted toward the mean of the overall sample, which is a concept known as "shrinkage." There is a pretty good review on growth curves in a mixed-effects framework that can be found here, although the author uses the nlme package rather than lme4.
require(lme4)
# Set the seed to make the code reproducible
set.seed(28)
# Simulate a growth curve for 4 participants, each with 4 time points. Assume a random
# intercept and fixed slope.
simData <- expand.grid(ID = 1:4, Time = 0:3)
simData <- simData[order(simData$ID), ]
randInt <- rnorm(n = 4, mean = 0, sd = 2)
slope <- 2
randError <- rnorm(n = nrow(simData), mean = 0, sd = 2)
response <- numeric(nrow(simData))
for(i in 1:nrow(simData)){
df <- simData[i, ]
response[i] <- randInt[df$ID] + df$Time * slope + randError[i]
}
simData$response <- response
# Use lmer to model the growth curve
fullMod <- lmer(response ~ Time + (1 | ID), data = simData)
summary(fullMod)
# Number of obs: 16, groups: ID, 4
# Add in missingness for only one time point
simData[2, 3] <- NA
missMod <- lmer(response ~ Time + (1 | ID), data = simData)
summary(missMod)
# Number of obs: 15, groups: ID, 4
|
40,765
|
SVM basic theory?
|
Nonlinear SVM is a synonym for SVM with the kernel trick.
The idea is that if there is no linear separation in the original space, it may exist in some other space, quite likely of a higher dimension. The kernel trick allows one to construct this space implicitly, by replacing the dot product in the original space with a kernel function; that's why the result looks as if there were a nonlinear boundary in the original space.
The scalar stuff is just a way of interpreting the boundary hyperplane. You can project the points onto an axis perpendicular to the boundary, anchor zero at the intersection, and then each object gets a single coordinate that is positive for one class and negative for the other.
See above.
Tough stuff; sometimes you know that the data will fit a particular kernel, sometimes you hope that RBF will be good enough this time, but in general the kernel (and its parameters) is an extra hyperparameter to be tuned.
See 2.
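As a toy illustration of point 2 (a pure-Python sketch, not from the original answer): data that no single threshold can separate in one dimension become linearly separable after an explicit feature map $x \mapsto (x, x^2)$, which is the explicit version of what a polynomial kernel does implicitly:

```python
# Class 1: |x| > 1; class 0: |x| < 1. No single 1-D threshold separates them.
xs     = [-2.0, -1.5, -0.4, 0.3, 1.6, 2.1]
labels = [1 if abs(x) > 1 else 0 for x in xs]

def separable_by_threshold(values, labels):
    """Can some threshold put all 0s on one side and all 1s on the other?"""
    for t in sorted(values):
        left  = {l for v, l in zip(values, labels) if v <= t}
        right = {l for v, l in zip(values, labels) if v > t}
        if len(left) <= 1 and len(right) <= 1:
            return True
    return False

in_1d     = separable_by_threshold(xs, labels)
in_mapped = separable_by_threshold([x * x for x in xs], labels)  # 2nd coord of (x, x^2)
print(in_1d, in_mapped)  # not separable in 1-D, separable after the map
```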
|
40,766
|
Run time analysis of the clustering algorithm (k-means)
|
Looking at these notes, the time complexity of Lloyd's algorithm for k-means clustering is given as:
O(n * K * I * d)
n : number of points
K : number of clusters
I : number of iterations
d : number of attributes
My gut feeling is that in your case the number of iterations (and the number of attributes) is assumed to be constant.
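A rough sketch of where the n * K * I factor comes from (my toy pure-Python Lloyd implementation, not from the notes): the assignment step performs n * K distance evaluations per iteration, each costing O(d):

```python
import random

random.seed(0)

def lloyd(points, centers, iters):
    """Lloyd's algorithm, counting point-to-center distance evaluations."""
    evals = 0
    for _ in range(iters):
        # Assignment step: n * K squared-distance computations, each O(d).
        clusters = [[] for _ in centers]
        for p in points:
            dists = [sum((pi - ci) ** 2 for pi, ci in zip(p, c)) for c in centers]
            evals += len(centers)
            clusters[min(range(len(centers)), key=dists.__getitem__)].append(p)
        # Update step: each non-empty center becomes the mean of its cluster.
        for j, cl in enumerate(clusters):
            if cl:
                centers[j] = [sum(col) / len(cl) for col in zip(*cl)]
    return centers, evals

n, K, I, d = 60, 3, 5, 2
pts = [[random.random() for _ in range(d)] for _ in range(n)]
_, evals = lloyd(pts, [list(p) for p in pts[:K]], I)
print(evals, n * K * I)  # the assignment step dominates the count
```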
|
40,767
|
Why the trees generated via bagging are identically distributed?
|
The bagging technique uses bootstrap samples (random samples of the same length, drawn with replacement) to train each tree in the ensemble. Thus, the samples used to build each individual tree come from the same population as the original sample. This is why the input and target variables are called ID (identically distributed, i.e. having the same distribution).
More than that, because the samples are drawn randomly, they are also independent (knowing the elements of one sample gives no hint about the elements of another). This is usually denoted IID (independent and identically distributed).
The expectation of the mean is preserved because the input and target variables are IID (the samples are independent and drawn from the same population). [See the Law of Large Numbers.]
Because trees are basically piecewise-constant approximations, what they learn are constant averages over various regions. The trees only define regions of the input space (the leaf nodes), and on each region they predict an average.
Those constants are averages of some sort (mean, median) depending on the loss function. So whatever you can say about averages of the input and target variables, you can say about the trees themselves (that they preserve the expectation of the average).
Bagging is used to reduce variance by averaging the models, while preserving the expectation of those variables as much as possible.
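A quick sketch of the resampling step (my Python illustration, not from the original answer): every bootstrap sample has the same length as the original and is drawn with replacement from the same empirical distribution, so the bootstrap means all estimate the same quantity:

```python
import random

random.seed(1)

data = [random.gauss(10, 2) for _ in range(100)]  # the original sample
sample_mean = sum(data) / len(data)

# Each bootstrap sample: same length, drawn with replacement from `data`.
boot_means = []
for _ in range(2000):
    boot = random.choices(data, k=len(data))
    boot_means.append(sum(boot) / len(boot))

avg_boot_mean = sum(boot_means) / len(boot_means)
print(round(sample_mean, 2), round(avg_boot_mean, 2))
```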
I hope this was reasonably clear; I will try to improve it later when I have the chance.
|
40,768
|
Why the trees generated via bagging are identically distributed?
|
I think rapaio is conflating a couple of key concepts and in doing so has misinterpreted the OP's question. Yes, the bootstrap samples utilized within a bagging algorithm are IID. However, the bagging estimator is ID, NOT IID.
The bagging algorithm will generate B trees and the corresponding prediction estimates, $\{\hat{f}^b(X)\}_{b=1}^B$. Since each tree is estimated using draws from the same distribution, the identical-distribution assumption holds. However, the independence assumption does not! For example, imagine that there is one very strong predictor in the data. In each tree this strong predictor will likely be the first split, so the predictions of most trees will be similar. Said another way, the predictions will be correlated (i.e. not independent).
Think about it: the bagging algorithm takes a sequence of IID random variables (the bootstrap samples) and turns them into a sequence of ID random variables (the tree estimates).
The bagging algorithm is still helpful. The bagging estimator is unbiased; bias is unaffected by the lack of independence. Therefore the average of the $\hat{f}^b(X)$ has the same expected value as any single tree, i.e. $E\left(\frac{1}{B} \sum_{b=1}^B \hat{f}^b(X)\right) = E\left(\hat{f}^b(X)\right)$. The variance of the bagging estimator will, however, be affected by non-independence; remember $Var(X+Y) = Var(X) + Var(Y) + 2Cov(X,Y)$. It turns out that the bagging estimator still has a smaller variance than a single tree estimator (see pg 518 of Elements of Statistical Learning). However, we can further reduce the estimator's variance by attempting to decorrelate the trees. This is where the notion of Random Forest comes from. Again see pg 518 of Elements of Statistical Learning or pg 319 of Introduction to Statistical Learning for more.
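A numerical illustration of the variance point (my Python sketch, not from the original answer): for B identically distributed unit-variance predictions with pairwise correlation rho, the variance of their average is rho + (1 - rho)/B, so averaging helps but the correlated part never goes away:

```python
import random
from statistics import pvariance

random.seed(3)

B, rho, reps = 25, 0.5, 20000

# Build B standard-normal "tree predictions" with pairwise correlation rho:
# X_b = sqrt(rho) * Z0 + sqrt(1 - rho) * Z_b, then average them (the bagged estimate).
bagged = []
for _ in range(reps):
    z0 = random.gauss(0, 1)
    xs = [rho ** 0.5 * z0 + (1 - rho) ** 0.5 * random.gauss(0, 1) for _ in range(B)]
    bagged.append(sum(xs) / B)

theory = rho + (1 - rho) / B  # rho*sigma^2 + (1 - rho)*sigma^2 / B with sigma^2 = 1
print(round(pvariance(bagged), 3), round(theory, 3))
```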
|
40,769
|
What is copula transformation
|
I believe you're just referring to transforming each marginal distribution to $U[0,1]$ via the probability integral transform, which when applied to each of the variables individually, transforms a d-dimensional distribution to its copula.
For example, if you had a bivariate normal $(X,Y)$, and transform $U=F_X(X)$ and $V=F_Y(Y)$, then $(U,V)$ is a Gaussian copula.
e.g. see here
There are some recommended introductory readings here
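A numeric sketch of the transform (my Python illustration; the standard-normal margins and the correlation value are assumptions for the example): after applying each marginal cdf, the margins are approximately Uniform(0,1) while the dependence between the two coordinates remains:

```python
import random
from statistics import NormalDist

random.seed(5)

std_norm = NormalDist()  # standard-normal margins (an assumed example)
rho = 0.7

# Simulate a bivariate normal (X, Y) with correlation rho, then apply the
# probability integral transform to each margin separately.
us, vs = [], []
for _ in range(20000):
    x = random.gauss(0, 1)
    y = rho * x + (1 - rho ** 2) ** 0.5 * random.gauss(0, 1)
    us.append(std_norm.cdf(x))  # U = F_X(X)
    vs.append(std_norm.cdf(y))  # V = F_Y(Y)

# Each margin of (U, V) should now look Uniform(0, 1) (mean near 1/2),
# while (U, V) jointly follows a Gaussian copula.
print(round(sum(us) / len(us), 3), round(sum(vs) / len(vs), 3))
```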
|
40,770
|
What is copula transformation
|
Sklar's theorem says (in the bivariate case) that for any joint distribution function $H$ of two random variables $X$ and $Y$, with univariate margins $F_x(x)$ and $G_y(y)$ of $X$ and $Y$ respectively, there exists a copula function $C$ such that:
$$H(x,y)= C(F_x(x),G_y(y)) $$
If all margins are continuous, the copula is unique.
The idea of the copula transformation is this: copula models allow one to model the margins separately from the dependence structure. Hence, we transform all the variables to be uniform in order to capture the pure dependence structure between the variables without any effect of the margins.
Since $u=F(x)$, we have $F^{-1}(u)=x$, so we can easily go back to the original data.
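A tiny round-trip illustration of that last point (my addition; the normal margin is just an assumed example):

```python
from statistics import NormalDist

F = NormalDist(mu=3.0, sigma=2.0)  # any continuous margin works; normal is assumed here

x = 4.2
u = F.cdf(x)           # to the uniform scale: u = F(x)
x_back = F.inv_cdf(u)  # and back to the original scale: x = F^{-1}(u)
print(round(u, 4), round(x_back, 4))
```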
|
40,771
|
OLS estimate of a linear model with dummy variable
|
The model that we have is
$$y_i = \beta_0 + \beta_1 x_i + \varepsilon_i, \qquad x_i \in \{0,1\}.$$
Knowing that $X$ is a dummy variable, we can get the following: $\bar x = n_1/n$ (where $n_1$ is the number of observations with $x_i=1$) and $\sum_i x_i y_i = \sum_{i:\,x_i=1} y_i = n_1 \bar y_1$.
Using the above information, we can substitute for the components of the OLS estimators and, by simplifying, we get
$$\hat\beta_1 = \frac{\sum_i (x_i-\bar x)(y_i-\bar y)}{\sum_i (x_i-\bar x)^2} = \bar y_1 - \bar y_0, \qquad \hat\beta_0 = \bar y - \hat\beta_1 \bar x = \bar y_0,$$
where $\bar y_0$ and $\bar y_1$ are the sample means of $y$ in the $x=0$ and $x=1$ groups.
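A numerical check of this standard result (my Python sketch with made-up data): with a 0/1 regressor, the OLS slope equals the difference of group means and the intercept equals the mean of the x = 0 group:

```python
# Made-up data: three observations with x = 0, four with x = 1.
x = [0, 0, 0, 1, 1, 1, 1]
y = [2.0, 3.0, 4.0, 7.0, 8.0, 6.0, 9.0]

n = len(x)
xbar = sum(x) / n
ybar = sum(y) / n
b1 = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
      / sum((xi - xbar) ** 2 for xi in x))
b0 = ybar - b1 * xbar

y0 = [yi for xi, yi in zip(x, y) if xi == 0]  # x = 0 group
y1 = [yi for xi, yi in zip(x, y) if xi == 1]  # x = 1 group
print(b0, b1)  # b0 = mean(y0), b1 = mean(y1) - mean(y0)
```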
|
40,772
|
How to prove that the chance of occurrence of event A increases in presence of event B, when both have a different Poisson distribution?
|
If A is Poisson, you would speak of its rate of occurrence, rather than its probability.
If this is to be based on data, rather than a fully identified joint distribution, a simple approach is to condition on the observed values of B and ask whether the observed counts of A tend to be higher when the count of B is higher.
So the question would be 'How would we see if the rate of occurrence of A increases when the occurrence of B is higher?'
If that's your intent, we can certainly get somewhere (if not, please edit your question to clarify your actual intent):
You might do something simple, like look for a positive correlation (not necessarily linear - you might look at monotonic association via something like a nonparametric correlation, perhaps).
Another approach would be to do a Poisson regression (GLM) of the counts of A on counts of B (hopefully including any other known or likely important covariates); in some other cases, B might be treated as an exposure by including an offset in a Poisson regression (but I don't think that would be suitable for this particular example).
Here's an illustration with simulated data (in this case I know the model, because I created the data):
A scatterplot of the simulated data (not reproduced here) shows that the two are positively associated. Their Pearson correlation is:
cor(x,y)
[1] 0.8057106
Here I fit a GLM (in R) with identity link (more commonly, a log link would be used, but the model here is closer to the data generating model in this particular example). In this case, we fit $E(Y|X=x) = \beta_0+\beta_1 x$, which looks like a regression model, but with the GLM here the model takes account of the fact that the observations are conditionally Poisson. The command
summary(glm(y~x,family=poisson(link=identity)))
fits the model mentioned above, with output:
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 13.9571 3.6826 3.79 0.000151 ***
x 0.9533 0.1806 5.28 1.29e-07 ***
---
Signif. codes: 0 ‘***’ 0.001 ‘**’ 0.01 ‘*’ 0.05 ‘.’ 0.1 ‘ ’ 1
(Dispersion parameter for poisson family taken to be 1)
Null deviance: 42.907 on 39 degrees of freedom
Residual deviance: 15.284 on 38 degrees of freedom
AIC: 232.81
In this case, the fitted model reproduces the actual process used to create it fairly well; the true model is that $x$ was generated from a conditionally Poisson model (with mean that took two different values), and $y$ was equal to $x$ plus a 'background' Poisson process (with constant intensity).
The interpretation here is that when $x$ increases by $1$, on average the $y$ count increases by about $0.95$
The intercept term picked up the background process (which had population mean 13), and the slope term picked up the $x$ effect (which had population coefficient 1).
As gung suggests below, if you take daily rainfall and daily hail as Bernoulli (rained or not, hailed or not, for each day), then you can deal with probability rather than counts, and there are a variety of ways to model that. His suggestions in comments are a good way of looking at the problem (quite a bit more sophisticated than my suggestions here), and would get you closer to conditioning on estimated underlying probability per unit time rather than directly observed rate per unit time.
|
40,773
|
How to log transform data with a large number of zeros
|
Do you know what the sensitivity of the machine is? If it cannot reliably record any values less than 100 (and therefore reports them as 0), then that means all your 0's are values between 0 (or negative infinity) and 100, adding 0.5 would underestimate this, 50 would be a more reasonable value, or possibly 100. It would make the most sense to choose the added value (and maybe only add it to the 0's, not all the values) based on the machine precision.
There are also ways to estimate the added value that gives the best normal approximation of the data (I think there was some of this in the original Box-Cox paper), or a logspline fit can be used to estimate a distribution, with your zeros being treated as interval-censored values.
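To make the detection-limit idea concrete, here is a small sketch (data and variable names invented for illustration; the limit of 100 comes from the question):

```python
import numpy as np

# Simulated measurements: the machine reports 0 for anything below its
# detection limit of 100, so the zeros are really censored values.
detection_limit = 100.0
raw = np.array([0.0, 0.0, 150.0, 320.0, 0.0, 1200.0, 480.0])

# Replace the censored zeros by half the detection limit before taking
# logs, rather than adding a tiny constant to every observation.
adjusted = np.where(raw == 0, detection_limit / 2, raw)
log_values = np.log(adjusted)
```

This only touches the zeros, so the non-censored measurements are left exactly as recorded.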
|
40,774
|
How to log transform data with a large number of zeros
|
In your case, I would treat zeros separately from the other data points. You can work out a model for the non-zero elements. Adding a small value $\epsilon$ at least works for data visualization purposes.
Btw. there was an almost similar discussion before here:
How should I transform non-negative data including zeros?
|
40,775
|
Differences between Multi-layer NN, Hopfield, Helmholtz and Boltzmann machines
|
Multilayer NNs (MLPs) and Hopfield networks are deterministic networks. Concretely, the first can be shown to estimate the conditional average of the target data. For details you may have a look at Bishop's book on neural networks.
The Hopfield network is a deterministic recurrent neural network: once the initial state is given, its dynamics evolve by descending a Lyapunov (energy) function. See the papers by Hopfield and Tank. It has been shown that it can solve combinatorial problems and learn time series.
Helmholtz and Boltzmann machines are stochastic networks, meaning that given an input, the state of the network does not converge to a unique state, but to an ensemble distribution. A probability distribution of the state of the neural network. They are the stochastic equivalent of the Hopfield network.
One can actually prove that in the limit of absolute zero, $T \rightarrow 0$, the Boltzmann machine reduces to the Hopfield model.
You may look at the early papers by Hinton on the topic to see the basic differences, and the new ones to understand how to make them work.
Also, the Boltzmann and Helmholtz machines are strongly related to Markov Random Fields and Conditional Random Fields, as explained here and here. This leads to development of algorithms for inference that can be applied to both kinds of models, as for example fractional belief propagation.
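That $T \rightarrow 0$ limit is easy to see numerically. Below is a minimal sketch (my own illustration, not from the cited papers): a stochastic $\pm 1$ unit in a Boltzmann machine takes state $+1$ with probability $\sigma(2h/T)$, and as $T \rightarrow 0$ this collapses to the deterministic sign rule of the Hopfield update.

```python
import numpy as np

def p_plus(local_field, T):
    """Probability that a +/-1 stochastic unit takes state +1 at temperature T."""
    return 1.0 / (1.0 + np.exp(-2.0 * local_field / T))

h = 0.7  # an arbitrary positive local field
print(p_plus(h, T=5.0))    # noticeably random update at high temperature
print(p_plus(h, T=0.01))   # essentially 1: the deterministic Hopfield rule
```

At high temperature the unit flips almost at random; near zero temperature it deterministically follows the sign of its local field.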
|
40,776
|
Why the most of world correlations are positive?
|
Although ttnphns's comment is slightly in jest - it actually has bearing on your question. We may consider different phenomena as being caused by a set of related factors (which may or may not be measured). So for example say we have a latent factor $\lambda$ that affects responses to a set of Likert items on a survey.
$$\begin{align*}
y_1 = 0.5\lambda + e \\
y_2 = 0.7\lambda + e \\
y_3 = 0.6\lambda + e
\end{align*}$$
In this example $y_1$, $y_2$ and $y_3$ will all have a positive correlation because they are all related the same way through $\lambda$. For many datasets it may be that many of the items have some variable that is underlying in common. For example in the vitamin and mineral contents if the food samples are of different size I would expect more vitamins and minerals for larger food samples, making the marginal correlations of each positively correlated. Another explanation might be producers that intentionally increase vitamin content also increase mineral content (as they aren't really competing with one another and may be marketed as healthy foods).
In the case of Likert items, as Peter Flom stated in a comment, we typically construct the survey to identify these underlying latent factors, so it is by construction that many items are positively correlated. Also the anchors are somewhat arbitrary, but questions stated positively (e.g. "Do you support the death penalty?") tend to be measured more accurately than negated questions (e.g. "Do you not support the death penalty?"). It is also the case that you could assign different numeric values to the Likert items, but it is typical to have a scale of $1$ to $n$ (with $n$ being the different potential responses) as the default for coding the values.
Note you could arbitrarily flip this coding though, so if all of the correlations in the sample were positive, you could flip the coding of half the variables and turn many of those correlations negative. Oftentimes there is an arbitrariness in how we represent values, e.g. if you have a nominal category of men and women you could set $\text{men} = 1$ and $\text{women} = 0$ or you could do it the obverse way. Again, people may make these arbitrary coding decisions to make items appear to have positive correlations.
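The three equations above can be simulated directly to see the effect; the loadings 0.5, 0.7, 0.6 come from the example, everything else (noise scale, sample size) is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
lam = rng.normal(size=n)          # shared latent factor lambda
e = rng.normal(size=(3, n))       # independent noise per item

y1 = 0.5 * lam + e[0]
y2 = 0.7 * lam + e[1]
y3 = 0.6 * lam + e[2]

# All off-diagonal correlations are positive, because every item
# loads on lambda with the same sign.
corr = np.corrcoef([y1, y2, y3])
```

Flipping the sign of one loading (or recoding one item) would flip the sign of its correlations with the other two.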
|
40,777
|
Why the most of world correlations are positive?
|
To expand on Scortchi's/AndyW's point of confounding factors:
For the food stuff I think the water content is an extremely important confounding variable. In particular, if you mix in fruits that are naturally eaten with very high water content (tomatoes, cucumbers) with fruit where the "natural" state is already dried (raisins) and which therefore contain more of about everything, the huge difference in the water content can actually influence the correlation.
The effect becomes very clear if you consider a small table that just lists raisins and grapes...
Note that water is not listed in the table, so the negative correlations are just not shown. So another reason (in addition to @Peter Flom's comment) is that the way people tabulate data can also emphasize positive correlations: if you want to know the water content, you just have to subtract the proteins, lipids, and carbohydrates (depending on the way carbohydrates are listed, also fiber) from the 100 g raw weight - the information is redundant. But because the water content is of less interest for these tables than the other nutrient contents, the subtraction is left to the reader.
And then, we actually know certain (co)relations in the data, e.g.
the energy content for proteins and non-fiber carbohydrates (both 17 kJ/g) and lipids (37 kJ/g) etc. is well known, and the total energy is usually just calculated as the sum of all those contributions
Na⁺ to K⁺ concentrations are similar among plants and among animals (much higher difference between plant and animals: plants have comparably more K⁺)
These tables sometimes list subcategories which then obviously have an upper bound. Consider
carbohydrates,
thereof mono- and disacharides
lipids
thereof saturated lipids
These relations tend to produce positive correlations as well, which is again caused by the way we group and tabulate our data.
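Here is a hedged simulation of that dilution effect (all numbers invented for illustration): nutrient contents per gram of dry matter are independent by construction, but expressing them per 100 g fresh weight introduces the shared water/dry-matter factor and hence a positive correlation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# Nutrient contents per gram of dry matter: independent by construction.
vitamin_dry = rng.lognormal(mean=0.0, sigma=0.3, size=n)
mineral_dry = rng.lognormal(mean=0.0, sigma=0.3, size=n)

# Water content varies widely across foods (think grapes vs. raisins).
dry_fraction = rng.uniform(0.05, 0.95, size=n)

# Contents per 100 g fresh weight share the dry_fraction factor.
vitamin_fresh = 100 * dry_fraction * vitamin_dry
mineral_fresh = 100 * dry_fraction * mineral_dry

r_dry = np.corrcoef(vitamin_dry, mineral_dry)[0, 1]        # near 0
r_fresh = np.corrcoef(vitamin_fresh, mineral_fresh)[0, 1]  # clearly positive
```

The correlation on the fresh-weight scale is entirely an artifact of the shared water content.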
|
40,778
|
Training a convolutional neural network
|
If I understand you correctly, the question is how to train the net if you have pooling layers? Well, the weights in pooling layers are not that different from the ones in "normal" layers. Imagine you have a max pooling layer with grid size 3x3. Imagine further that for a given training example, pixel number 5 (that is, in position (2,2) ) has had the max value in forward propagation, i.e. its value has been passed through the max pooling layer. When doing backprop for that sample, the weight between your pixel number 5 and the output of the pooling is simply one, while for the other eight pixels it is zero. And since the max pooling does not do any further transformation, the error used is that from the layer that came after the max pooling layer. For a more mathematical formulation, there is a nice website: http://andrew.gibiansky.com/blog/machine-learning/convolutional-neural-networks/
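The routing rule described above can be sketched for a single pooling window (function names and shapes are illustrative, not from any particular framework):

```python
import numpy as np

def maxpool_forward(window):
    """Forward pass over one pooling window; remember which pixel won."""
    idx = np.unravel_index(np.argmax(window), window.shape)
    return window[idx], idx

def maxpool_backward(grad_out, idx, shape):
    """Backward pass: the winning pixel gets weight 1, the rest weight 0."""
    grad_in = np.zeros(shape)
    grad_in[idx] = grad_out
    return grad_in

window = np.array([[1., 4., 2.],
                   [7., 3., 0.],
                   [5., 6., 2.]])
out, idx = maxpool_forward(window)               # out = 7.0, idx = (1, 0)
grad_in = maxpool_backward(1.5, idx, window.shape)
# Only position (1, 0) receives the upstream gradient of 1.5.
```

Since max pooling applies no further transformation, the upstream error passes through the winner unchanged, exactly as described above.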
|
40,779
|
Training a convolutional neural network
|
Back-propagation is "just" a nice trick that combines the chain rule with dynamic programming so that you get an efficient method to calculate the gradient of the network with respect to its parameters.
Whether you can calculate the gradient does not depend on whether there are weights. You still have to calculate the derivatives for the subsampling layers, even though there are no weights. It's easy to imagine this if you consider that subsampling is just a very special kind of transfer function.
|
40,780
|
Why does differencing time-series introduce negative autocorrelation
|
Take the simple white noise process $Z_t$, with $EZ_t=0$ and $cov(Z_t,Z_{t-h})=0$ for all $h\neq 0$. Now take its difference $Y_t=Z_t-Z_{t-1}$ and calculate the first-lag autocovariance:
$$cov(Y_t,Y_{t-1})=cov(Z_t-Z_{t-1},Z_{t-1}-Z_{t-2})=-cov(Z_{t-1},Z_{t-1})=-var(Z_t)$$
Hence $corr(Y_t,Y_{t-1})=-1/2.$ (Since $var(Y_t)=2var(Z_t)$).
Now for any (causal) stationary process $X_t$ there exists such a white noise process $Z_t$ and coefficients $\psi_j$ such that $X_t=\sum_{j=0}^{\infty}\psi_jZ_{t-j}$. This is courtesy of the Wold decomposition. Thus (taking $var(Z_t)=1$ for simplicity, with $\psi_0=1$)
$$cov(X_t,X_{t+h})=\sum_{j=0}^\infty\psi_j\psi_{j+h}$$
For the differenced version $Y_t=X_t-X_{t-1}$ we have
$$Y_{t}=Z_{t}+(\psi_1-1)Z_{t-1}+\sum_{j=2}^{\infty}(\psi_{j}-\psi_{j-1})Z_{t-j}$$
and
$$cov(Y_t,Y_{t-1})=\psi_1-1+\sum_{j=2}^{\infty}(\psi_{j}-\psi_{j-1})(\psi_{j-1}-\psi_{j-2})$$
Now more often than not the coefficients $\psi_j$ are decreasing and less than one. So we have that $\psi_1-1<0$, and this term is larger in magnitude than the remaining sum. This would be one (very obvious) explanation why the first autocovariance is negative. More can be said with a more careful analysis of the terms of the sum, but I think I managed to convey the general idea.
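The $corr(Y_t,Y_{t-1})=-1/2$ result for differenced white noise is easy to check by simulation (a Python sketch; the sample size is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
z = rng.normal(size=200_000)   # white noise Z_t
y = np.diff(z)                 # first difference Y_t = Z_t - Z_{t-1}

# Lag-1 sample autocorrelation: should be close to -0.5.
r1 = np.corrcoef(y[:-1], y[1:])[0, 1]
```

Differencing a series with no serial dependence at all manufactures a lag-1 autocorrelation of $-1/2$ out of nothing, which is exactly the over-differencing signature.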
|
40,781
|
Why does differencing time-series introduce negative autocorrelation
|
Unwarranted differencing is like unwarranted drugs: it can have nasty side effects. The spike in the second differences suggests an MA coefficient, which will effectively countermand the unwarranted differencing. The AIC/BIC approach just doesn't always work, as it often suggests over-differencing and an over-populated ARMA structure. In my experience it seldom identifies a parsimonious model except in trivial cases, due to non-Gaussian complications.
|
40,782
|
nearPD function in Matrix package
|
The nearPD function (in the Matrix package) uses an algorithm developed by Dr. Nick Higham and others. Higham describes the algorithm here (PDF): Higham, Nick (2002) Computing the nearest correlation matrix - a problem from finance; IMA Journal of Numerical Analysis 22, 329-343. In a nutshell, it finds the "closest" (minimum difference in Frobenius norm) positive semi-definite matrix whose values are constrained to $[-1, 1]$ with $1$'s on the diagonal.
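For intuition, here is a stripped-down sketch of the alternating-projections idea behind that algorithm. This is not the actual nearPD implementation (which adds Dykstra's corrections, weighting and convergence checks, and without those corrections plain alternation need not land on exactly the nearest matrix): it just alternates between projecting onto the positive semi-definite cone (clipping negative eigenvalues) and onto the set of matrices with unit diagonal.

```python
import numpy as np

def nearest_corr_sketch(A, n_iter=100):
    """Crude alternating projections toward a valid correlation matrix."""
    X = (A + A.T) / 2
    for _ in range(n_iter):
        # Project onto the PSD cone: clip negative eigenvalues to zero.
        w, V = np.linalg.eigh(X)
        X = V @ np.diag(np.clip(w, 0, None)) @ V.T
        # Project onto the set of matrices with unit diagonal.
        np.fill_diagonal(X, 1.0)
    return X

# Higham's classic indefinite example: a "correlation" matrix
# with an eigenvalue of 1 - sqrt(2) < 0.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
C = nearest_corr_sketch(A)
```

The result has unit diagonal and no (numerically) negative eigenvalues, which is the feasibility part of the problem; the Frobenius-optimality part is what Higham's refinements take care of.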
|
40,783
|
In importance sampling, why should the importance density have heavier tails?
|
Heuristically, it's because, for many situations of interest, what happens in the tails of the distribution is important, maybe more important than what happens in the middle, so undersampling the tails results in relatively inaccurate estimates of the target quantity.
More formally, consider a known function $h(x)$, along with original distribution $f$ and importance sampling distribution $g$. Assume we are attempting to estimate:
$$\mu = \mathbb{E}h = \int h(x)f(x)\text{d}x$$
but we are forced by circumstance to resort to importance sampling. Our importance sampling estimate $\hat{\mu}_h$ is:
$$\hat{\mu}_h = \frac{1}{n}\sum_{i=1}^nh(x_i)f(x_i)/g(x_i)$$
where the $x_i \sim g$. The variance of our estimate is:
$$\sigma^2(\hat{\mu}) = {1 \over n}\left[\int {[h(x)f(x)]^2 \over g(x)} \text{d}x - \mu^2\right] = {1 \over n}\left[\int \left({f(x) \over g(x)}\right) h^2(x)f(x)\text{d}x - \mu^2 \right]$$
For comparison, if we had a sample of $h(x)$ with $x$ drawn from $f$, the variance of the sample mean of the $h$ is $\sigma^2(\bar{h}) = {1 \over n}\left[\int h^2(x)f(x)\text{d}x - \mu^2 \right]$.
Now, consider $f,g$ such that $f/g$ is unbounded. Typically this would happen in the tails of the two distributions and would come about because your sampling density $g$ has thinner tails than $f$, although you could easily construct examples where it happened in the center. Depending upon $h^2f$, this could result in a very large or even infinite $\sigma^2(\hat{\mu})$. (If $h = 0$ over the regions where $f/g$ is large, of course, you won't have an issue - but that is a very problem-specific, and I expect rare, situation.) On the other hand, if $f/g < M$ for some $M > 1$, it is clear that $\sigma^2(\hat{\mu}) \leq M \sigma^2(\bar{h})$. We'll have not only prevented a possible catastrophe in estimation, we'll have bounded how poorly we do relative to using the sample mean of the $h$ as an estimate.
In fact, importance sampling can be a variance-reduction technique, even relative to the sample mean. By "over-sampling" those regions which contribute disproportionately heavily to $h^2f$, we can increase the accuracy of our final estimate. A trivial example of this is when $h = 0$ outside some region; an importance sampling distribution that also equals zero outside that region will prevent us from wasting samples on $x_i$ from a region that contributes $0$ to $\mathbb{E}h$. A more realistic example is estimating $h = $ the mean absolute deviation of a $t(3)$ variate which we know is centered at $0$; we'll compare the sample mean, an importance sampling estimate based upon the Normal distribution, and an importance sampling estimate based upon the Cauchy distribution. Our sample size is 100, and we repeat the experiment $N = 10,000$ times to evaluate the performance of the three estimators.
N <- 10000
results <- data.frame(list(Normal=rep(0,N), Cauchy=rep(0,N), t3=rep(0,N)))
for (i in 1:N) {
x_norm <- rnorm(100)
x_cauchy <- rt(100, df=1)
results[i,1] <- mean(abs(x_norm)*dt(x_norm, df=3)/dnorm(x_norm))
results[i,2] <- mean(abs(x_cauchy)*dt(x_cauchy, df=3)/dt(x_cauchy, df=1))
results[i,3] <- mean(abs(rt(100, df=3)))
}
apply(results,2,var)
Repeating this five times results in the following estimated variances of the three estimates of MAD:
Normal Cauchy t3
3.392863921 0.005228449 0.016933091
4.987301438 0.005108166 0.018078036
21.527506149 0.005078266 0.018151188
1.314209463 0.005108059 0.017396005
2.829562814 0.005163212 0.017341226
Clearly the Cauchy-based estimator is the winner, and that "21.52..." result for the Normal-based estimator should make us suspect that the true variance might not be finite.
The moral of the story is: use proposal distributions with heavier tails than the original, unless you have a good reason not to.
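For readers who prefer Python, here is a rough standard-library re-implementation of the Normal-vs-Cauchy proposal comparison for the $t(3)$ MAD (my own sketch, not from the original post; the $t(3)$ and Cauchy densities are written out in closed form, and the number of replications is reduced to keep it quick):

```python
import math
import random

def dnorm(x):
    # standard normal density
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def dt3(x):
    # Student-t density with 3 degrees of freedom
    return (2.0 / (math.sqrt(3.0) * math.pi)) / (1.0 + x * x / 3.0) ** 2

def dcauchy(x):
    # standard Cauchy density (t with 1 df)
    return 1.0 / (math.pi * (1.0 + x * x))

def variance(v):
    m = sum(v) / len(v)
    return sum((x - m) ** 2 for x in v) / (len(v) - 1)

random.seed(1)
reps, n = 2000, 100
est_norm, est_cauchy = [], []
for _ in range(reps):
    xs = [random.gauss(0.0, 1.0) for _ in range(n)]
    est_norm.append(sum(abs(x) * dt3(x) / dnorm(x) for x in xs) / n)
    xs = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]  # Cauchy draws
    est_cauchy.append(sum(abs(x) * dt3(x) / dcauchy(x) for x in xs) / n)
```

As in the R runs above, the variance of the Cauchy-proposal estimator should come out far smaller than that of the Normal-proposal estimator.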
|
In importance sampling, why should the importance density have heavier tails?
|
Heuristically, it's because, for many situations of interest, what happens in the tails of the distribution is important, maybe more important than what happens in the middle, so undersampling the tai
|
In importance sampling, why should the importance density have heavier tails?
Heuristically, it's because, for many situations of interest, what happens in the tails of the distribution is important, maybe more important than what happens in the middle, so undersampling the tails results in relatively inaccurate estimates of the target quantity.
More formally, consider a known function $h(x)$, along with original distribution $f$ and importance sampling distribution $g$. Assume we are attempting to estimate:
$$\mu = \mathbb{E}h = \int h(x)f(x)\text{d}x$$
but we are forced by circumstance to resort to importance sampling. Our importance sampling estimate $\hat{\mu}_h$ is:
$$\hat{\mu}_h = \frac{1}{n}\sum_{i=1}^nh(x_i)f(x_i)/g(x_i)$$
where the $x_i \sim g$. The variance of our estimate is:
$$\sigma^2(\hat{\mu}) = {1 \over n}\left[\int {[h(x)f(x)]^2 \over g(x)} \text{d}x - \mu^2\right] = {1 \over n}\left[\int \left({f(x) \over g(x)}\right) h^2(x)f(x)\text{d}x - \mu^2 \right]$$
For comparison, if we had a sample of $h(x)$ with $x$ drawn from $f$, the variance of the sample mean of the $h$ is $\sigma^2(\bar{h}) = {1 \over n}\left[\int h^2(x)f(x)\text{d}x - \mu^2 \right]$.
Now, consider $f,g$ such that $f/g$ is unbounded. Typically this would happen in the tails of the two distributions and would come about because your sampling density $g$ has thinner tails than $f$, although you could easily construct examples where it happened in the center. Depending upon $h^2f$, this could result in a very large or even infinite $\sigma^2(\hat{\mu})$. (If $h = 0$ over the regions where $f/g$ is large, of course, you won't have an issue - but that is a very problem-specific, and I expect rare, situation.) On the other hand, if $f/g < M$ for some $M > 1$, it is clear that $\sigma^2(\hat{\mu}) \leq M \sigma^2(\bar{h})$. We'll have not only prevented a possible catastrophe in estimation, we'll have bounded how poorly we do relative to using the sample mean of the $h$ as an estimate.
In fact, importance sampling can be a variance-reduction technique, even relative to the sample mean. By "over-sampling" those regions which contribute disproportionately heavily to $h^2f$, we can increase the accuracy of our final estimate. A trivial example of this is when $h = 0$ outside some region; an importance sampling distribution that also equals zero outside that region will prevent us from wasting samples on $x_i$ from a region that contributes $0$ to $\mathbb{E}h$. A more realistic example is estimating the mean absolute deviation of a $t(3)$ variate (i.e., $h(x) = |x|$), which we know is centered at $0$; we'll compare the sample mean, an importance sampling estimate based upon the Normal distribution, and an importance sampling estimate based upon the Cauchy distribution. Our sample size is 100, and we repeat the experiment $N = 10,000$ times to evaluate the performance of the three estimators.
N <- 10000
results <- data.frame(list(Normal=rep(0,N), Cauchy=rep(0,N), t3=rep(0,N)))
for (i in 1:N) {
x_norm <- rnorm(100)
x_cauchy <- rt(100, df=1)
results[i,1] <- mean(abs(x_norm)*dt(x_norm, df=3)/dnorm(x_norm))
results[i,2] <- mean(abs(x_cauchy)*dt(x_cauchy, df=3)/dt(x_cauchy, df=1))
results[i,3] <- mean(abs(rt(100, df=3)))
}
apply(results,2,var)
Repeating this five times results in the following estimated variances of the three estimates of MAD:
Normal Cauchy t3
3.392863921 0.005228449 0.016933091
4.987301438 0.005108166 0.018078036
21.527506149 0.005078266 0.018151188
1.314209463 0.005108059 0.017396005
2.829562814 0.005163212 0.017341226
Clearly the Cauchy-based estimator is the winner, and that "21.52..." result for the Normal-based estimator should make us suspect that the true variance might not be finite.
The moral of the story is: use proposal distributions with heavier tails than the original, unless you have a good reason not to.
|
In importance sampling, why should the importance density have heavier tails?
Heuristically, it's because, for many situations of interest, what happens in the tails of the distribution is important, maybe more important than what happens in the middle, so undersampling the tai
|
40,784
|
Define Priors for Dirichlet Distribution parameters in JAGS
|
I don't know for sure what the trick is, but this is my guess. Using JAGS syntax to specify $\xi \sim \mathcal D(\alpha)$, you would normally do something like this:
xi ~ ddirch(alpha[])
JAGS would then not allow you to assign a prior to $\alpha = (\alpha_1, \ldots, \alpha_J)$. Instead, let $\xi^\star_j \sim \mbox{Gamma}(\alpha_j, 1)$. Then it can be shown that
$$
\xi \equiv \left(\frac{\xi^\star_1}{\sum_j \xi^\star_j}, \ldots,
\frac{\xi^\star_J}{\sum_j \xi^\star_j}\right) \sim \mathcal D(\alpha_1, \ldots, \alpha_J).
$$
Hence you can do the following:
for(j in 1:J) {
xi_raw[j] ~ dgamma(alpha[j], 1)
}
for(j in 1:J) {
xi[j] <- xi_raw[j] / sum(xi_raw[])
}
## Some prior for alpha follows...
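Not JAGS, but you can sanity-check the gamma-normalization trick itself with a few lines of standard-library Python (my own sketch; the alpha values are made up for illustration):

```python
import random

def rdirichlet(alpha, rng=random):
    # one Dirichlet(alpha) draw via normalized Gamma(alpha_j, 1) variates
    raw = [rng.gammavariate(a, 1.0) for a in alpha]
    total = sum(raw)
    return [x / total for x in raw]

random.seed(0)
alpha = [2.0, 3.0, 5.0]
draws = [rdirichlet(alpha) for _ in range(20000)]
# each draw lies on the simplex, and E[xi_j] = alpha_j / sum(alpha)
means = [sum(d[j] for d in draws) / len(draws) for j in range(len(alpha))]
```

With these alphas the component means should land near (0.2, 0.3, 0.5), matching the Dirichlet expectation.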
|
Define Priors for Dirichlet Distribution parameters in JAGS
|
I don't know for sure what the trick is, but this is my guess. Using JAGS syntax to specify $\xi \sim \mathcal D(\alpha)$, you would normally do something like this:
xi ~ dirichlet(alpha[])
JAGS woul
|
Define Priors for Dirichlet Distribution parameters in JAGS
I don't know for sure what the trick is, but this is my guess. Using JAGS syntax to specify $\xi \sim \mathcal D(\alpha)$, you would normally do something like this:
xi ~ ddirch(alpha[])
JAGS would then not allow you to assign a prior to $\alpha = (\alpha_1, \ldots, \alpha_J)$. Instead, let $\xi^\star_j \sim \mbox{Gamma}(\alpha_j, 1)$. Then it can be shown that
$$
\xi \equiv \left(\frac{\xi^\star_1}{\sum_j \xi^\star_j}, \ldots,
\frac{\xi^\star_J}{\sum_j \xi^\star_j}\right) \sim \mathcal D(\alpha_1, \ldots, \alpha_J).
$$
Hence you can do the following:
for(j in 1:J) {
xi_raw[j] ~ dgamma(alpha[j], 1)
}
for(j in 1:J) {
xi[j] <- xi_raw[j] / sum(xi_raw[])
}
## Some prior for alpha follows...
|
Define Priors for Dirichlet Distribution parameters in JAGS
I don't know for sure what the trick is, but this is my guess. Using JAGS syntax to specify $\xi \sim \mathcal D(\alpha)$, you would normally do something like this:
xi ~ dirichlet(alpha[])
JAGS woul
|
40,785
|
half-cauchy prior for scale parameter
|
An alternative to the Half-Cauchy that does have a well-defined variance is a Half-Student-$t$ with $\nu>2$ degrees of freedom, e.g. $\nu=3$.
$$\pi(x)= \frac{12 \sqrt{3}}{\pi \left(x^2+3\right)^2},\,\,\, x>0. $$
This prior has semi-heavy tails and should produce fairly similar results to the Half-Cauchy prior. You can visualise it in R with the following code curve(2*dt(x,df=3),0,10). It can also be interpreted as "I have prior information, but not much", since you are using something that resembles a "vague prior" but you are adding a bit of information by saying that the tails shouldn't be that heavy. The mass it puts on $(100,\infty)$ is $2\,[1-F_{t_3}(100)] \approx 2.2\times 10^{-6}$.
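You can double-check that tail mass without R: the $t_3$ CDF has a closed form, and the half-$t$ doubles the density on the positive axis, so $P(X>c)=2[1-F_{t_3}(c)]$. A quick standard-library Python check (my own sketch, not from the original answer):

```python
import math

def t3_cdf(x):
    # closed-form CDF of the Student-t distribution with 3 df
    z = x / math.sqrt(3.0)
    return 0.5 + (z / (1.0 + z * z) + math.atan(z)) / math.pi

def half_t3_tail(c):
    # P(X > c) for the half-t(3) prior (density doubled on x > 0)
    return 2.0 * (1.0 - t3_cdf(c))

print(half_t3_tail(100.0))  # roughly 2.2e-06
```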
|
half-cauchy prior for scale parameter
|
An alternative to using a Half-Cauchy distribution with a well-defined variance is a Half-Student-t with $\nu>2$ degrees of freedom, e.g. $\nu=3$.
$$\pi(\nu)= \frac{12 \sqrt{3}}{\pi \left(x^2+3\right
|
half-cauchy prior for scale parameter
An alternative to using a Half-Cauchy distribution with a well-defined variance is a Half-Student-t with $\nu>2$ degrees of freedom, e.g. $\nu=3$.
$$\pi(x)= \frac{12 \sqrt{3}}{\pi \left(x^2+3\right)^2},\,\,\, x>0. $$
This prior has semi-heavy tails and should produce fairly similar results to the Half-Cauchy prior. You can visualise it in R with the following code curve(2*dt(x,df=3),0,10). It can also be interpreted as "I have prior information, but not much", since you are using something that resembles a "vague prior" but you are adding a bit of information by saying that the tails shouldn't be that heavy. The mass it puts on $(100,\infty)$ is $2\,[1-F_{t_3}(100)] \approx 2.2\times 10^{-6}$.
|
half-cauchy prior for scale parameter
An alternative to using a Half-Cauchy distribution with a well-defined variance is a Half-Student-t with $\nu>2$ degrees of freedom, e.g. $\nu=3$.
$$\pi(\nu)= \frac{12 \sqrt{3}}{\pi \left(x^2+3\right
|
40,786
|
Generating survival times for a piecewise constant hazard model with two change points
|
There are two basic approaches to generating data with piecewise constant hazard: inversion of the cumulative hazard and the composition method.
Inversion of the cumulative hazard - essentially the inverse CDF method. Since $F(t) = 1-\exp(-H(t))$, if $U \sim \text{Unif}(0,1)$ then $F(X) = U$ is equivalent to $1-\exp(-H(X)) = U$, so $X=H^{-1}(-\log(1-U))$. You can also note that $-\log(1-U) \sim \text{Exp}(1)$, so you can apply the inverse cumulative hazard to an exponential random variable.
The cumulative hazard is piecewise linear for your case, and should be easy to invert.
Edit (more detail): with two change-points, the hazard is:
$$h(t) = \left\{ \begin{matrix} f_1 , & 0\leq t\leq t_1\\
f_2, & t_1 < t \leq t_2\\
f_3, & t > t_2 \end{matrix}\right.$$
The cumulative hazard is:
$$H(t) = \left\{ \begin{matrix} f_1 t , & 0\leq t\leq t_1\\
f_1 t_1 + f_2(t-t_1), & t_1 < t \leq t_2\\
f_1t_1 + f_2(t_2-t_1) + f_3(t-t_2), & t > t_2 \end{matrix}\right.$$
The inverse of the cumulative hazard is:
$$H^{-1}(x) = \left\{ \begin{matrix} x/f_1 , & 0\leq x\leq f_1t_1\\
t_1 + (x-f_1t_1)/f_2, & f_1t_1 < x \leq f_1t_1 + f_2(t_2-t_1)\\
t_2 + (x-f_1t_1-f_2(t_2-t_1))/f_3, & x > f_1t_1 + f_2(t_2-t_1)\end{matrix}\right.$$
Now generate an exponentially distributed random variable, and plug it into $H^{-1}$.
End edit
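The inversion method is only a few lines of code. Here is a rough standard-library Python sketch (my own, not tied to any package; parameter names follow the notation above):

```python
import random

def cum_hazard(t, f1, f2, f3, t1, t2):
    # piecewise-linear cumulative hazard H(t) from the formula above
    if t <= t1:
        return f1 * t
    if t <= t2:
        return f1 * t1 + f2 * (t - t1)
    return f1 * t1 + f2 * (t2 - t1) + f3 * (t - t2)

def inv_cum_hazard(x, f1, f2, f3, t1, t2):
    # invert H piece by piece
    b1 = f1 * t1                 # H(t1)
    b2 = b1 + f2 * (t2 - t1)     # H(t2)
    if x <= b1:
        return x / f1
    if x <= b2:
        return t1 + (x - b1) / f2
    return t2 + (x - b2) / f3

def sample_survival_time(f1, f2, f3, t1, t2, rng=random):
    # H^{-1} applied to an Exp(1) draw gives a time with hazard h(t)
    return inv_cum_hazard(rng.expovariate(1.0), f1, f2, f3, t1, t2)
```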
The Composition method uses the fact that if $X_1$ has hazard $h_1$, and $X_2$ has hazard $h_2$, then $X=\min(X_1,X_2)$ has hazard $h=h_1+h_2$. You can represent your piecewise constant hazard as a sum of hazards that are constant on an interval and 0 outside. Generate a value $X_i$ for each interval (it could be $\infty$, since the resulting distributions are not necessarily proper), and take their minimum.
Edit (more detail): with the above notation, the composition hazards are
$$h_1(t) = \left\{ \begin{matrix} f_1 , & 0\leq t\leq t_1\\
0, & t > t_1 \end{matrix}\right.$$
$$h_2(t) = \left\{ \begin{matrix} f_2 , & t_1 < t\leq t_2\\
0, & \text{otherwise} \end{matrix}\right.$$
$$h_3(t) = \left\{ \begin{matrix} f_3 , & t_2< t\\
0, & \text{otherwise} \end{matrix}\right.$$
You can easily calculate the CDF or the cumulative hazard for each of these hazards.
One resource with R-Code
|
Generating survival times for a piecewise constant hazard model with two change points
|
There are two basic approaches to generating data with piecewise constant hazard: inversion of the cumulative hazard and the composition method.
Inversion of the cumulative hazard - essentially the i
|
Generating survival times for a piecewise constant hazard model with two change points
There are two basic approaches to generating data with piecewise constant hazard: inversion of the cumulative hazard and the composition method.
Inversion of the cumulative hazard - essentially the inverse CDF method. Since $F(t) = 1-\exp(-H(t))$, if $U \sim \text{Unif}(0,1)$ then $F(X) = U$ is equivalent to $1-\exp(-H(X)) = U$, so $X=H^{-1}(-\log(1-U))$. You can also note that $-\log(1-U) \sim \text{Exp}(1)$, so you can apply the inverse cumulative hazard to an exponential random variable.
The cumulative hazard is piecewise linear for your case, and should be easy to invert.
Edit (more detail): with two change-points, the hazard is:
$$h(t) = \left\{ \begin{matrix} f_1 , & 0\leq t\leq t_1\\
f_2, & t_1 < t \leq t_2\\
f_3, & t > t_2 \end{matrix}\right.$$
The cumulative hazard is:
$$H(t) = \left\{ \begin{matrix} f_1 t , & 0\leq t\leq t_1\\
f_1 t_1 + f_2(t-t_1), & t_1 < t \leq t_2\\
f_1t_1 + f_2(t_2-t_1) + f_3(t-t_2), & t > t_2 \end{matrix}\right.$$
The inverse of the cumulative hazard is:
$$H^{-1}(x) = \left\{ \begin{matrix} x/f_1 , & 0\leq x\leq f_1t_1\\
t_1 + (x-f_1t_1)/f_2, & f_1t_1 < x \leq f_1t_1 + f_2(t_2-t_1)\\
t_2 + (x-f_1t_1-f_2(t_2-t_1))/f_3, & x > f_1t_1 + f_2(t_2-t_1)\end{matrix}\right.$$
Now generate an exponentially distributed random variable, and plug it into $H^{-1}$.
End edit
The Composition method uses the fact that if $X_1$ has hazard $h_1$, and $X_2$ has hazard $h_2$, then $X=\min(X_1,X_2)$ has hazard $h=h_1+h_2$. You can represent your piecewise constant hazard as a sum of hazards that are constant on an interval and 0 outside. Generate a value $X_i$ for each interval (it could be $\infty$, since the resulting distributions are not necessarily proper), and take their minimum.
Edit (more detail): with the above notation, the composition hazards are
$$h_1(t) = \left\{ \begin{matrix} f_1 , & 0\leq t\leq t_1\\
0, & t > t_1 \end{matrix}\right.$$
$$h_2(t) = \left\{ \begin{matrix} f_2 , & t_1 < t\leq t_2\\
0, & \text{otherwise} \end{matrix}\right.$$
$$h_3(t) = \left\{ \begin{matrix} f_3 , & t_2< t\\
0, & \text{otherwise} \end{matrix}\right.$$
You can easily calculate the CDF or the cumulative hazard for each of these hazards.
One resource with R-Code
|
Generating survival times for a piecewise constant hazard model with two change points
There are two basic approaches to generating data with piecewise constant hazard: inversion of the cumulative hazard and the composition method.
Inversion of the cumulative hazard - essentially the i
|
40,787
|
Analysis of temporal patterns
|
A runs test seems appropriate, and the cited literature at the end develops the test statistic for multiple categories. Unfortunately the paper is paywalled but here is a quick run-down of the test statistic (screen shot of relevant page here).
For each individual group, we can count;
$n_s = \text{Number of successes}$
$r_s = \text{Number of success runs}$
$s_{s}^{2} = \text{Sample variance of success run lengths}$
$c_s = (r^2-1)(r+2)(r+3)/[2r(n-r-1)(n+1)]$
$v_s = cn(n - r)/[r(r + 1)]$
Then you calculate these for each separate group; the test statistic is the sum of the $c_s \cdot s_{s}^{2}$ terms and is distributed as $\chi^{2}$ with $\sum{v_s}$ degrees of freedom.
So, let's say we have a table of run lengths for three different groups as follows;
Data: 221331333121112112212112122
Length Group1 Group2 Group3
-----------------------------
1 5 4 0
2 2 3 1
3 1 0 1
-----------------------------
n_s 12 10 5
r_s 8 7 2
s_s 0.6 0.3 0.5
c_s 11.1 14.0 1.3
v_s 7.4 7.5 3.1
-----------------------------
x^2 = (0.6*11.1) + (0.3*14.0) + (0.5*1.3) = 11.5
DF = 7.4 + 7.5 + 3.1 = 18
The area to the right of the test statistic is .9, so in this circumstance we would fail to reject the null hypothesis that the run lengths are randomly distributed. It is fairly close to the other tail though, so there is borderline evidence that the data are more dispersed than you would expect by chance (this is one of those circumstances where it makes sense to evaluate the left tail of the Chi-Square distribution).
O'Brien, Peter C. & Peter J. Dyck. 1985. A runs test based on run lengths. Biometrics 41(1):237-244.
I've posted a code snippet on estimating this in SPSS at this dropbox link. It includes the made up example here, as well as a code example replicating the tables and statistics in the O'Brien & Dyck paper (on a made up set of data that looks like theirs).
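For readers without SPSS, here is a rough Python translation of the group-level computation (my own sketch of the formulas quoted above, with the worked example's $(n_s, r_s, s_s^2)$ triples hard-coded); the exact sums come out near 11.5 and 18:

```python
def group_stats(n, r, s2):
    # per-group c_s and v_s from the O'Brien & Dyck formulas quoted above
    c = (r * r - 1) * (r + 2) * (r + 3) / (2 * r * (n - r - 1) * (n + 1))
    v = c * n * (n - r) / (r * (r + 1))
    return c, v

# (n_s, r_s, s_s^2) for the three groups in the worked example
groups = [(12, 8, 0.6), (10, 7, 0.3), (5, 2, 0.5)]
chi2 = sum(group_stats(n, r, s2)[0] * s2 for n, r, s2 in groups)
df = sum(group_stats(n, r, s2)[1] for n, r, s2 in groups)
```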
|
Analysis of temporal patterns
|
A runs test seems appropriate, and the cited literature at the end develops the test statistic for multiple categories. Unfortunately the paper is paywalled but here is a quick run-down of the test st
|
Analysis of temporal patterns
A runs test seems appropriate, and the cited literature at the end develops the test statistic for multiple categories. Unfortunately the paper is paywalled but here is a quick run-down of the test statistic (screen shot of relevant page here).
For each individual group, we can count;
$n_s = \text{Number of successes}$
$r_s = \text{Number of success runs}$
$s_{s}^{2} = \text{Sample variance of success run lengths}$
$c_s = (r^2-1)(r+2)(r+3)/[2r(n-r-1)(n+1)]$
$v_s = cn(n - r)/[r(r + 1)]$
Then you calculate these for each separate group; the test statistic is the sum of the $c_s \cdot s_{s}^{2}$ terms and is distributed as $\chi^{2}$ with $\sum{v_s}$ degrees of freedom.
So, let's say we have a table of run lengths for three different groups as follows;
Data: 221331333121112112212112122
Length Group1 Group2 Group3
-----------------------------
1 5 4 0
2 2 3 1
3 1 0 1
-----------------------------
n_s 12 10 5
r_s 8 7 2
s_s 0.6 0.3 0.5
c_s 11.1 14.0 1.3
v_s 7.4 7.5 3.1
-----------------------------
x^2 = (0.6*11.1) + (0.3*14.0) + (0.5*1.3) = 11.5
DF = 7.4 + 7.5 + 3.1 = 18
The area to the right of the test statistic is .9, so in this circumstance we would fail to reject the null hypothesis that the run lengths are randomly distributed. It is fairly close to the other tail though, so there is borderline evidence that the data are more dispersed than you would expect by chance (this is one of those circumstances where it makes sense to evaluate the left tail of the Chi-Square distribution).
O'Brien, Peter C. & Peter J. Dyck. 1985. A runs test based on run lengths. Biometrics 41(1):237-244.
I've posted a code snippet on estimating this in SPSS at this dropbox link. It includes the made up example here, as well as a code example replicating the tables and statistics in the O'Brien & Dyck paper (on a made up set of data that looks like theirs).
|
Analysis of temporal patterns
A runs test seems appropriate, and the cited literature at the end develops the test statistic for multiple categories. Unfortunately the paper is paywalled but here is a quick run-down of the test st
|
40,788
|
Analysis of temporal patterns
|
If I understand, you are looking for k-mers which are patterns of size k found in sequences.
There is an R package for analyzing sequence data called TraMineR which includes functions for plotting the sequences, finding the variance of state durations, compute within sequence entropy, extract frequent event subsequences, etc.
You could also compare two sequences to see how they align in time by using Dynamic Time Warping
|
Analysis of temporal patterns
|
If I understand, you are looking for k-mers which are patterns of size k found in sequences.
There is an R package for analyzing sequence data called TraMineR which includes functions for plotting the
|
Analysis of temporal patterns
If I understand, you are looking for k-mers which are patterns of size k found in sequences.
There is an R package for analyzing sequence data called TraMineR which includes functions for plotting the sequences, finding the variance of state durations, compute within sequence entropy, extract frequent event subsequences, etc.
You could also compare two sequences to see how they align in time by using Dynamic Time Warping
|
Analysis of temporal patterns
If I understand, you are looking for k-mers which are patterns of size k found in sequences.
There is an R package for analyzing sequence data called TraMineR which includes functions for plotting the
|
40,789
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
|
On scikit-learn==0.14.1.
$\theta_0$ can be a vector. The following code works for me.
import numpy as np
from sklearn.gaussian_process import GaussianProcess
from sklearn.datasets import make_regression
X, y = make_regression()
bad_theta = np.abs(np.random.normal(0,1,100))
model = GaussianProcess(theta0=bad_theta)
model.fit(X,y)
You can pass any kernel you want as the parameter corr. The following is the radial basis function that sklearn uses for Gaussian processes.
def squared_exponential(theta, d):
"""
Squared exponential correlation model (Radial Basis Function).
(Infinitely differentiable stochastic process, very smooth)::
n
theta, dx --> r(theta, dx) = exp( sum - theta_i * (dx_i)^2 )
i = 1
Parameters
----------
theta : array_like
An array with shape 1 (isotropic) or n (anisotropic) giving the
autocorrelation parameter(s).
dx : array_like
An array with shape (n_eval, n_features) giving the componentwise
distances between locations x and x' at which the correlation model
should be evaluated.
Returns
-------
r : array_like
An array with shape (n_eval, ) containing the values of the
autocorrelation model.
"""
theta = np.asarray(theta, dtype=np.float)
d = np.asarray(d, dtype=np.float)
if d.ndim > 1:
n_features = d.shape[1]
else:
n_features = 1
if theta.size == 1:
return np.exp(-theta[0] * np.sum(d ** 2, axis=1))
elif theta.size != n_features:
raise ValueError("Length of theta must be 1 or %s" % n_features)
else:
return np.exp(-np.sum(theta.reshape(1, n_features) * d ** 2, axis=1))
It looks like you're doing something pretty interesting, btw.
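Stripped of the array plumbing, the correlation itself is tiny. Here is a rough pure-Python sketch (my own, not sklearn code) of the same anisotropic kernel, one $\theta_i$ per feature:

```python
import math

def squared_exponential(theta, dx):
    # r(theta, dx) = exp(-sum_i theta_i * dx_i^2);
    # a single theta broadcasts over all features (isotropic case),
    # otherwise there is one theta per feature (anisotropic case)
    if len(theta) == 1:
        theta = list(theta) * len(dx)
    if len(theta) != len(dx):
        raise ValueError("theta must have length 1 or len(dx)")
    return math.exp(-sum(t * d * d for t, d in zip(theta, dx)))
```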
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
|
On scikit-learn==0.14.1.
$\theta_0$ can be a vector. The following code works for me.
import numpy as np
from sklearn.gaussian_process import GaussianProcess
from sklearn.datasets import make_regressi
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
On scikit-learn==0.14.1.
$\theta_0$ can be a vector. The following code works for me.
import numpy as np
from sklearn.gaussian_process import GaussianProcess
from sklearn.datasets import make_regression
X, y = make_regression()
bad_theta = np.abs(np.random.normal(0,1,100))
model = GaussianProcess(theta0=bad_theta)
model.fit(X,y)
You can pass any kernel you want as the parameter corr. The following is the radial basis function that sklearn uses for Gaussian processes.
def squared_exponential(theta, d):
"""
Squared exponential correlation model (Radial Basis Function).
(Infinitely differentiable stochastic process, very smooth)::
n
theta, dx --> r(theta, dx) = exp( sum - theta_i * (dx_i)^2 )
i = 1
Parameters
----------
theta : array_like
An array with shape 1 (isotropic) or n (anisotropic) giving the
autocorrelation parameter(s).
dx : array_like
An array with shape (n_eval, n_features) giving the componentwise
distances between locations x and x' at which the correlation model
should be evaluated.
Returns
-------
r : array_like
An array with shape (n_eval, ) containing the values of the
autocorrelation model.
"""
theta = np.asarray(theta, dtype=np.float)
d = np.asarray(d, dtype=np.float)
if d.ndim > 1:
n_features = d.shape[1]
else:
n_features = 1
if theta.size == 1:
return np.exp(-theta[0] * np.sum(d ** 2, axis=1))
elif theta.size != n_features:
raise ValueError("Length of theta must be 1 or %s" % n_features)
else:
return np.exp(-np.sum(theta.reshape(1, n_features) * d ** 2, axis=1))
It looks like you're doing something pretty interesting, btw.
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
On scikit-learn==0.14.1.
$\theta_0$ can be a vector. The following code works for me.
import numpy as np
from sklearn.gaussian_process import GaussianProcess
from sklearn.datasets import make_regressi
|
40,790
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
|
For future users, in sklearn 0.19.1 (and probably earlier) you can use combined kernels.
I think you can create the kernel you want by
kernel = ConstantKernel(1.0) * RBF(np.ones(nrOfFeatures))
The ConstantKernel would be your theta0. If you provide the RBF kernel with a single float instead of a vector, that will be your theta1.
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
|
For future users, in sklearn 0.19.1 (and probably earlier) you can use combined kernels.
I think you can create the kernel you want by
kernel = ConstantKernel(1.0) * RBF(np.ones(nrOfFeatures))
The C
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
For future users, in sklearn 0.19.1 (and probably earlier) you can use combined kernels.
I think you can create the kernel you want by
kernel = ConstantKernel(1.0) * RBF(np.ones(nrOfFeatures))
The ConstantKernel would be your theta0. If you provide the RBF kernel with a single float instead of a vector, that will be your theta1.
|
Scikit-learn's Gaussian Processes: How to include multiple hyperparameters in kernel/cov function?
For future users, in sklearn 0.19.1 (and probably earlier) you can use combined kernels.
I think you can create the kernel you want by
kernel = ConstantKernel(1.0) * RBF(np.ones(nrOfFeatures))
The C
|
40,791
|
Singular values of the data matrix and eigenvalues of the covariance matrix
|
Aside from stating the obvious: eig gives the results in ascending order while svd gives them in descending order; the svd singular values (and singular vectors, obviously) are dissimilar to the eig eigenvalues and eigenvectors because your matrix ingredients is not symmetric to start with. To paraphrase wikipedia a bit: "When the $X$ is a normal and/or a positive semi-definite matrix, the decomposition $\ {X} = {U} {D} {U}^*$ is also a singular value decomposition", not otherwise. ($U$ being the eigenvectors of $XX^\mathbf{T}$)
So, for example, if you did something like:
rng(0,'twister') %just set the seed.
Q = random('normal', 0,1,5);
X = Q' * Q; %so X is PSD
[U S V]= svd(X);
[A,B]= eig(X);
max( abs(diag(S)- fliplr(diag(B)')' ))
% ans = 7.1054e-15 % AKA equal to numerical precision.
you would find that svd and eig do give you back the same results. Before, exactly because the matrix ingredients was not even PSD (or even square, for that matter), you didn't get the same results. :)
Just to state it another way: $X= U\Sigma V^*$ practically translates into $X = \sum_1^r u_i s_i v_i^T$ ($r$ being the rank of $X$), which itself means that you are (pretty awesomely) allowed to write $X v_i = \sigma_i u_i$. Clearly, to get back to the eigen-decomposition $X u_i = \lambda_i u_i$ you first need all $u_i = v_i$, something that non-normal matrices do not guarantee. As a final note: the small numerical differences are due to eig and svd having different algorithms working in the background; a variant of the QR algorithm for svd and a (usually) generalized Schur decomposition for eig.
Specific to your problem what you want is something akin to:
load hald;
[u s v]=svd(ingredients);
sigma=(ingredients' * ingredients);
lambda =eig(sigma);
max( abs(diag(s)- fliplr(sqrt(lambda)')' ))
% ans = 5.6843e-14
As you see, this has nothing to do with centring your data to have mean $0$ at this point; the matrix ingredients is not centered.
Now if you use the covariance matrix (and not a simple inner product matrix as I did) you will have to centre your data. Let's say that ingredients2 is your zero-meaned sample.
ingredients2 = ingredients - repmat(mean(ingredients), 13,1);
Then indeed you need this normalization by $1/(n-1)$
[u s v] =svd(ingredients2 );
sigma = cov(ingredients); % You don't care about centring here
lambda =eig(sigma);
max( abs( diag(s)- fliplr(sqrt(lambda *12)')')) % n = 13 so multiply by n-1
% ans = 4.7962e-14
So yeah, it is the centring now. I was a bit misleading originally because I worked with the notion of PSD matrices rather than covariance matrices. The answer before the editing was fine: it addressed exactly why your eigen-decomposition did not fit your singular value decomposition. With the editing I show why your singular value decomposition did not fit the eigen-decomposition. Clearly one can view the same problem in two different ways. :D
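You can see the $s_i^2 = (n-1)\lambda_i$ relation for centered data without MATLAB at all. Here is a hand-rolled Python check on a tiny $4\times 2$ centered matrix (my own sketch; $2\times 2$ eigenvalues via the quadratic formula):

```python
import math

def eig2(m):
    # eigenvalues of a symmetric 2x2 matrix [[a, b], [b, d]] via the quadratic formula
    a, b, d = m[0][0], m[0][1], m[1][1]
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr / 4.0 - det)
    return tr / 2.0 + disc, tr / 2.0 - disc

# a tiny centered data matrix: n = 4 rows, 2 columns, both column means are 0
X = [[-3.0, -1.0], [-1.0, 1.0], [1.0, -1.0], [3.0, 1.0]]
n = len(X)
gram = [[sum(row[i] * row[j] for row in X) for j in range(2)] for i in range(2)]
sing_sq = eig2(gram)                                  # squared singular values of X
cov = [[gram[i][j] / (n - 1) for j in range(2)] for i in range(2)]
lam = eig2(cov)                                       # eigenvalues of the covariance
# sing_sq[k] == (n - 1) * lam[k]: the singular values are sqrt((n - 1) * lambda)
```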
|
Singular values of the data matrix and eigenvalues of the covariance matrix
|
Aside stating the obvious: eig gives the results in ascending order while svd in descending one; the svd eigenvalues (and eigenvectors obviously) are dissimilar to those of eig decomposition because y
|
Singular values of the data matrix and eigenvalues of the covariance matrix
Aside from stating the obvious: eig gives the results in ascending order while svd gives them in descending order; the svd singular values (and singular vectors, obviously) are dissimilar to the eig eigenvalues and eigenvectors because your matrix ingredients is not symmetric to start with. To paraphrase wikipedia a bit: "When the $X$ is a normal and/or a positive semi-definite matrix, the decomposition $\ {X} = {U} {D} {U}^*$ is also a singular value decomposition", not otherwise. ($U$ being the eigenvectors of $XX^\mathbf{T}$)
So, for example, if you did something like:
rng(0,'twister') %just set the seed.
Q = random('normal', 0,1,5);
X = Q' * Q; %so X is PSD
[U S V]= svd(X);
[A,B]= eig(X);
max( abs(diag(S)- fliplr(diag(B)')' ))
% ans = 7.1054e-15 % AKA equal to numerical precision.
you would find that svd and eig do give you back the same results. Before, exactly because the matrix ingredients was not even PSD (or even square, for that matter), you didn't get the same results. :)
Just to state it another way: $X= U\Sigma V^*$ practically translates into $X = \sum_1^r u_i s_i v_i^T$ ($r$ being the rank of $X$), which itself means that you are (pretty awesomely) allowed to write $X v_i = \sigma_i u_i$. Clearly, to get back to the eigen-decomposition $X u_i = \lambda_i u_i$ you first need all $u_i = v_i$, something that non-normal matrices do not guarantee. As a final note: the small numerical differences are due to eig and svd having different algorithms working in the background; a variant of the QR algorithm for svd and a (usually) generalized Schur decomposition for eig.
Specific to your problem what you want is something akin to:
load hald;
[u s v]=svd(ingredients);
sigma=(ingredients' * ingredients);
lambda =eig(sigma);
max( abs(diag(s)- fliplr(sqrt(lambda)')' ))
% ans = 5.6843e-14
As you see, this has nothing to do with centring your data to have mean $0$ at this point; the matrix ingredients is not centered.
Now if you use the covariance matrix (and not a simple inner product matrix as I did) you will have to centre your data. Let's say that ingredients2 is your zero-meaned sample.
ingredients2 = ingredients - repmat(mean(ingredients), 13,1);
Then indeed you need this normalization by $1/(n-1)$
[u s v] =svd(ingredients2 );
sigma = cov(ingredients); % You don't care about centring here
lambda =eig(sigma);
max( abs( diag(s)- fliplr(sqrt(lambda *12)')')) % n = 13 so multiply by n-1
% ans = 4.7962e-14
So yeah, it is the centring now. I was a bit misleading originally because I worked with the notion of PSD matrices rather than covariance matrices. The answer before the editing was fine: it addressed exactly why your eigen-decomposition did not fit your singular value decomposition. With the editing I show why your singular value decomposition did not fit the eigen-decomposition. Clearly one can view the same problem in two different ways. :D
|
Singular values of the data matrix and eigenvalues of the covariance matrix
Aside stating the obvious: eig gives the results in ascending order while svd in descending one; the svd eigenvalues (and eigenvectors obviously) are dissimilar to those of eig decomposition because y
|
40,792
|
Right-censored survival fit with JAGS
|
I was asked to re-post this answer here from my comment at http://doingbayesiandataanalysis.blogspot.com/2012/01/complete-example-of-right-censoring-in.html
The specifics of this answer relate to the model in that comment, but the concepts apply to the topic here.
The core of the JAGS model for censored data is this:
isCensored[i] ~ dinterval( y[i] , censorLimitVec[i] )
y[i] ~ dnorm( mu , tau )
The key to understanding what JAGS is doing is that JAGS automatically imputes a random value for any variable that is not specified as a constant in the data. Thus, when y[i] is NA (i.e., a missing value, not a constant), then JAGS imputes a random value for it.
But what value should it generate?
The second line of the model, above, says that y[i] should be randomly generated from a normal distribution with mean mu and precision tau.
But the first line of the model, above, puts another constraint on the randomly generated value of y[i]. That line says that whatever value of y[i] is randomly generated, it must fall on the side of censorLimitVec[i] dictated by the value of isCensored[i].
To understand this part, let's unpack the dinterval() distribution. Suppose that censorLimitVec has 3 values in it, not just 1:
censorLimitVec = c(10,20,30)
Then randomly generated values from dinterval(y,c(10,20,30)) will be either 0, 1, 2, or 3 depending on whether $y<10$, $10 < y < 20$, $20<y<30$, or $30<y$. So, if $y=15$, dinterval(y,c(10,20,30)) has output of $1$ with 100% probability. The trick is this: We instead specify the output of dinterval, and impute a random value of y that could produce it. Thus, if we say
1 ~ dinterval(y,c(10,20,30))
then y is imputed as a random value between 10 and 20.
Putting the two model statements together,
1 ~ dinterval( y , censorLimit )
y ~ dnorm( mu , tau )
means that y comes from a normal density and y must fall above the censorLimit.
Hope that helps!!
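One rough way to picture that imputation (not JAGS's actual sampler — just rejection sampling under the same constraint) is:

```python
import random

def impute_censored(mu, sigma, limit, n=1000, seed=17):
    """Draw y from Normal(mu, sigma), keeping only draws above the censoring
    limit -- the constraint that '1 ~ dinterval(y, limit)' imposes."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        y = rng.gauss(mu, sigma)
        if y > limit:   # isCensored forces y onto this side of the limit
            draws.append(y)
    return draws

ys = impute_censored(mu=0.0, sigma=1.0, limit=1.5)
print(min(ys), len(ys))  # every imputed value respects the censoring constraint
```

The two model lines act together exactly like the `if y > limit` filter here: y is normal, but only values on the correct side of the limit are admissible.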
|
Right-censored survival fit with JAGS
|
I was asked to re-post this answer here from my comment at http://doingbayesiandataanalysis.blogspot.com/2012/01/complete-example-of-right-censoring-in.html
The specifics of this answer relate to the
|
Right-censored survival fit with JAGS
I was asked to re-post this answer here from my comment at http://doingbayesiandataanalysis.blogspot.com/2012/01/complete-example-of-right-censoring-in.html
The specifics of this answer relate to the model in that comment, but the concepts apply to the topic here.
The core of the JAGS model for censored data is this:
isCensored[i] ~ dinterval( y[i] , censorLimitVec[i] )
y[i] ~ dnorm( mu , tau )
The key to understanding what JAGS is doing is that JAGS automatically imputes a random value for any variable that is not specified as a constant in the data. Thus, when y[i] is NA (i.e., a missing value, not a constant), then JAGS imputes a random value for it.
But what value should it generate?
The second line of the model, above, says that y[i] should be randomly generated from a normal distribution with mean mu and precision tau.
But the first line of the model, above, puts another constraint on the randomly generated value of y[i]. That line says that whatever value of y[i] is randomly generated, it must fall on the side of censorLimitVec[i] dictated by the value of isCensored[i].
To understand this part, let's unpack the dinterval() distribution. Suppose that censorLimitVec has 3 values in it, not just 1:
censorLimitVec = c(10,20,30)
Then randomly generated values from dinterval(y,c(10,20,30)) will be either 0, 1, 2, or 3 depending on whether $y<10$, $10 < y < 20$, $20<y<30$, or $30<y$. So, if $y=15$, dinterval(y,c(10,20,30)) has output of $1$ with 100% probability. The trick is this: We instead specify the output of dinterval, and impute a random value of y that could produce it. Thus, if we say
1 ~ dinterval(y,c(10,20,30))
then y is imputed as a random value between 10 and 20.
Putting the two model statements together,
1 ~ dinterval( y , censorLimit )
y ~ dnorm( mu , tau )
means that y comes from a normal density and y must fall above the censorLimit.
Hope that helps!!
|
Right-censored survival fit with JAGS
I was asked to re-post this answer here from my comment at http://doingbayesiandataanalysis.blogspot.com/2012/01/complete-example-of-right-censoring-in.html
The specifics of this answer relate to the
|
40,793
|
Applications of Data Mining in economics [closed]
|
Here's a nice survey by Einav and Levin of what's been done and some directions for further research (Section 5). Here is an application to Netflix data and price discrimination where ML techniques do the heavy lifting in feature selection and demand estimation. Here's a paper on Yelp review fraud, where Yelp's fraud detection algorithm provided identification. This is a more gingerly use of these methods.
Here's another nice survey by Hal Varian (chief economist at Google).
|
Applications of Data Mining in economics [closed]
|
Here's a nice survey by Einav and Levin of what's been done and some directions for further research (Section 5). Here is an application to Netflix data and price discrimination where ML techniques do
|
Applications of Data Mining in economics [closed]
Here's a nice survey by Einav and Levin of what's been done and some directions for further research (Section 5). Here is an application to Netflix data and price discrimination where ML techniques do the heavy lifting in feature selection and demand estimation. Here's a paper on Yelp review fraud, where Yelp's fraud detection algorithm provided identification. This is a more gingerly use of these methods.
Here's another nice survey by Hal Varian (chief economist at Google).
|
Applications of Data Mining in economics [closed]
Here's a nice survey by Einav and Levin of what's been done and some directions for further research (Section 5). Here is an application to Netflix data and price discrimination where ML techniques do
|
40,794
|
Applications of Data Mining in economics [closed]
|
There's a recent article that uses Google Trends to predict financial crises or other economic events:
http://www.nature.com/srep/2013/130425/srep01684/full/srep01684.html
A similar approach can use Twitter (as in here), to predict other economic events or variables (e.g. marginal utilities, prices, etc).
|
Applications of Data Mining in economics [closed]
|
There's a recent article that uses Google Trends to predict financial crises or other economic events:
http://www.nature.com/srep/2013/130425/srep01684/full/srep01684.html
A similar approach can use T
|
Applications of Data Mining in economics [closed]
There's a recent article that uses Google Trends to predict financial crises or other economic events:
http://www.nature.com/srep/2013/130425/srep01684/full/srep01684.html
A similar approach can use Twitter (as in here), to predict other economic events or variables (e.g. marginal utilities, prices, etc).
|
Applications of Data Mining in economics [closed]
There's a recent article that uses Google Trends to predict financial crises or other economic events:
http://www.nature.com/srep/2013/130425/srep01684/full/srep01684.html
A similar approach can use T
|
40,795
|
Applications of Data Mining in economics [closed]
|
Hidden Markov models (HMMs) in the form of "regime switching models" have been used to determine growth/decline periods (I know, they're not technically HMMs, but they basically are). Does that count? Read up on the work of James Hamilton if you're interested: http://weber.ucsd.edu/~jhamilto/
|
Applications of Data Mining in economics [closed]
|
Hidden Markov models (HMMs) in the form of "regime switching models" have been used to determine growth/decline periods (I know, they're not technically HMMs, but they basically are). Does that count?
|
Applications of Data Mining in economics [closed]
Hidden Markov models (HMMs) in the form of "regime switching models" have been used to determine growth/decline periods (I know, they're not technically HMMs, but they basically are). Does that count? Read up on the work of James Hamilton if you're interested: http://weber.ucsd.edu/~jhamilto/
|
Applications of Data Mining in economics [closed]
Hidden Markov models (HMMs) in the form of "regime switching models" have been used to determine growth/decline periods (I know, they're not technically HMMs, but they basically are). Does that count?
|
40,796
|
Best way to test runs
|
Your stated objective is to assess whether
the 1s tend to rank lower (i.e. appear earlier)
That is not measured by runs, but by ranks. Use the Wilcoxon (aka Mann-Whitney) test.
This test is both conceptually and computationally simple, yet reasonably powerful. The data are ranked in the order of appearance using the numbers $1, 2, \ldots, n$ (where $n=35$ in this case). The ranks are summed within each group: there's a sum of $n_0=18$ ranks corresponding to the zeros and a sum of $n_1=17$ ranks corresponding to the ones. To compensate for the different numbers of zeros and ones, subtract the smallest possible sum from each (equal to $1+2+\cdots+n_i = \binom{n_i+1}{2}$ for group $i$, $i=0,1$). If the ones truly tend to come first, their adjusted rank sum will be substantially smaller than that of the zeros. This can be converted into a Z-score by assuming an asymptotic Normal distribution for the statistic or a more accurate p-value can be found through the permutation distribution. The code below illustrates both methods.
For these data, the zeros appear at ranks
9 11 17 18 20 21 22 23 24 25 26 27 28 29 30 32 33 35
while the ones appear at ranks
1 2 3 4 5 6 7 8 10 12 13 14 15 16 19 31 34
The adjusted sum of the ranks of the ones is $U=47$. The Normal approximation estimates its p-value at $0.0002339$. The small value is testimony to the power of this test in the present case. The permutation distribution, estimated with a million replications, gives a p-value of $0.000264$. It is accurate to $\pm 0.000016$ (which is one standard error). Either p-value gives you ample basis to reject the null hypothesis that the zeros and ones are randomly scattered throughout the sequence.
Here is a histogram of the permutation distribution of the $U$ statistic for these data.
The red vertical line marks the actual test statistic. It obviously is extreme.
Although it might not look it, this test was conducted as a two-tailed test (by taking the smaller of the two adjusted rank sums). It tests whether there is any difference in ranks, not just whether the ones tend to come earlier.
Below is the (reproducible) R code that made the figure and computed the p-values. This large simulation takes about ten seconds to run. (Only a few thousand replications are typically needed. A million were used to make the point that the Normal approximation works well in this case.) Reduce the first argument of replicate at line 25 in order to achieve a faster run time.
x <- c(1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0)
#
# Wilcoxon test.
#
Wilcoxon <- function(x) {
n <- length(x)
n0 <- sum(x==0)
n1 <- sum(x==1)
u0 <- sum((1:n)[x==0]) - choose(n0+1, 2)
u1 <- sum((1:n)[x==1]) - choose(n1+1, 2)
u <- min(u0, u1)
m.u <- n0 * n1 / 2
s.u <- sqrt(n0 * n1 * (n+1) / 12)
Z <- (u - m.u)/s.u
p <- pnorm(Z)
return(c(U=u, Z=Z, p.value=p))
}
stats <- Wilcoxon(x)
#
# Permutation test.
#
set.seed(17)
U <- replicate(1e6, Wilcoxon(sample(x, length(x)))["U"])
hist(U, main="Permutation Distribution")
abline(v = stats["U"], lwd=2, col="Red")
#
# Summary.
#
message("Normal approximation: ", signif(stats["p.value"], 4),
" Permutation estimate: ", signif(mean(c(1, U <= stats["U"])), 4),
" +/- ", signif(sd(c(1, U <= stats["U"])) / sqrt(1 + length(U)), 2))
|
Best way to test runs
|
Your stated objective is to assess whether
the 1s tend to rank lower (i.e. appear earlier)
That is not measured by runs, but by ranks. Use the Wilcoxon (aka Mann-Whitney) test.
This test is both c
|
Best way to test runs
Your stated objective is to assess whether
the 1s tend to rank lower (i.e. appear earlier)
That is not measured by runs, but by ranks. Use the Wilcoxon (aka Mann-Whitney) test.
This test is both conceptually and computationally simple, yet reasonably powerful. The data are ranked in the order of appearance using the numbers $1, 2, \ldots, n$ (where $n=35$ in this case). The ranks are summed within each group: there's a sum of $n_0=18$ ranks corresponding to the zeros and a sum of $n_1=17$ ranks corresponding to the ones. To compensate for the different numbers of zeros and ones, subtract the smallest possible sum from each (equal to $1+2+\cdots+n_i = \binom{n_i+1}{2}$ for group $i$, $i=0,1$). If the ones truly tend to come first, their adjusted rank sum will be substantially smaller than that of the zeros. This can be converted into a Z-score by assuming an asymptotic Normal distribution for the statistic or a more accurate p-value can be found through the permutation distribution. The code below illustrates both methods.
For these data, the zeros appear at ranks
9 11 17 18 20 21 22 23 24 25 26 27 28 29 30 32 33 35
while the ones appear at ranks
1 2 3 4 5 6 7 8 10 12 13 14 15 16 19 31 34
The adjusted sum of the ranks of the ones is $U=47$. The Normal approximation estimates its p-value at $0.0002339$. The small value is testimony to the power of this test in the present case. The permutation distribution, estimated with a million replications, gives a p-value of $0.000264$. It is accurate to $\pm 0.000016$ (which is one standard error). Either p-value gives you ample basis to reject the null hypothesis that the zeros and ones are randomly scattered throughout the sequence.
Here is a histogram of the permutation distribution of the $U$ statistic for these data.
The red vertical line marks the actual test statistic. It obviously is extreme.
Although it might not look it, this test was conducted as a two-tailed test (by taking the smaller of the two adjusted rank sums). It tests whether there is any difference in ranks, not just whether the ones tend to come earlier.
Below is the (reproducible) R code that made the figure and computed the p-values. This large simulation takes about ten seconds to run. (Only a few thousand replications are typically needed. A million were used to make the point that the Normal approximation works well in this case.) Reduce the first argument of replicate at line 25 in order to achieve a faster run time.
x <- c(1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 0, 1, 1, 1, 1, 1, 0, 0, 1, 0,
0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 1, 0)
#
# Wilcoxon test.
#
Wilcoxon <- function(x) {
n <- length(x)
n0 <- sum(x==0)
n1 <- sum(x==1)
u0 <- sum((1:n)[x==0]) - choose(n0+1, 2)
u1 <- sum((1:n)[x==1]) - choose(n1+1, 2)
u <- min(u0, u1)
m.u <- n0 * n1 / 2
s.u <- sqrt(n0 * n1 * (n+1) / 12)
Z <- (u - m.u)/s.u
p <- pnorm(Z)
return(c(U=u, Z=Z, p.value=p))
}
stats <- Wilcoxon(x)
#
# Permutation test.
#
set.seed(17)
U <- replicate(1e6, Wilcoxon(sample(x, length(x)))["U"])
hist(U, main="Permutation Distribution")
abline(v = stats["U"], lwd=2, col="Red")
#
# Summary.
#
message("Normal approximation: ", signif(stats["p.value"], 4),
" Permutation estimate: ", signif(mean(c(1, U <= stats["U"])), 4),
" +/- ", signif(sd(c(1, U <= stats["U"])) / sqrt(1 + length(U)), 2))
|
Best way to test runs
Your stated objective is to assess whether
the 1s tend to rank lower (i.e. appear earlier)
That is not measured by runs, but by ranks. Use the Wilcoxon (aka Mann-Whitney) test.
This test is both c
|
40,797
|
Best way to test runs
|
I think you have this muddled up.
(i) The Kolmogorov Smirnov test is designed to test continuous, not discrete distributions; indeed, your values consist only of 0's and 1's, yet it appears you're testing against continuous uniformity.
(ii) as written, it looks like this is completely ignoring the time order. It's not testing what you need to pick up.
You could perhaps use as your data the fraction (considered as a quantile) of the distance through the series that the 1's occur at (it's not continuous independent data though - it only occurs at discrete places, only one value per location -- so you'd still need to adjust your null distribution for that). It wouldn't be a KS test as such, but you could use a statistic like the KS as the basis for a test.
For example, if there are $n$ observations, the $i$-th observation might be said to occur at the $\frac{i-\alpha}{n+1-2\alpha}$ quantile for some $0\leq\alpha\leq 1$ (I believe many of the 9 alternatives in R's own quantile function correspond to that definition with various values of $\alpha$). You can then test whether the quantiles of the 1's are uniformly distributed, but you'd need to simulate to get the distribution of the test statistic under the null.
An easy alternative to simulation of the null distribution (presumably conditioning on the counts of 0's and 1's) would be to do a permutation test (which will involve either clever algorithms to compute the full distribution, or sampling of the permutation distribution).
However, it seems as if you're really after a test for trend. Indeed, you might do better with something as simple as a logistic regression against position, or even a monotonic GAM-type model (again, probably via logistic-regression).
Edit: Here's the previously suggested logistic regression performed in R:
x <- c(rep(1, 8), 0, 1, 0, rep(1, 5), 0, 0, 1, rep(0, 11), 1, 0, 0, 1, 0)
t <- seq_along(x) # Rank order by position (1,2,3,...)
plot(x~t) # Show the sequence of 1's and 0's
logistfit <- glm(x~t,family=binomial) # fit a straight line in the logits
summary(logistfit) # show GLM regression table output
f <- fitted(logistfit) # fit is estimated P(X=1|t)
lines(f~t,col=4) # plot that fit
Here's the output of the model (a few less interesting lines removed):
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.77412 1.01415 2.735 0.00623
t -0.15912 0.05225 -3.046 0.00232 # <=== the line we want
Null deviance: 48.492 on 34 degrees of freedom
Residual deviance: 34.087 on 33 degrees of freedom
AIC: 38.087
The p-value for the glm fit is $0.00232$. It shows that the probability of a 1 is not consistent with the 1's being randomly placed with respect to the ordering variable. Since the coefficient is negative, the probability of a 1 is overall decreasing as the position increases.
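As a cross-check outside R, the same point estimates can be reproduced with a dozen lines of Newton/IRLS — a hand-rolled stand-in for glm, not its actual code:

```python
import math

def logistic_irls(y, t, iters=25):
    """Newton (IRLS) fit of logit P(y=1) = b0 + b1*t."""
    b0 = b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for yi, ti in zip(y, t):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * ti)))
            w = p * (1.0 - p)
            g0 += yi - p          # gradient of the log-likelihood
            g1 += (yi - p) * ti
            h00 += w              # 2x2 (negated) Hessian entries
            h01 += w * ti
            h11 += w * ti * ti
        det = h00 * h11 - h01 * h01
        b0 += ( h11 * g0 - h01 * g1) / det   # Newton step: H^{-1} g
        b1 += (-h01 * g0 + h00 * g1) / det
    return b0, b1

y = [1]*8 + [0,1,0] + [1]*5 + [0,0,1] + [0]*11 + [1,0,0,1,0]
t = list(range(1, len(y) + 1))
b0, b1 = logistic_irls(y, t)
print(round(b0, 3), round(b1, 3))  # close to glm's 2.774 and -0.159
```

The negative slope b1 recovers the same conclusion: the probability of a 1 declines with position.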
Here's the plot:
|
Best way to test runs
|
I think you have this muddled up.
(i) The Kolmogorov Smirnov test is designed to test continuous, not discrete distributions; indeed, your values consist only of 0's and 1's, yet it appears you're tes
|
Best way to test runs
I think you have this muddled up.
(i) The Kolmogorov Smirnov test is designed to test continuous, not discrete distributions; indeed, your values consist only of 0's and 1's, yet it appears you're testing against continuous uniformity.
(ii) as written, it looks like this is completely ignoring the time order. It's not testing what you need to pick up.
You could perhaps use as your data the fraction (considered as a quantile) of the distance through the series that the 1's occur at (it's not continuous independent data though - it only occurs at discrete places, only one value per location -- so you'd still need to adjust your null distribution for that). It wouldn't be a KS test as such, but you could use a statistic like the KS as the basis for a test.
For example, if there are $n$ observations, the $i$-th observation might be said to occur at the $\frac{i-\alpha}{n+1-2\alpha}$ quantile for some $0\leq\alpha\leq 1$ (I believe many of the 9 alternatives in R's own quantile function correspond to that definition with various values of $\alpha$). You can then test whether the quantiles of the 1's are uniformly distributed, but you'd need to simulate to get the distribution of the test statistic under the null.
An easy alternative to simulation of the null distribution (presumably conditioning on the counts of 0's and 1's) would be to do a permutation test (which will involve either clever algorithms to compute the full distribution, or sampling of the permutation distribution).
However, it seems as if you're really after a test for trend. Indeed, you might do better with something as simple as a logistic regression against position, or even a monotonic GAM-type model (again, probably via logistic-regression).
Edit: Here's the previously suggested logistic regression performed in R:
x <- c(rep(1, 8), 0, 1, 0, rep(1, 5), 0, 0, 1, rep(0, 11), 1, 0, 0, 1, 0)
t <- seq_along(x) # Rank order by position (1,2,3,...)
plot(x~t) # Show the sequence of 1's and 0's
logistfit <- glm(x~t,family=binomial) # fit a straight line in the logits
summary(logistfit) # show GLM regression table output
f <- fitted(logistfit) # fit is estimated P(X=1|t)
lines(f~t,col=4) # plot that fit
Here's the output of the model (a few less interesting lines removed):
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 2.77412 1.01415 2.735 0.00623
t -0.15912 0.05225 -3.046 0.00232 # <=== the line we want
Null deviance: 48.492 on 34 degrees of freedom
Residual deviance: 34.087 on 33 degrees of freedom
AIC: 38.087
The p-value for the glm fit is $0.00232$. It shows that the probability of a 1 is not consistent with the 1's being randomly placed with respect to the ordering variable. Since the coefficient is negative, the probability of a 1 is overall decreasing as the position increases.
Here's the plot:
|
Best way to test runs
I think you have this muddled up.
(i) The Kolmogorov Smirnov test is designed to test continuous, not discrete distributions; indeed, your values consist only of 0's and 1's, yet it appears you're tes
|
40,798
|
Best way to test runs
|
You can use also a test based on the Length of the longest head run (LLHR).
In your example LLHR is too large -> you should reject the hypothesis that the sample is from Bernoulli B(1/2) distribution.
|
Best way to test runs
|
You can use also a test based on the Length of the longest head run (LLHR).
In your example LLHR is too large -> you should reject the hypothesis that the sample is from Bernoulli B(1/2) distribution
|
Best way to test runs
You can use also a test based on the Length of the longest head run (LLHR).
In your example LLHR is too large -> you should reject the hypothesis that the sample is from Bernoulli B(1/2) distribution.
|
Best way to test runs
You can use also a test based on the Length of the longest head run (LLHR).
In your example LLHR is too large -> you should reject the hypothesis that the sample is from Bernoulli B(1/2) distribution
|
40,799
|
Meaning of a "portmanteau test"?
|
As far as I know "portmanteau test" is synonymous with "omnibus test". Either term gets used in two cases:
(1) When the null hypothesis specifies values for a vector of parameters that are thought of as being on an equal footing, & the alternative is that at least one parameter value is different from that specified by the null. So the null for the ANOVA F-test is that all treatment means are zero; for the Ljung-Box test, that all autocorrelations up to a given lag are zero; &c.
(2) When a test has decent power against a wide range of alternative hypotheses: contrasted with a "directional test" with high power against a narrow range of alternatives, but low power against others. This is typically in the context of goodness of fit.
Don't get your hopes up for more exact definitions—after all, it doesn't really matter what you call a test.
|
Meaning of a "portmanteau test"?
|
As far as I know "portmanteau test" is synonymous with "omnibus test". Either term gets used in two cases:
(1) When the null hypothesis specifies values for a vector of parameters that are thought of
|
Meaning of a "portmanteau test"?
As far as I know "portmanteau test" is synonymous with "omnibus test". Either term gets used in two cases:
(1) When the null hypothesis specifies values for a vector of parameters that are thought of as being on an equal footing, & the alternative is that at least one parameter value is different from that specified by the null. So the null for the ANOVA F-test is that all treatment means are zero; for the Ljung-Box test, that all autocorrelations up to a given lag are zero; &c.
(2) When a test has decent power against a wide range of alternative hypotheses: contrasted with a "directional test" with high power against a narrow range of alternatives, but low power against others. This is typically in the context of goodness of fit.
Don't get your hopes up for more exact definitions—after all, it doesn't really matter what you call a test.
|
Meaning of a "portmanteau test"?
As far as I know "portmanteau test" is synonymous with "omnibus test". Either term gets used in two cases:
(1) When the null hypothesis specifies values for a vector of parameters that are thought of
|
40,800
|
How to calculate the confidence intervals for likelihood ratios from a 2x2 table in the presence of cells with zeroes
|
The paper by Koopman (1984) Confidence intervals for the ratio of two binomial proportions gives two methods for calculating the confidence interval. I am gonna explain the first one here as the confidence intervals can be calculated analytically (the second method uses an iterative procedure to find the confidence intervals numerically). First, consider the following 2x2 table:
Gold standard
Positive Negative
Test positive a b
Test negative c d
The likelihood ratio of a positive test is:
$$
LR_{+}=\frac{a/(a+c)}{b/(b+d)}
$$
and the likelihood ratio of a negative test is:
$$
LR_{-}=\frac{c/(a+c)}{d/(b+d)}
$$
These are basically the ratio of two proportions.
Confidence intervals using a normal approximation
Let $T = \frac{X/m}{Y/n}$, then the variable $\ln(T)$ is approximately normally distributed with approximate mean $\ln(\theta)$ and estimated variance $\widehat{\sigma}^{2}=(1/x) - (1/m) + (1/y) - (1/n)$. An approximated two-sided $1-\alpha$ confidence interval for $\theta$ is given by:
$$
\{t\cdot \exp(-\xi_{1-\alpha/2}\cdot\hat{\sigma}), t\cdot \exp(\xi_{1-\alpha/2}\cdot\hat{\sigma})\}
$$
where $\xi_{1-\alpha/2}$ is the $1-\frac{1}{2}\alpha$ quantile of the standard normal distribution $\mathcal{N}(0,1)$ (for $\alpha = 0.05$ $\xi=1.96$) and $t$ is the observed value of $T$ (in your case, $t$ would be simply the observed likelihood ratios). In your case, $T$ is simply the likelihood ratios and $x=a, m=a+c, y=b, n=b+d$ for the positive LR or $x=c, m=a+c, y=d, n=b+d$ for the negative LR. Important: The paper also states procedures if $x=0, x=m, y=0$ or $y=n$:
If you have no false-positives ($b=y=0$ in the table above), then you can just substitute $b=1/2$ to calculate the lower bound of the confidence interval.
In this post, @whuber provides a similar approach: add $1/2$ to both $x$ and $y$ and add $1$ to both the $n$ and $m$. In this particular case, add $1/2$ to $a$ and $b$ and add $1$ to $(a+c)$ and $(b+d)$ for the positive LR and add $1/2$ to $c$ and $d$ and add $1$ to $(a+c)$ and $(b+d)$ for the negative LR.
Another formulation
The above confidence interval can be written differently:
$$
LR_{x}=\exp\left[\ln\left(\frac{p_1}{p_2}\right)\pm \xi_{1-\alpha/2}\cdot \sqrt{\frac{1-p_1}{p_{1}n_{1}}+\frac{1-p_2}{p_{2}n_{2}}} \right]
$$
where $\xi_{1-\alpha/2}$ is the $1-\frac{1}{2}\alpha$ quantile of the standard normal distribution. Note: For the positive LR, $p_1=\text{sensitivity}$ and $p_2=1-\text{specificity}$ and for the negative LR, $p_1=1-\text{sensitivity}$ and $p_2=\text{specificity}$ and $n_1 = a+c, n_2=b+d$.
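As a quick numeric illustration of the log-normal interval (hypothetical counts, positive LR only, no zero cells handled):

```python
import math

def lr_pos_ci(a, b, c, d, z=1.959964):
    """95% CI for the positive likelihood ratio of the 2x2 table above,
    via the log-normal approximation; zero cells are not handled here."""
    lr = (a / (a + c)) / (b / (b + d))                      # sens / (1 - spec)
    sigma = math.sqrt(1/a - 1/(a + c) + 1/b - 1/(b + d))    # sd of ln(LR+)
    return lr * math.exp(-z * sigma), lr, lr * math.exp(z * sigma)

lo, lr, hi = lr_pos_ci(a=30, b=10, c=20, d=40)   # made-up 2x2 counts
print(round(lo, 2), round(lr, 2), round(hi, 2))  # -> 1.65 3.0 5.46
```

Note how the interval is symmetric on the log scale, not on the original scale.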
Here is a little R function that carries out the calculations:
lr.ci <- function( m, sig.level=0.95 ) {
alpha <- 1 - sig.level
a <- m[1, 1]
b <- m[1, 2]
c <- m[2, 1]
d <- m[2, 2]
spec <- d/(b+d)
sens <- a/(a+c)
lr.pos <- sens/(1 - spec)
if ( a != 0 & b != 0 ) {
sigma2 <- (1/a) - (1/(a+c)) + (1/b) - (1/(b+d))
lower.pos <- lr.pos * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- lr.pos * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( a == 0 & b == 0 ) {
lower.pos <- 0
upper.pos <- Inf
} else if ( a == 0 & b != 0 ) {
a.temp <- (1/2)
spec.temp <- d/(b+d)
sens.temp <- a.temp/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
lower.pos <- 0
sigma2 <- (1/a.temp) - (1/(a.temp+c)) + (1/b) - (1/(b+d))
upper.pos <- lr.pos.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( a != 0 & b == 0 ) {
b.temp <- (1/2)
spec.temp <- d/(b.temp+d)
sens.temp <- a/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
sigma2 <- (1/a) - (1/(a+c)) + (1/b.temp) - (1/(b.temp+d))
lower.pos <- lr.pos.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- Inf
} else if ( (a == (a+c)) & (b == (b+d)) ) {
a.temp <- a - (1/2)
b.temp <- b - (1/2)
spec.temp <- d/(b.temp+d)
sens.temp <- a.temp/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
sigma2 <- (1/a.temp) - (1/(a.temp+c)) + (1/b.temp) - (1/(b.temp+d))
lower.pos <- lr.pos.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- lr.pos.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
}
lr.neg <- (1 - sens)/spec
if ( c != 0 & d != 0 ) {
sigma2 <- (1/c) - (1/(a+c)) + (1/d) - (1/(b+d))
lower.neg <- lr.neg * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- lr.neg * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( c == 0 & d == 0 ) {
lower.neg<- 0
upper.neg <- Inf
} else if ( c == 0 & d != 0 ) {
c.temp <- (1/2)
spec.temp <- d/(b+d)
sens.temp <- a/(a+c.temp)
lr.neg.temp <- (1 - sens.temp)/spec.temp
lower.neg <- 0
sigma2 <- (1/c.temp) - (1/(a+c)) + (1/d) - (1/(b+d))
upper.neg <- lr.neg.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( c != 0 & d == 0 ) {
d.temp <- (1/2)
spec.temp <- d.temp/(b+d)
sens.temp <- a/(a+c)
lr.neg.temp <- (1 - sens.temp)/spec.temp
sigma2 <- (1/c) - (1/(a+c)) + (1/d.temp) - (1/(b+d))
lower.neg <- lr.neg.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- Inf
} else if ( (c == (a+c)) & (d == (b+d)) ) {
c.temp <- c - (1/2)
d.temp <- d - (1/2)
spec.temp <- d.temp/(b+d)
sens.temp <- a/(a+c.temp)
lr.neg.temp <- (1 - sens.temp)/spec.temp
sigma2 <- (1/c.temp) - (1/(a+c)) + (1/d.temp) - (1/(b+d))
lower.neg <- lr.neg.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- lr.neg.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
}
list(
lr.pos=lr.pos, lower.pos=lower.pos, upper.pos=upper.pos,
lr.neg=lr.neg, lower.neg=lower.neg, upper.neg=upper.neg
)
}
|
How to calculate the confidence intervals for likelihood ratios from a 2x2 table in the presence of
|
The paper by Koopman (1984) Confidence intervals for the ratio of two binomial proportions gives two methods for calculating the confidence interval. I am gonna explain the first one here as the confi
|
How to calculate the confidence intervals for likelihood ratios from a 2x2 table in the presence of cells with zeroes
The paper by Koopman (1984) Confidence intervals for the ratio of two binomial proportions gives two methods for calculating the confidence interval. I am gonna explain the first one here as the confidence intervals can be calculated analytically (the second method uses an iterative procedure to find the confidence intervals numerically). First, consider the following 2x2 table:
Gold standard
Positive Negative
Test positive a b
Test negative c d
The likelihood ratio of a positive test is:
$$
LR_{+}=\frac{a/(a+c)}{b/(b+d)}
$$
and the likelihood ratio of a negative test is:
$$
LR_{-}=\frac{c/(a+c)}{d/(b+d)}
$$
These are basically the ratio of two proportions.
Confidence intervals using a normal approximation
Let $T = \frac{X/m}{Y/n}$, then the variable $\ln(T)$ is approximately normally distributed with approximate mean $\ln(\theta)$ and estimated variance $\widehat{\sigma}^{2}=(1/x) - (1/m) + (1/y) - (1/n)$. An approximated two-sided $1-\alpha$ confidence interval for $\theta$ is given by:
$$
\{t\cdot \exp(-\xi_{1-\alpha/2}\cdot\hat{\sigma}), t\cdot \exp(\xi_{1-\alpha/2}\cdot\hat{\sigma})\}
$$
where $\xi_{1-\alpha/2}$ is the $1-\frac{1}{2}\alpha$ quantile of the standard normal distribution $\mathcal{N}(0,1)$ (for $\alpha = 0.05$ $\xi=1.96$) and $t$ is the observed value of $T$ (in your case, $t$ would be simply the observed likelihood ratios). In your case, $T$ is simply the likelihood ratios and $x=a, m=a+c, y=b, n=b+d$ for the positive LR or $x=c, m=a+c, y=d, n=b+d$ for the negative LR. Important: The paper also states procedures if $x=0, x=m, y=0$ or $y=n$:
If you have no false-positives ($b=y=0$ in the table above), then you can just substitute $b=1/2$ to calculate the lower bound of the confidence interval.
In this post, @whuber provides a similar approach: add $1/2$ to both $x$ and $y$ and add $1$ to both the $n$ and $m$. In this particular case, add $1/2$ to $a$ and $b$ and add $1$ to $(a+c)$ and $(b+d)$ for the positive LR and add $1/2$ to $c$ and $d$ and add $1$ to $(a+c)$ and $(b+d)$ for the negative LR.
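For example (made-up counts with an empty false-positive cell), the first substitution gives a finite lower limit even though the observed $LR_{+}$ is infinite:

```r
a <- 45; b <- 0; c <- 5; d <- 50    # hypothetical: no false positives
b.temp <- 1/2                       # substitute b = 1/2 for the lower bound
lr.pos.temp <- (a/(a + c)) / (b.temp/(b.temp + d))
sigma2 <- (1/a) - (1/(a + c)) + (1/b.temp) - (1/(b.temp + d))
lower <- lr.pos.temp * exp(-qnorm(0.975) * sqrt(sigma2))
c(lower = lower, upper = Inf)       # lower bound comes out around 5.8
```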
Another formulation
The above confidence interval can be written differently:
$$
LR_{x}=\exp\left[\ln\left(\frac{p_1}{p_2}\right)\pm \xi_{1-\alpha/2}\cdot \sqrt{\frac{1-p_1}{p_{1}n_{1}}+\frac{1-p_2}{p_{2}n_{2}}} \right]
$$
where $\xi_{1-\alpha/2}$ is the $1-\frac{1}{2}\alpha$ quantile of the standard normal distribution. Note: For the positive LR, $p_1=\text{sensitivity}$ and $p_2=1-\text{specificity}$ and for the negative LR, $p_1=1-\text{sensitivity}$ and $p_2=\text{specificity}$ and $n_1 = a+c, n_2=b+d$.
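To see that the two formulations agree, here is a quick R check of the positive-LR interval on the same made-up table as above ($a=45$, $b=15$, $c=5$, $d=35$; the counts are illustrative only):

```r
a <- 45; b <- 15; c <- 5; d <- 35   # hypothetical counts
n1 <- a + c; n2 <- b + d
p1 <- a/n1                          # sensitivity
p2 <- b/n2                          # 1 - specificity
z  <- qnorm(0.975)                  # xi_{1-alpha/2} for alpha = 0.05

# First formulation: t * exp(+/- z*sigma), sigma2 = 1/x - 1/m + 1/y - 1/n
sigma2 <- (1/a) - (1/n1) + (1/b) - (1/n2)
ci1 <- (p1/p2) * exp(c(-1, 1) * z * sqrt(sigma2))

# Second formulation, written in terms of p1, p2, n1, n2
ci2 <- exp(log(p1/p2) + c(-1, 1) * z * sqrt((1-p1)/(p1*n1) + (1-p2)/(p2*n2)))

all.equal(ci1, ci2)  # TRUE: (1-p1)/(p1*n1) = 1/a - 1/(a+c), term by term
```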
Here is a little R function that carries out the calculations:
# Positive and negative likelihood ratios from a 2x2 table m
# (rows: test result, columns: gold standard), with log-normal
# confidence intervals; zero cells are handled as described above.
lr.ci <- function( m, sig.level=0.95 ) {
alpha <- 1 - sig.level
a <- m[1, 1]
b <- m[1, 2]
c <- m[2, 1]
d <- m[2, 2]
spec <- d/(b+d)
sens <- a/(a+c)
lr.pos <- sens/(1 - spec)
# exclude the c = d = 0 case here so the final branch below is reachable
if ( a != 0 & b != 0 & !(a == (a+c) & b == (b+d)) ) {
sigma2 <- (1/a) - (1/(a+c)) + (1/b) - (1/(b+d))
lower.pos <- lr.pos * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- lr.pos * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( a == 0 & b == 0 ) {
lower.pos <- 0
upper.pos <- Inf
} else if ( a == 0 & b != 0 ) {
a.temp <- (1/2)
spec.temp <- d/(b+d)
sens.temp <- a.temp/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
lower.pos <- 0
sigma2 <- (1/a.temp) - (1/(a.temp+c)) + (1/b) - (1/(b+d))
upper.pos <- lr.pos.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( a != 0 & b == 0 ) {
b.temp <- (1/2)
spec.temp <- d/(b.temp+d)
sens.temp <- a/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
sigma2 <- (1/a) - (1/(a+c)) + (1/b.temp) - (1/(b.temp+d))
lower.pos <- lr.pos.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- Inf
} else if ( (a == (a+c)) & (b == (b+d)) ) {
a.temp <- a - (1/2)
b.temp <- b - (1/2)
spec.temp <- d/(b.temp+d)
sens.temp <- a.temp/(a+c)
lr.pos.temp <- sens.temp/(1 - spec.temp)
sigma2 <- (1/a.temp) - (1/(a.temp+c)) + (1/b.temp) - (1/(b.temp+d))
lower.pos <- lr.pos.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.pos <- lr.pos.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
}
lr.neg <- (1 - sens)/spec
# exclude the a = b = 0 case here so the final branch below is reachable
if ( c != 0 & d != 0 & !(c == (a+c) & d == (b+d)) ) {
sigma2 <- (1/c) - (1/(a+c)) + (1/d) - (1/(b+d))
lower.neg <- lr.neg * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- lr.neg * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( c == 0 & d == 0 ) {
lower.neg<- 0
upper.neg <- Inf
} else if ( c == 0 & d != 0 ) {
c.temp <- (1/2)
spec.temp <- d/(b+d)
sens.temp <- a/(a+c.temp)
lr.neg.temp <- (1 - sens.temp)/spec.temp
lower.neg <- 0
sigma2 <- (1/c.temp) - (1/(a+c.temp)) + (1/d) - (1/(b+d))
upper.neg <- lr.neg.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
} else if ( c != 0 & d == 0 ) {
d.temp <- (1/2)
spec.temp <- d.temp/(b+d)
sens.temp <- a/(a+c)
lr.neg.temp <- (1 - sens.temp)/spec.temp
sigma2 <- (1/c) - (1/(a+c)) + (1/d.temp) - (1/(b+d.temp))
lower.neg <- lr.neg.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- Inf
} else if ( (c == (a+c)) & (d == (b+d)) ) {
c.temp <- c - (1/2)
d.temp <- d - (1/2)
spec.temp <- d.temp/(b+d)
sens.temp <- a/(a+c.temp)
lr.neg.temp <- (1 - sens.temp)/spec.temp
sigma2 <- (1/c.temp) - (1/(a+c.temp)) + (1/d.temp) - (1/(b+d.temp))
lower.neg <- lr.neg.temp * exp(-qnorm(1-(alpha/2))*sqrt(sigma2))
upper.neg <- lr.neg.temp * exp(qnorm(1-(alpha/2))*sqrt(sigma2))
}
list(
lr.pos=lr.pos, lower.pos=lower.pos, upper.pos=upper.pos,
lr.neg=lr.neg, lower.neg=lower.neg, upper.neg=upper.neg
)
}