| idx | question | answer | question_cut | answer_cut | conversation | conversation_cut |
|---|---|---|---|---|---|---|
| int64, 1 to 56k | string, 15 to 155 chars | string, 2 to 29.2k chars | string, 15 to 100 chars | string, 2 to 200 chars | string, 47 to 29.3k chars | string, 47 to 301 chars |
14,701 | Rand index calculation
I have an implementation of this in R which I will explain:
TP (a in the code) is the sum of every cell choose 2 (as in the original question, 0 choose 2 and 1 choose 2 both equal 0).
FN (b) is the sum of each row choose 2, all summed, less TP, where each row sum represents the number of documents in each true class.
This sum counts all document pairs that are similar and in the same cluster (TP) plus all pairs that are similar but not in the same cluster (FN).
So this is (TP + FN) - TP = FN.
FP (c) is calculated similarly: the sum of each column choose 2, all summed, less TP. In this case each column sum represents the number of documents in each cluster.
So this sum counts all pairs that are similar and in the same cluster (TP) plus all pairs that are not similar but are in the same cluster (FP).
So this is (TP + FP) - TP = FP.
With these 3 calculated, the remaining calculation of TN is straightforward: the sum of the table choose 2, less TP, FP and FN, equals TN (d).
The only query I have with this method is its definition of TP. Using the terminology in this question, I don't understand why the 2 a's in cluster 3 are considered TP. I have found this both here and in the related textbook. However, I do understand their calculation under the assumption that their TP calculation is correct.
Hope this helps
FMeasure <- function(x, y, beta)
{
    x <- as.vector(x)
    y <- as.vector(y)
    if (length(x) != length(y))
        stop("arguments must be vectors of the same length")
    tab <- table(x, y)                     # contingency table: true classes x clusters
    if (all(dim(tab) == c(1, 1)))
        return(1)
    a <- sum(choose(tab, 2))               # TP: same class, same cluster
    b <- sum(choose(rowSums(tab), 2)) - a  # FN: same class, different clusters
    c <- sum(choose(colSums(tab), 2)) - a  # FP: different classes, same cluster
    d <- choose(sum(tab), 2) - a - b - c   # TN: different classes, different clusters
    ## Precision
    P <- a / (a + c)
    ## Recall
    R <- a / (a + b)
    ## F-Measure
    Fm <- (beta^2 + 1) * P * R / (beta^2 * P + R)
    return(Fm)
}
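If it helps to sanity-check the function outside R, here is a rough Python equivalent of the same pair counting. The function name and the toy class/cluster vectors are mine, chosen to match the 17-point example discussed elsewhere in this thread; this is a sketch, not part of the original answer.

```python
from collections import Counter
from math import comb

def f_measure(x, y, beta=1.0):
    # tab[(class, cluster)] = number of points of that class in that cluster
    tab = Counter(zip(x, y))
    a = sum(comb(n, 2) for n in tab.values())             # TP
    b = sum(comb(n, 2) for n in Counter(x).values()) - a  # FN
    c = sum(comb(n, 2) for n in Counter(y).values()) - a  # FP
    p = a / (a + c)   # precision
    r = a / (a + b)   # recall
    return (beta**2 + 1) * p * r / (beta**2 * p + r)

# Toy data: 17 points, 3 true classes (x/o/d), 3 clusters,
# matching the grid used in the other answers to this question.
classes  = ["x"] * 5 + ["o"] + ["x"] + ["o"] * 4 + ["d"] + ["x"] * 2 + ["d"] * 3
clusters = [1] * 6 + [2] * 6 + [3] * 5
```

With beta = 1 this gives precision 20/40, recall 20/44, and an F-measure of 10/21, consistent with the pair counts (TP = 20, FN = 24, FP = 20) derived in the other answers.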
14,702 | Rand index calculation
Below is the picture which describes your question:
To solve this problem, you need to consider this matrix:
+--------------------------------+--------------------------------------+
| TP: | FN: |
| Same class + same cluster | Same class + different clusters |
+--------------------------------+--------------------------------------+
| FP: | TN: |
| different class + same cluster | different class + different clusters |
+--------------------------------+--------------------------------------+
This is how we calculate TP, FN, FP for Rand Index:
NOTE: In the above equations, I used a triangle to show the diamond in the picture.
For example, for the False Negatives, we should pick pairs from the same class but in different clusters. So, we can pick
1 X from cluster 1 and 1 X from cluster 2 = ${5 \choose 1}{1 \choose 1} = 5$
1 X from cluster 1 and 1 X from cluster 3 = ${5 \choose 1}{2 \choose 1} = 10$
1 O from cluster 1 and 1 O from cluster 2 = ${1 \choose 1}{4 \choose 1} = 4$
1 X from cluster 2 and 1 X from cluster 3 = ${1 \choose 1}{2 \choose 1} = 2$
1 $\diamond$ from cluster 2 and 1 $\diamond$ from cluster 3 = ${1 \choose 1}{3 \choose 1} = 3$
Finally, we will have $24$ ($=5+10+4+2+3$) such pairs.
The same is for the rest of the equations.
The hardest part is TN, which can be done as in the picture below:
There are shorter ways to calculate the Rand index, but this is the full, step-by-step calculation. Finally, the table of pair counts looks as follows:
+--------+--------+
| TP: 20 | FN: 24 |
+--------+--------+
| FP: 20 | TN: 72 |
+--------+--------+
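The four counts can also be reproduced by brute-force enumeration of all point pairs. This Python sketch (mine, not the answer's) builds one (class, cluster) tuple per point from the counts used above: cluster 1 has 5 X's and 1 O; cluster 2 has 1 X, 4 O's and 1 diamond; cluster 3 has 2 X's and 3 diamonds.

```python
from itertools import combinations

points = ([("x", 1)] * 5 + [("o", 1)]
          + [("x", 2)] + [("o", 2)] * 4 + [("d", 2)]
          + [("x", 3)] * 2 + [("d", 3)] * 3)

tp = fn = fp = tn = 0
for (c1, k1), (c2, k2) in combinations(points, 2):
    if c1 == c2 and k1 == k2:
        tp += 1      # same class, same cluster
    elif c1 == c2:
        fn += 1      # same class, different clusters
    elif k1 == k2:
        fp += 1      # different classes, same cluster
    else:
        tn += 1      # different classes, different clusters
```

This yields TP = 20, FN = 24, FP = 20, TN = 72, matching the table, and the four counts sum to the 136 total pairs.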
14,703 | Rand index calculation
You can compute TN and FN the same way.
Just switch the roles of labels and clusters.
a) 1 1 1 1 1 2 3 3
b) 1 2 2 2 2
c) 2 3 3 3 3
... then perform the same computations.
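To see the role swap concretely: transposing the class-by-cluster contingency table exchanges the row-based and column-based pair counts. A small Python sketch (the function name is mine; the 3x3 table matches the worked example elsewhere in this thread):

```python
from math import comb

def pair_counts(tab):
    # a = pairs together in both; b = same row, split across columns;
    # c = same column, mixed rows
    a = sum(comb(n, 2) for row in tab for n in row)
    b = sum(comb(sum(row), 2) for row in tab) - a
    c = sum(comb(sum(col), 2) for col in zip(*tab)) - a
    return a, b, c

tab = [[5, 1, 2], [1, 4, 0], [0, 1, 3]]   # rows: classes, columns: clusters
transposed = [list(col) for col in zip(*tab)]
```

`pair_counts(tab)` gives (20, 24, 20) and `pair_counts(transposed)` gives (20, 20, 24): switching labels and clusters leaves the agreement count alone and swaps the other two.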
14,704 | Rand index calculation
I think I've reverse-engineered the false negatives (FN) out of it. For the true positives, you made 4 groups that were positive: in cluster 1, you had the five a's; in cluster 2, you had the 4 b's; in cluster 3 you had the 3 c's AND the 2 a's.
So, for the false negatives:
Start with the a's in cluster 1; there are 5 correctly placed a's in cluster 1. You have 1 false a in cluster 2, and two false a's in cluster 3. That gives (5 1) and (5 2).
Then for the b's. There are 4 correctly placed b's you calculated earlier. You have one false b in cluster 1, and that's it. That gives you (4 1) for the b's.
Then for the c's. You have one false c in cluster 2, with three correct ones in cluster 3, so there's (3 1).
After that, we can't forget about that pair of a's in cluster 3 that we called a true positive. With respect to that pair, we have 1 false a in cluster 2, giving (2 1). We can't also count the a's in cluster 1 against it, because those cross-cluster pairs were already counted in the (5 2) term above.
Therefore, you have (5 1) + (5 2) + (4 1) + (3 1) + (2 1), which equals 5 + 10 + 4 + 3 + 2 = 24. That's where the 24 comes from. Then subtract it, together with the TP (20) and FP (20) found earlier, from the 136 total pairs to get the true negatives (TN): 136 - 20 - 20 - 24 = 72.
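As a quick arithmetic check of those counts (a sketch; the TP and FP values of 20 each are taken from the other answers to this question, not derived here):

```python
from math import comb

# FN terms: products of same-class counts across different clusters
fn = 5*1 + 5*2 + 4*1 + 3*1 + 2*1   # the five terms listed above
total_pairs = comb(17, 2)          # all pairs of the 17 documents
tp, fp = 20, 20                    # from the earlier TP/FP work
tn = total_pairs - tp - fp - fn    # subtract TP and FP as well, not FN alone
```

This confirms FN = 24 and TN = 72 out of 136 total pairs.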
14,705 | Rand index calculation
Here is how to calculate every metric for the Rand index without subtracting.
Side notes for easier understanding:
The Rand index is based on comparing pairs of elements. The theory says that similar pairs of elements should be placed in the same cluster, while dissimilar pairs of elements should be placed in separate clusters.
RI doesn't care about the difference in the number of clusters; it only cares about true/false pairs of elements.
Based on this, the Rand index is calculated as RI = (a + b) / (a + b + c + d), with a, b, c, d as defined below.
OK, let's dive in. Here is our example:
| 1 | 2 | 3
--+---+---+---
x | 5 | 1 | 2
--+---+---+---
o | 1 | 4 | 0
--+---+---+---
β | 0 | 1 | 3
In the denominator we have the total number of possible pairs, which is (17 2) = 136.
Now let's calculate every metric for better understanding:
A) Let's start with the easy one, a (True Positives, or correct similar).
It means you need to find all pairs of elements whose true label and cluster both agree.
On the grid it means: take the sum of possible pairs within each cell.
a = (5 2) + (1 2) + (2 2) + (1 2) + (4 2) + (0 2) + (0 2) + (1 2) + (3 2) =
= 10 + 0 + 1 + 0 + 6 + 0 + 0 + 0 + 3 = 20
C) Now, let's do c (False Negatives, or incorrect dissimilar: pairs we wrongly separated).
It means: find all pairs that we placed in different clusters but which should be together.
On the grid it means: find all possible pairs between any 2 cells in the same row (horizontal).
c = 5*1 + 5*2 + 1*2 +
+ 1*4 + 1*0 + 4*0 +
+ 0*1 + 0*3 + 1*3 =
= 5 + 10 + 2 + 4 + 0 + 0 + 0 + 0 + 3 = 24
D) Calculating d (False Positives, or incorrect similar: pairs we wrongly grouped).
It means: find all pairs that we placed together but which should be in different clusters.
On the grid it means: find all possible pairs between any 2 cells in the same column (vertical).
d = 5*1 + 5*0 + 1*0 +
+ 1*4 + 1*1 + 4*1 +
+ 2*0 + 2*3 + 0*3 =
= 5 + 0 + 0 + 4 + 1 + 4 + 0 + 6 + 0 = 20
B) And finally, let's do b (True Negatives, or correct dissimilar).
It means: find all pairs that we placed in different clusters and which should indeed be in different clusters.
On the grid it means: find all possible pairs between any 2 cells that share neither a row nor a column.
Here is which numbers should be multiplied, to make clearer what I mean:
b = x1*o2 + x1*o3 + x1*β2 + x1*β3 +
+ x2*o1 + x2*o3 + x2*β1 + x2*β3 +
+ x3*o1 + x3*o2 + x3*β1 + x3*β2 +
+ o1*β2 + o1*β3 +
+ o2*β1 + o2*β3 +
+ o3*β1 + o3*β2
In numbers:
b = 5*4 + 5*0 + 5*1 + 5*3 +
+ 1*1 + 1*0 + 1*0 + 1*3 +
+ 2*1 + 2*4 + 2*0 + 2*1 +
+ 1*1 + 1*3 +
  + 4*0 + 4*3 +
  + 0*0 + 0*1 = 72
And at the end the Rand index equals: (20 + 72) / 136 ≈ 0.676
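The whole grid walkthrough can be condensed into a short Python sketch that enumerates cell pairs directly, with no subtraction. The variable names follow the answer's labels (a = TP, b = TN, c = FN, d = FP); the code itself is my illustration.

```python
from itertools import combinations
from math import comb

tab = [[5, 1, 2],   # x
       [1, 4, 0],   # o
       [0, 1, 3]]   # the third symbol (diamond) in the grid above

cells = [(i, j, tab[i][j]) for i in range(3) for j in range(3)]
a = sum(comb(v, 2) for _, _, v in cells)   # pairs inside one cell (TP)
c = sum(v * w for (i, j, v), (k, l, w) in combinations(cells, 2)
        if i == k and j != l)              # same row, different columns (FN)
d = sum(v * w for (i, j, v), (k, l, w) in combinations(cells, 2)
        if j == l and i != k)              # same column, different rows (FP)
b = sum(v * w for (i, j, v), (k, l, w) in combinations(cells, 2)
        if i != k and j != l)              # different row and column (TN)
ri = (a + b) / (a + b + c + d)
```

This reproduces a = 20, c = 24, d = 20, b = 72 and RI = 92/136 ≈ 0.676.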
14,706 | Correlation coefficient between a (non-dichotomous) nominal variable and a numeric (interval) or an ordinal variable
Nominal vs Interval
The most classic "correlation" measure between a nominal and an interval ("numeric") variable is Eta, also called correlation ratio, and equal to the root R-square of the one-way ANOVA (with p-value = that of the ANOVA). Eta can be seen as a symmetric association measure, like correlation, because Eta of ANOVA (with the nominal as independent, numeric as dependent) is equal to Pillai's trace of multivariate regression (with the numeric as independent, set of dummy variables corresponding to the nominal as dependent).
A more subtle measure is the intraclass correlation coefficient (ICC). Whereas Eta grasps only the difference between groups (defined by the nominal variable) with respect to the numeric variable, ICC simultaneously also measures the coordination or agreement between numeric values inside groups; in other words, ICC (particularly the original unbiased "pairing" ICC version) stays on the level of values while Eta operates on the level of statistics (group means vs group variances).
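Eta as the root R-square of a one-way ANOVA can be sketched in a few lines of Python; the group labels and values below are invented purely for illustration.

```python
# Correlation ratio (eta) from one-way ANOVA sums of squares.
groups = {
    "A": [2.0, 3.0, 4.0],
    "B": [6.0, 7.0, 8.0],
    "C": [1.0, 2.0, 3.0],
}
values = [v for g in groups.values() for v in g]
grand_mean = sum(values) / len(values)
ss_total = sum((v - grand_mean) ** 2 for v in values)
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
eta = (ss_between / ss_total) ** 0.5   # root of the ANOVA R-square
```

For this toy data, eta-squared is SS_between / SS_total = 42/48 = 0.875, so eta ≈ 0.935: most of the variance in the numeric variable lies between the nominal groups.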
Nominal vs Ordinal
The question of a "correlation" measure between a nominal and an ordinal variable is less clear-cut. The reason for the difficulty is that the ordinal scale is, by its nature, more "mystic" or "twisted" than interval or nominal scales. No wonder that statistical analyses designed specifically for ordinal data remain relatively poorly developed so far.
One way might be to convert your ordinal data into ranks and then compute Eta as if the ranks were interval data. The p-value of such Eta = that of Kruskal-Wallis analysis. This approach seems warranted due to the same reasoning as why Spearman rho is used to correlate two ordinal variables. That logic is "when you don't know the interval widths on the scale, cut the Gordian knot by linearizing any possible monotonicity: go rank the data".
Another approach (possibly more rigorous and flexible) would be to use ordinal logistic regression with the ordinal variable as the DV and the nominal one as the IV. The square root of Nagelkerke's pseudo R-square (with the regression's p-value) is another correlation measure for you. Note that you can experiment with various link functions in ordinal regression. This association is, however, not symmetric: the nominal variable is assumed independent.
Yet another approach might be to find such a monotonic transformation of ordinal data into interval - instead of ranking of the penultimate paragraph - that would maximize R (i.e. Eta) for you. This is categorical regression (= linear regression with optimal scaling).
Still another approach is to perform classification tree, such as CHAID, with the ordinal variable as predictor. This procedure will bin together (hence it is the approach opposite to the previous one) adjacent ordered categories which do not distinguish among categories of the nominal predictand. Then you could rely on Chi-square-based association measures (such as Cramer's V) as if you correlate nominal vs nominal variables.
And @Michael in his comment suggests yet one more way - a special coefficient called Freeman's Theta.
So, we have arrived so far at these opportunities: (1) Rank, then compute Eta; (2) Use ordinal regression; (3) Use categorical regression ("optimally" transforming ordinal variable into interval); (4) Use classification tree ("optimally" reducing the number of ordered categories); (5) Use Freeman's Theta.
14,707 | Correlation coefficient between a (non-dichotomous) nominal variable and a numeric (interval) or an ordinal variable
Do a one-way anova on the response, with city as the grouping variable. The $F$ and $p$ it gives should be the same as the $F$ and $p$ from the regression of the response on the dummy-coded cities, and $SS_{between\, cities}/SS_{total}$ should equal the multiple $R^2$ from the regression. The multiple $R$ is the correlation of city with the response.
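A small Python check of that equivalence: in a regression on dummy-coded groups, the fitted value for each case is simply its group mean, so R-square reduces to SS_between / SS_total. The cities and values below are invented for illustration.

```python
cities = {"NYC": [10.0, 12.0], "LA": [20.0, 22.0], "CHI": [14.0, 16.0]}
ys = [y for g in cities.values() for y in g]
grand = sum(ys) / len(ys)
# fitted value from the dummy regression = group mean, repeated per case
fitted = [sum(g) / len(g) for g in cities.values() for _ in g]
ss_total = sum((y - grand) ** 2 for y in ys)
ss_between = sum((f - grand) ** 2 for f in fitted)
r2 = ss_between / ss_total           # multiple R-square
multiple_r = r2 ** 0.5               # "correlation" of city with the response
```

For this toy data, R-square = 304/322 ≈ 0.944, identical to what a one-way ANOVA's SS_between / SS_total would give.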
14,708 | What is the optimal distance function for individuals when attributes are nominal?
Technically, to compute a (dis)similarity measure between individuals on nominal attributes, most programs first recode each nominal variable into a set of dummy binary variables and then compute some measure for binary variables. Here are the formulas of some frequently used binary similarity and dissimilarity measures.
What are dummy variables (also called one-hot coding)? Below are 5 individuals and two nominal variables (A with 3 categories, B with 2 categories). 3 dummies are created in place of A, 2 dummies in place of B.
ID A B A1 A2 A3 B1 B2
1 2 1 0 1 0 1 0
2 1 2 1 0 0 0 1
3 3 2 0 0 1 0 1
4 1 1 1 0 0 1 0
5 2 1 0 1 0 1 0
(There is no need to eliminate one dummy variable as "redundant" as we typically would do it in regression with dummies. It is not practised in clustering, albeit in special situations you might consider that option.)
There are many measures for binary variables, however, not all of them logically suit dummy binary variables, i.e. former nominal ones. You see, for a nominal variable, the fact "the 2 individuals match" and the fact "the 2 individuals don't match" are of equal importance. But consider popular Jaccard measure $\frac{a}{a+b+c}$, where
a - number of dummies 1 for both individuals
b - number of dummies 1 for this and 0 for that
c - number of dummies 0 for this and 1 for that
d - number of dummies 0 for both
Here the mismatch consists of two variants, $b$ and $c$; but for us, as already said, each of them is of the same importance as the match $a$. Hence we should double-weight $a$ and get the formula $\frac{2a}{2a+b+c}$, known as the Dice (after Lee Dice) or Czekanowski-Sorensen measure. It is more appropriate for dummy variables. Indeed, the famous composite Gower coefficient (which is recommended for you with your nominal attributes) is exactly equal to Dice when all the attributes are nominal. Note also that for dummy variables the Dice measure (between individuals) = Ochiai measure (which is simply a cosine) = Kulczynski 2 measure. And more for your information: 1 - Dice = binary Lance-Williams distance, also known as Bray-Curtis distance. Look how many synonyms - you are sure to find something of that in your software!
The intuitive validity of Dice similarity coefficient comes from the fact that it is simply the co-occurence proportion (or relative agreement). For the data snippet above, take nominal column A and compute 5x5 square symmetric matrix with either 1 (both individuals fell in the same category) or 0 (not in the same category). Compute likewise the matrix for B.
A 1 2 3 4 5 B 1 2 3 4 5
_____________ _____________
1| 1 1| 1
2| 0 1 2| 0 1
3| 0 0 1 3| 0 1 1
4| 0 1 0 1 4| 1 0 0 1
5| 1 0 0 0 1 5| 1 0 0 0 1
Sum the corresponding entries of the two matrices and divide by 2 (the number of nominal variables) - and there you have the matrix of Dice coefficients. (So, actually, you don't have to create dummies to compute Dice; with matrix operations you can probably do it faster in the way just described.) See a related topic on Dice for the association of nominal attributes.
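A quick Python sketch of Dice from the one-hot table above (the dictionary layout and function name are mine). It also illustrates the co-occurrence reading: the Dice value equals the proportion of nominal variables on which the two individuals agree.

```python
# Columns: A1 A2 A3 B1 B2, one row per individual from the table above.
dummies = {
    1: (0, 1, 0, 1, 0),
    2: (1, 0, 0, 0, 1),
    3: (0, 0, 1, 0, 1),
    4: (1, 0, 0, 1, 0),
    5: (0, 1, 0, 1, 0),
}

def dice(u, v):
    a = sum(x & y for x, y in zip(u, v))    # 1-1 matches
    bc = sum(x ^ y for x, y in zip(u, v))   # mismatches, b + c
    return 2 * a / (2 * a + bc)
```

Individuals 1 and 5 are identical (Dice = 1), individuals 1 and 4 agree only on B (Dice = 0.5), and individuals 1 and 2 agree on nothing (Dice = 0).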
Albeit Dice is the most apparent measure to use when you want a (dis)similarity function between cases with categorical attributes, other binary measures could be used - if you find that their formulas satisfy the considerations above about your nominal data.
Measures like Simple Matching (SM, or Rand) $\frac{a+d}{a+b+c+d}$, which contain $d$ in the numerator, won't suit you, on the grounds that they treat 0-0 (both individuals lacking a specific attribute/category) as a match, which is obviously nonsense with originally nominal, qualitative features. So check the formula of the similarity or dissimilarity you plan to use with the sets of dummy variables: if it has or implies $d$ as grounds for sameness, don't use that measure for nominal data. For example, squared Euclidean distance, whose formula with binary data becomes just $b+c$ (and in this case is synonymous with Manhattan distance or Hamming distance), does treat $d$ as a basis of sameness. Actually, the squared Euclidean distance equals $p(1-SM)$, where $p$ is the number of binary attributes; thus Euclidean distance carries the same information as SM and shouldn't be applied to originally nominal data.
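The identity between squared Euclidean distance and simple matching on binary data is easy to verify; here is a minimal check using two individuals from the dummy table above (a sketch, not part of the original answer).

```python
# With binary data: squared Euclidean distance = b + c = p * (1 - SM).
u = (0, 1, 0, 1, 0)   # individual 1 from the dummy table above
v = (1, 0, 0, 1, 0)   # individual 4
p = len(u)
sq_euclid = sum((x - y) ** 2 for x, y in zip(u, v))   # counts mismatches b + c
sm = sum(x == y for x, y in zip(u, v)) / p            # simple-matching similarity
```

Here the two individuals mismatch on the A1 and A2 dummies only, so the squared distance is 2, and indeed p * (1 - SM) = 5 * (1 - 3/5) = 2.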
But...
Having read the previous "theoretical" paragraph, I realized that - in spite of what I wrote - the majority of binary coefficients (also those using $d$) will in practice do most of the time. I verified that, with dummy variables obtained from a number of nominal ones, the Dice coefficient is related strictly functionally to a number of other binary measures (the acronym is the measure's keyword in SPSS):
relation with Dice
Similarities
Russell and Rao (simple joint prob) RR proportional
Simple matching (or Rand) SM linear
Jaccard JACCARD monotonic
Sokal and Sneath 1 SS1 monotonic
Rogers and Tanimoto RT monotonic
Sokal and Sneath 2 SS2 monotonic
Sokal and Sneath 4 SS4 linear
Hamann HAMANN linear
Phi (or Pearson) correlation PHI linear
Dispersion similarity DISPER linear
Dissimilarities
Euclidean distance BEUCLID monotonic
Squared Euclidean distance BSEUCLID linear
Pattern difference PATTERN monotonic (linear w/o d term omitted from formula)
Variance dissimilarity VARIANCE linear
Since in many applications of a proximity matrix, such as in many methods of cluster analysis, results will not change or will change smoothly under a linear (and sometimes even under a monotonic) transform of proximities, it appears one may be justified in using a vast number of binary measures besides Dice and get the same or similar results. But you should first consider/explore how the specific method (for example, a linkage in hierarchical clustering) reacts to a given transformation of proximities.
If your planned clustering or MDS analysis is sensitive to monotonic transforms of distances you better refrain from using measures noted as "monotonic" in the table above (and thus yes, it isn't good idea to use Jaccard similarity or nonsquared euclidean distance with dummy, i.e. former nominal, attributes).
|
What is the optimal distance function for individuals when attributes are nominal?
|
Technically to compute a dis(similarity) measure between individuals on nominal attributes most programs first recode each nominal variable into a set of dummy binary variables and then compute some m
|
What is the optimal distance function for individuals when attributes are nominal?
Technically, to compute a dis(similarity) measure between individuals on nominal attributes, most programs first recode each nominal variable into a set of dummy binary variables and then compute some measure for binary variables. Here are the formulas of some frequently used binary similarity and dissimilarity measures.
What are dummy variables (also called one-hot encoding)? Below are 5 individuals and two nominal variables (A with 3 categories, B with 2 categories). 3 dummies are created in place of A, 2 dummies in place of B.
ID A B A1 A2 A3 B1 B2
1 2 1 0 1 0 1 0
2 1 2 1 0 0 0 1
3 3 2 0 0 1 0 1
4 1 1 1 0 0 1 0
5 2 1 0 1 0 1 0
(There is no need to eliminate one dummy variable as "redundant", as we typically would in regression with dummies. That is not practised in clustering, albeit in special situations you might consider that option.)
There are many measures for binary variables; however, not all of them logically suit dummy binary variables, i.e. former nominal ones. You see, for a nominal variable, the fact "the 2 individuals match" and the fact "the 2 individuals don't match" are of equal importance. But consider the popular Jaccard measure $\frac{a}{a+b+c}$, where
a - number of dummies 1 for both individuals
b - number of dummies 1 for this and 0 for that
c - number of dummies 0 for this and 1 for that
d - number of dummies 0 for both
Here mismatch consists of two variants, $b$ and $c$; but for us, as already said, each of them is of the same importance as the match $a$. Hence we should double-weight $a$, and get the formula $\frac{2a}{2a+b+c}$, known as the Dice (after Lee Dice) or Czekanovsky-Sorensen measure. It is more appropriate for dummy variables. Indeed, the famous composite Gower coefficient (which is recommended for you with your nominal attributes) is exactly equal to Dice when all the attributes are nominal. Note also that for dummy variables the Dice measure (between individuals) = Ochiai measure (which is simply a cosine) = Kulczynsky 2 measure. And more for your information, 1-Dice = the binary Lance-Williams distance, known also as the Bray-Curtis distance. Look how many synonyms - you are sure to find something of that in your software!
The intuitive validity of the Dice similarity coefficient comes from the fact that it is simply the co-occurrence proportion (or relative agreement). For the data snippet above, take nominal column A and compute the 5x5 square symmetric matrix with entries 1 (both individuals fell in the same category) or 0 (not in the same category). Compute likewise the matrix for B.
A 1 2 3 4 5 B 1 2 3 4 5
_____________ _____________
1| 1 1| 1
2| 0 1 2| 0 1
3| 0 0 1 3| 0 1 1
4| 0 1 0 1 4| 1 0 0 1
5| 1 0 0 0 1 5| 1 0 0 0 1
Sum the corresponding entries of the two matrices and divide by 2 (the number of nominal variables) - here you are with the matrix of Dice coefficients. (So, actually you don't have to create dummies to compute Dice; with matrix operations you can probably do it faster the way just described.) See a related topic on Dice for the association of nominal attributes.
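The matrix shortcut just described is easy to check numerically. Here is a minimal sketch (numpy assumed; all variable names are mine, not from any package) that computes Dice from the dummies and confirms it equals the per-variable agreement proportion for the 5 individuals above:

```python
import numpy as np

A = np.array([2, 1, 3, 1, 2])        # nominal variable A for the 5 individuals
B = np.array([1, 2, 2, 1, 1])        # nominal variable B

# Dummy (one-hot) coding: 3 columns for A, 2 for B.
dummies = np.column_stack([(A == k).astype(int) for k in (1, 2, 3)] +
                          [(B == k).astype(int) for k in (1, 2)])

def dice(u, v):
    a = int(np.sum((u == 1) & (v == 1)))   # 1-1 co-occurrences
    b = int(np.sum((u == 1) & (v == 0)))   # 1-0 mismatches
    c = int(np.sum((u == 0) & (v == 1)))   # 0-1 mismatches
    return 2 * a / (2 * a + b + c)

n = len(A)
D = np.array([[dice(dummies[i], dummies[j]) for j in range(n)] for i in range(n)])

# Shortcut: Dice equals the proportion of nominal variables on which a pair agrees.
agree = ((A[:, None] == A[None, :]).astype(int) +
         (B[:, None] == B[None, :]).astype(int)) / 2
print(np.allclose(D, agree))   # True
```

For example, individuals 1 and 4 agree only on B, so their Dice similarity is 1/2.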
Albeit Dice is the most apparent measure to use when you want a (dis)similarity function between cases when attributes are categorical, other binary measures could be used - if you find that their formulas satisfy the considerations about your nominal data.
Measures like Simple Matching (SM, or Rand) $\frac{a+d}{a+b+c+d}$ which contain $d$ in the numerator won't suit you, on the grounds that they treat 0-0 (both individuals lack a specific attribute/category) as a match, which is obviously nonsense with originally nominal, qualitative features. So check the formula of the similarity or dissimilarity you plan to use with the sets of dummy variables: if it has or implies $d$ as the grounds for sameness, don't use that measure for nominal data. For example, squared Euclidean distance, whose formula becomes with binary data just $b+c$ (and which is synonymic in this case to Manhattan distance or Hamming distance), implicitly treats $d$ as the basis of sameness. Indeed, the squared Euclidean distance equals $p(1-SM)$, where $p$ is the number of binary attributes; thus Euclidean distance is informationally equivalent to SM and shouldn't be applied to originally nominal data.
But...
Having read the previous "theoretical" paragraph I realized that - in spite of what I wrote - the majority of binary coefficients (also those using $d$) will practically do most of the time. I established by checking that, with dummy variables obtained from a number of nominal ones, the Dice coefficient is strictly functionally related to a number of other binary measures (the acronym is the measure's keyword in SPSS):
relation with Dice
Similarities
Russell and Rao (simple joint prob) RR proportional
Simple matching (or Rand) SM linear
Jaccard JACCARD monotonic
Sokal and Sneath 1 SS1 monotonic
Rogers and Tanimoto RT monotonic
Sokal and Sneath 2 SS2 monotonic
Sokal and Sneath 4 SS4 linear
Hamann HAMANN linear
Phi (or Pearson) correlation PHI linear
Dispersion similarity DISPER linear
Dissimilarities
Euclidean distance BEUCLID monotonic
Squared Euclidean distance BSEUCLID linear
Pattern difference PATTERN monotonic (linear if the $d$ term is omitted from the formula)
Variance dissimilarity VARIANCE linear
Since in many applications of a proximity matrix, such as in many methods of cluster analysis, results will not change or will change smoothly under a linear (and sometimes even under a monotonic) transform of proximities, it appears one may be justified in using a vast number of binary measures besides Dice to get the same or similar results. But you should first consider/explore how the specific method (for example a linkage in hierarchical clustering) reacts to a given transformation of proximities.
If your planned clustering or MDS analysis is sensitive to monotonic transforms of distances, you had better refrain from using measures noted as "monotonic" in the table above (and thus yes, it isn't a good idea to use Jaccard similarity or non-squared Euclidean distance with dummy, i.e. former nominal, attributes).
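The "linear" claim for SM in the table can be verified numerically. With dummies produced from $m$ nominal variables, each case has exactly one 1 per variable, so $a+b = a+c = m$, which gives $SM = \frac{2m}{p}\,Dice + \frac{p-2m}{p}$ exactly. A minimal check (the category counts below are arbitrary assumptions):

```python
import random

random.seed(0)
cats = [3, 2, 4]               # assumed category counts of m = 3 nominal variables
m, p = len(cats), sum(cats)    # p = total number of dummy variables

def to_dummies(profile):
    # One-hot coding: one 1 per nominal variable.
    return [1 if profile[i] == j else 0 for i, k in enumerate(cats) for j in range(k)]

for _ in range(200):
    x = to_dummies([random.randrange(k) for k in cats])
    y = to_dummies([random.randrange(k) for k in cats])
    a = sum(u & v for u, v in zip(x, y))
    b = sum(u & (1 - v) for u, v in zip(x, y))
    c = sum(v & (1 - u) for u, v in zip(x, y))
    d = p - a - b - c
    dice = 2 * a / (2 * a + b + c)        # denominator is always 2m > 0
    sm = (a + d) / p
    assert abs(sm - ((2 * m / p) * dice + (p - 2 * m) / p)) < 1e-12
print("SM = (2m/p)*Dice + (p-2m)/p holds exactly")
```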
|
14,709
|
How to do regression with effect coding instead of dummy coding in R?
|
In principle, there are two types of contrast coding, with which the intercept will estimate the Grand Mean. These are sum contrasts and repeated contrasts (sliding differences).
Here's an example data set:
set.seed(42)
x <- data.frame(a = c(rnorm(100,2), rnorm(100,1),rnorm(100,0)),
b = rep(c("A", "B", "C"), each = 100))
The conditions' means:
tapply(x$a, x$b, mean)
A B C
2.03251482 0.91251629 -0.01036817
The Grand Mean:
mean(tapply(x$a, x$b, mean))
[1] 0.978221
You can specify the type of contrast coding with the contrasts parameter in lm.
Sum contrasts
lm(a ~ b, x, contrasts = list(b = contr.sum))
Coefficients:
(Intercept) b1 b2
0.9782 1.0543 -0.0657
The intercept is the Grand Mean. The first slope is the difference between the first factor level and the Grand Mean. The second slope is the difference between the second factor level and the Grand Mean.
Repeated contrasts
The function for creating repeated contrasts is part of the MASS package.
lm(a ~ b, x, contrasts = list(b = MASS::contr.sdif))
Coefficients:
(Intercept) b2-1 b3-2
0.9782 -1.1200 -0.9229
The intercept is the Grand Mean. The slopes indicate the differences between consecutive factor levels (2 vs. 1, 3 vs. 2).
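For comparison, the same sum-contrast fit can be reproduced outside R with plain least squares. This is a sketch with numpy (the coding matrix is the standard sum coding written out by hand, not any package's output):

```python
import numpy as np

rng = np.random.default_rng(42)
# Same design as the R example: 3 balanced groups with true means 2, 1, 0.
a = np.concatenate([rng.normal(2, 1, 100), rng.normal(1, 1, 100),
                    rng.normal(0, 1, 100)])
g = np.repeat([0, 1, 2], 100)

# Sum (deviation) coding: level A -> (1, 0), B -> (0, 1), C -> (-1, -1).
S = np.array([[1, 0], [0, 1], [-1, -1]])
X = np.column_stack([np.ones(len(a)), S[g]])

beta, *_ = np.linalg.lstsq(X, a, rcond=None)
grand_mean = np.mean([a[g == k].mean() for k in range(3)])
print(np.allclose(beta[0], grand_mean))                       # intercept = Grand Mean
print(np.allclose(beta[1], a[g == 0].mean() - grand_mean))    # slope 1 = A - Grand Mean
```

With a saturated one-way design the fitted group means equal the sample group means, so the intercept recovers the mean of the group means exactly.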
|
14,710
|
How to do regression with effect coding instead of dummy coding in R?
|
Nitpicking: if your professor told you to code your variables with (-1, 1), he told you to use effect coding, not effect sizes. At any rate, @user20650 is right. As usual, the UCLA stats help website has a useful page that explains how to do this with R.
|
14,711
|
Difference time series before Arima or within Arima
|
There are several issues here.
If you difference first, then Arima() will fit a model to the differenced data. If you let Arima() do the differencing as part of the estimation procedure, it will use a diffuse prior for the initialization. This is explained in the help file for arima(). So the results will be different due to the different ways the initial observation is handled. I don't think it makes much difference in terms of the quality of the estimation. However, it is much easier to let Arima() handle the differencing if you want forecasts or fitted values on the original (undifferenced) data.
Apart from differences in estimation, your two models are not equivalent because modB includes a constant while modA does not. By default, Arima() includes a constant when $d=0$ and no constant when $d>0$. You can override these defaults with the include.mean argument.
Fitted values for the original data are not equivalent to the undifferenced fitted values on the differenced data. To see this, note that the fitted values on the original data are given by
$$\hat{X}_t = X_{t-1} + \phi(X_{t-1}-X_{t-2})$$
whereas the fitted values on the differenced data are given by
$$\hat{Y}_t = \phi (X_{t-1}-X_{t-2})$$
where $\{X_t\}$ is the original time series and $\{Y_t\}$ is the differenced series. Thus $$\hat{X}_t - \hat{X}_{t-1} \ne \hat{Y}_t.$$
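A tiny numeric illustration of this inequality (the series and $\phi$ below are made-up values, not estimates from any model):

```python
import numpy as np

phi = 0.6                                       # assumed AR(1) coefficient
X = np.array([10.0, 11.5, 11.0, 12.3, 12.8])    # a made-up original series

# Fitted values on the original data: Xhat_t = X_{t-1} + phi*(X_{t-1} - X_{t-2})
Xhat = X[1:-1] + phi * (X[1:-1] - X[:-2])       # predictions for t = 3..5
# Fitted values on the differenced data: Yhat_t = phi*(X_{t-1} - X_{t-2})
Yhat = phi * (X[1:-1] - X[:-2])

# Differencing the fitted values does NOT recover the fitted differences:
print(np.allclose(np.diff(Xhat), Yhat[1:]))     # False
```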
|
14,712
|
Difference time series before Arima or within Arima
|
Sometimes you need to remove local means to make the series stationary. If the original series has an acf that doesn't die out this can be due to a level/step shift in the series. The remedy is to de-mean the series.
RESPONSE TO BOUNTY:
The way to get the same results/fitted values is to physically difference the original series Y(t) to get the first difference (dely), then estimate an AR(1) without a constant. This is tantamount to fitting an OLS model of the form dely(t) = B1*dely(t-1) + a(t) WITHOUT an intercept. The fitted values from this model, suitably integrated of order 1, will (I believe) give you the fitted values of the model [1-B][AR(1)]Y(t) = a(t). Most pieces of software, with the noted exception of AUTOBOX, will NOT ALLOW you to estimate an AR(1) model without a constant. Here is the equation for dely: dely = [(1 - .675B)]**-1 [A(T)], while the equation for Y was
[(1-B)]Y(T) = [(1 - .676B)]**-1 [A(T)]. Note the rounding error caused by the physical differencing of Y. Note that whether or not differencing is in effect (in the model), the user can select whether to include or to exclude the constant. Normal practice is to include a constant for a stationary (i.e. undifferenced) ARIMA model and to optionally include a constant when differencing is in the model. It appears that the alternative approach (Arima) forces a constant into a stationary model, which in my opinion has caused your dilemma.
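The recipe above - difference physically, fit the AR(1) slope by OLS without an intercept, then integrate the fitted differences back - can be sketched as follows (simulated data with numpy; nothing here comes from AUTOBOX or any specific package):

```python
import numpy as np

rng = np.random.default_rng(1)
n, phi = 300, 0.5
e = rng.normal(size=n)
dy = np.zeros(n)
for t in range(1, n):
    dy[t] = phi * dy[t - 1] + e[t]
Y = np.cumsum(dy)                     # an I(1) series with AR(1) increments

# Physically difference, then fit AR(1) WITHOUT an intercept by OLS:
d = np.diff(Y)
b1 = np.sum(d[1:] * d[:-1]) / np.sum(d[:-1] ** 2)   # slope of dely(t) on dely(t-1)

# Integrate the fitted differences back to the level of Y:
fitted_d = b1 * d[:-1]                # predictions of dely(t)
fitted_Y = Y[1:-1] + fitted_d         # Yhat(t) = Y(t-1) + B1*dely(t-1)
print(0.2 < b1 < 0.8)                 # b1 should be near the true phi = 0.5
```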
|
14,713
|
Difference time series before Arima or within Arima
|
I don't know why there would be a difference in the results unless somehow you are differencing more times one way than the other. For an ARIMA(p,d,q) the d differences are done first, before any model fitting. Then the stationary ARMA(p,q) model is fit to the differenced series. The assumption is that after the removal of polynomial trends in the series the remaining series is stationary. The number of differences corresponds to the order of the polynomial that you want to remove. So for a linear trend you just take one difference; for a quadratic trend you take two differences. I don't agree with most of what was said in John's answer.
|
14,714
|
Difference time series before Arima or within Arima
|
One reason to difference an I(1) series is to make it stationary. Presuming you have the correct specification for the ARIMA model, the residuals to the model will have the autoregressive and moving average components removed and should be stationary. In that respect it can make sense to use the residuals to the model, rather than differencing. However, in circumstances where you have a lot of data that you think is approximately I(1), some people will just difference the data rather than estimate the ARIMA model wholly. The ARIMA model can fit a whole host of time series problems where it may not make sense to difference. For instance, if the data experiences mean-reversion, this may not always be appropriate to difference since it may not be I(1).
|
14,715
|
Large scale text classification
|
It should be possible to make this work as long as the data is represented as a sparse data structure such as a scipy.sparse.csr_matrix instance in Python. I wrote a tutorial for working on text data. It is possible to further reduce the memory usage by leveraging the hashing trick: adapt it to use the HashingVectorizer instead of the CountVectorizer or the TfidfVectorizer. This is explained in the documentation section on text feature extraction.
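To illustrate the idea behind the hashing trick without depending on any particular library version, here is a toy sketch in plain Python (sklearn's actual HashingVectorizer additionally uses a signed hash to reduce collision bias, and normalizes):

```python
from collections import defaultdict

def hashing_vectorize(doc, n_features=2**10):
    """Map tokens to a fixed-size sparse count vector via the hashing trick.

    A minimal illustration only: no vocabulary is stored, so memory does not
    grow with the number of distinct tokens.
    """
    counts = defaultdict(int)
    for token in doc.lower().split():
        counts[hash(token) % n_features] += 1
    return dict(counts)

v = hashing_vectorize("the cat sat on the mat")
print(sum(v.values()))   # 6 tokens counted, regardless of vocabulary size
```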
Random Forests are in general much more expensive than linear models (such as linear support vector machines and logistic regression) and multinomial or Bernoulli naive Bayes, and for most text classification problems they do not bring significantly better predictive accuracy than simpler models.
If scikit-learn ends up not being able to scale to your problem, Vowpal Wabbit will do (and probably faster than sklearn), albeit it does not implement all the models you are talking about.
Edited in April 2015 to reflect the current state of the scikit-learn library and to fix broken links.
|
14,716
|
Large scale text classification
|
Gensim for Python is magic. And since it's in Python, you can use it in conjunction with @ogrisel's suggestion.
|
14,717
|
Large scale text classification
|
Not to toot my own horn, but I made a pretty popular video series on text analytics with Rapidminer. You can see it here:
http://vancouverdata.blogspot.com/2010/11/text-analytics-with-rapidminer-loading.html
You can likely avoid doing feature selection, just use a classifier that doesn't create a million * million matrix in memory :)
Logistic regression will choke on that many dimensions. Naive Bayes assumes independent dimensions, so you will be fine. SVM doesn't depend on the number of dimensions (but on the number of support vectors) so it will be fine as well.
300 is a lot of classes though. I would start with only a few and work your way up.
|
14,718
|
Large scale text classification
|
First, based on your comments, I would treat this as 300 binary (yes/no) classification problems. There are many easy-to-use open source binary classifier learners, and this lets you trade time for memory.
SVMs and logistic regression are probably the most popular approaches for text classification. Both can easily handle 1000000 dimensions, since modern implementations use sparse data structures, and include regularization settings that avoid overfitting.
Several open source machine learning suites, including WEKA and KNIME, include both SVMs and logistic regression. Standalone implementations of SVMs include libSVM and SVMlight. For logistic regression, I'll plug BXRtrain and BXRclassify, which I developed with Madigan, Genkin, and others. BXRclassify can build an in-memory index of thousands of logistic regression models and apply them simultaneously.
As for converting text to attribute vector form, I somehow always end up writing a little Perl to do that from scratch. :-) But I think the machine learning suites I mentioned include tokenization and vectorization code. Another route would be to go with more of a natural language toolkit like LingPipe, though that may be overkill for you.
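The one-binary-problem-per-label idea can be sketched in a few lines of Python. The toy "classifier" below is just a token-weight scorer of my own invention, standing in for whichever binary learner (SVM, logistic regression) you actually pick; the point is only the loop that trains one model per label:

```python
from collections import Counter

def train_binary(docs, positives):
    """Toy binary learner: token weights favouring tokens frequent in positives."""
    pos = Counter(t for d, y in zip(docs, positives) if y for t in d.split())
    neg = Counter(t for d, y in zip(docs, positives) if not y for t in d.split())
    return {t: pos[t] - neg.get(t, 0) for t in pos}

def one_vs_rest(docs, labelsets, all_labels):
    models = {}
    for label in all_labels:                      # e.g. your 300 labels
        y = [label in ls for ls in labelsets]     # binary yes/no target
        models[label] = train_binary(docs, y)     # one model in memory at a time
    return models

docs = ["cheap pills online", "meeting agenda attached", "cheap meds now"]
labels = [{"spam"}, {"work"}, {"spam"}]
models = one_vs_rest(docs, labels, ["spam", "work"])
score = sum(models["spam"].get(t, 0) for t in "cheap pills".split())
print(score > 0)   # True
```

This is the time-for-memory trade mentioned above: training is repeated per label, but each binary model can be trained and persisted independently.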
|
14,719
|
Large scale text classification
|
Since scikit-learn 0.13 there is indeed an implementation of the HashingVectorizer.
EDIT: Here is a full-fledged example of such an application from the sklearn docs.
Basically, this example demonstrates that you can classify text on data that cannot fit in the computer's main memory (but rather on disk / network / ...).
|
14,720
|
Generate uniformly distributed weights that sum to unity?
|
Choose $\mathbf{x} \in [0,1]^{n-1}$ uniformly (by means of $n-1$ uniform reals in the interval $[0,1]$). Sort the coefficients so that $0 \le x_1 \le \cdots \le x_{n-1}$. Set
$$\mathbf{w} = (x_1, x_2-x_1, x_3 - x_2, \ldots, x_{n-1} - x_{n-2}, 1 - x_{n-1}).$$
Because we can recover the sorted $x_i$ by means of the partial sums of the $w_i$, the mapping $\mathbf{x} \to \mathbf{w}$ is $(n-1)!$ to 1; in particular, its image is the $n-1$ simplex in $\mathbb{R}^n$. Because (a) each swap in a sort is a linear transformation, (b) the preceding formula is linear, and (c) linear transformations preserve uniformity of distributions, the uniformity of $\mathbf{x}$ implies the uniformity of $\mathbf{w}$ on the $n-1$ simplex. In particular, note that the marginals of $\mathbf{w}$ are not necessarily independent.
This 3D point plot shows the results of 2000 iterations of this algorithm for $n=3$. The points are confined to the simplex and are approximately uniformly distributed over it.
Because the execution time of this algorithm is $O(n \log(n)) \gg O(n)$, it is inefficient for large $n$. But this does answer the question! A better way (in general) to generate uniformly distributed values on the $n-1$-simplex is to draw $n$ uniform reals $(x_1, \ldots, x_n)$ on the interval $[0,1]$, compute
$$y_i = -\log(x_i)$$
(which makes each $y_i$ positive with probability $1$, whence their sum is almost surely nonzero) and set
$$\mathbf w = (y_1, y_2, \ldots, y_n) / (y_1 + y_2 + \cdots + y_n).$$
This works because each $y_i$ has a $\Gamma(1)$ (that is, standard exponential) distribution, which implies $\mathbf w$ has a Dirichlet$(1,1,\ldots,1)$ distribution--and that is uniform on the simplex.
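Both constructions can be sketched in a few lines of Python (a translation of the description above, not code from the answer):

```python
import math
import random

def simplex_sorted_gaps(n):
    """Uniform draw on the (n-1)-simplex: gaps between sorted uniforms."""
    x = sorted(random.random() for _ in range(n - 1))
    cuts = [0.0] + x + [1.0]
    return [b - a for a, b in zip(cuts, cuts[1:])]

def simplex_exponential(n):
    """Uniform draw on the (n-1)-simplex: normalized Exp(1) variates."""
    # 1 - random.random() lies in (0, 1], so the log is always defined.
    y = [-math.log(1.0 - random.random()) for _ in range(n)]
    s = sum(y)
    return [v / s for v in y]

w = simplex_exponential(5)
assert abs(sum(w) - 1.0) < 1e-12 and all(v > 0 for v in w)
```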
|
14,721
|
Generate uniformly distributed weights that sum to unity?
|
zz <- c(0, log(-log(runif(n-1))))
ezz <- exp(zz)
w <- ezz/sum(ezz)
The first entry is set to zero for identification; you would see the same done in multinomial logistic models. Of course, in multinomial models you would also have covariates under the exponents, rather than just the random zz draws. The distribution of the zz values is the extreme value distribution; you need this to ensure that the resulting weights behave like i.i.d. draws. I initially put rnormals there, but then had a gut feeling that it wasn't going to work.
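Note that exp(log(-log(u))) is simply -log(u), an Exp(1) draw, so apart from the first entry being pinned to exp(0) = 1 this is the exponential-normalization trick again. A quick Python check of that identity (a translation of the R snippet, not the original code):

```python
import math
import random

u = random.random() or 1e-12  # guard against the rare u == 0.0 draw
zz = math.log(-math.log(u))   # extreme-value (Gumbel-type) transform
ezz = math.exp(zz)            # exp undoes the outer log ...
assert abs(ezz - (-math.log(u))) < 1e-12  # ... leaving an Exp(1) draw
```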
|
14,722
|
Generate uniformly distributed weights that sum to unity?
|
The solution is obvious. The following MATLAB code provides the answer for 3 weights.
function [] = TESTGEN()
SZ = 1000;
V = zeros(1, 3);
VS = zeros(SZ, 3);
for NIT = 1:SZ
    V(1) = rand(1, 1);            % uniform draw on the range 0..1
    V(2) = rand(1, 1) * (1 - V(1));
    V(3) = 1 - V(1) - V(2);
    PERM = randperm(3);           % random permutation of values 1, 2, 3
    for NID = 1:3
        VS(NIT, NID) = V(PERM(NID));
    end
end
figure;
scatter3(VS(:, 1), VS(:, 2), VS(:, 3));
end
|
14,723
|
Resources for learning to create data visualizations?
|
FlowingData regularly discusses the tools its author uses. See, for instance:
40 Essential Tools and Resources to Visualize Data
What Visualization Tool/Software Should You Use? β Getting Started
He also shows in great detail how he makes graphics on occasion, such as:
How to Make a US County Thematic Map Using Free Tools
How to Make a Graph in Adobe Illustrator
How to Make a Heatmap β a Quick and Easy Solution
There are also other questions on this site:
Recommended visualization libraries for standalone applications
Web visualization libraries
IMO, try:
R and ggplot2: this is a good introductory video, but the ggplot2 website has lots of resources.
Processing: plenty of good tutorials on the homepage.
Protovis: also a plethora of great examples on the homepage.
You can use Adobe afterwards to clean these up.
You can also look at the R webvis package, although it isn't as complete as ggplot2. From R, you can run this command to see the Playfair's Wheat example:
install.packages("webvis")
library(webvis)
demo("playfairs.wheat")
Lastly, my favorite commercial applications for interactive visualization are:
Tableau
Spotfire
Qlikview
|
14,724
|
Resources for learning to create data visualizations?
|
The already-mentioned Processing has a nice set of books available. See: 1, 2, 3, 4, 5, 6, 7
You will find lots of stuff on the web to help you start with R. As next step then ggplot2 has excellent web documentation. I also found Hadley's book very helpful.
Python might be another way to go, especially with tools like:
matplotlib
NetworkX
igraph
Chaco
Mayavi
All projects are well documented on the web. You might also consider peeking into some books.
Lastly, Graphics of Large Datasets book could be also some help.
|
14,725
|
Resources for learning to create data visualizations?
|
You'll spend a lot of time getting up to speed with R.
RapidMiner is free and open source and graphical, and has plenty of good visualizations, and you can export them.
If you have money to spare, or are a university staff member or student, then JMP is also very freaking nice. It can make some very pretty graphs very, very easily, and can export to Flash, PNG, PDF, or what have you.
|
14,726
|
Resources for learning to create data visualizations?
|
Another good alternative is the protovis library http://vis.stanford.edu/protovis/
It is a very well crafted JavaScript library that can create some beautiful visualizations if you have the time and ability to write the modest amount of JavaScript code needed.
I also highly recommend Tableau http://www.tableausoftware.com. It is great for rapidly exploring data sets and creating many different visualizations.
Both products have roots at the Stanford Visualization Group.
|
14,727
|
Resources for learning to create data visualizations?
|
Many excellent answers have been given here, and the languages/libraries you choose to learn will be dependent on the type of visualization you would like to do.
However, if you use Python regularly then I highly recommend seaborn. It is very sophisticated when it comes to statistical data visualization, but also looks quite sophisticated from a presentation standpoint.
Let's take an example. Suppose you are trying to plot electricity consumption for a commercial building by month. A simple line graph could be generated in matplotlib for this purpose.
However, if we wanted to make the visualization more sophisticated and informative, we could generate a heatmap with seaborn:
A heatmap is just one example. Some other common uses with seaborn include:
KDE plots
Swarm plots
Violin plots
The idea behind seaborn is to present data in a more intuitive way than would be possible by using simpler charts, e.g. line, bar, pie, etc.
If it is of interest to you - you can find more information on seaborn here: https://seaborn.pydata.org/
|
14,728
|
Resources for learning to create data visualizations?
|
Here is a good set of links with resources for starting to learn:
http://blog.cartodb.com/learning-data-visualization
|
14,729
|
Resources for learning to create data visualizations?
|
R is great, but the problem is not that R is difficult to learn; it's that the single-letter name makes its documentation nearly impossible to search (any other name, like "Rq", would have been better). So when you hit a problem, searching for a solution is a nightmare, and the documentation itself is not great either. Matlab or Octave would also work, but producing those plots in R or Matlab would be very, very tedious.
IMHO, post-processing visuals is the best route. A lot of the graphics on FlowingData are put through Adobe Illustrator or GIMP. It is faster: once you get the structure of the plot, change the details in an editor. Using R as an editor does not give you the flexibility you want; you will find yourself searching for new packages all the time.
|
14,730
|
Resources for learning to create data visualizations?
|
Here's a YouTube tutorial on D3.js that teaches the basics of HTML, SVG, CSS and JavaScript, as well as how to load data and create a bar chart, line chart, and scatter plot with D3.js.
|
14,731
|
Resources for learning to create data visualizations?
|
Here's a practical resource to get you started with d3. It includes demo code and a step-by-step example of how to load, organize, and visualize a dataset in d3.
https://www.edx.org/course/web-app-development-with-the-power-of-nodejs
|
14,732
|
Resources for learning to create data visualizations?
|
There are infinite resources, but you can narrow them down based on how you want your data to be transformed, how many data sources you're dealing with, how they need to be shared, etc.
Here's a guide on how to pick the right resource that might help point you in the right direction.
|
14,733
|
Why does using pseudo-labeling non-trivially affect the results?
|
Pseudo-labeling doesn't work on the given toy problem
Oliver et al. (2018) evaluated different semi-supervised learning algorithms. Their first figure shows how pseudo-labeling (and other methods) perform on the same toy problem as in your question (called the 'two-moons' dataset):
The plot shows the labeled and unlabeled datapoints, and the decision boundaries obtained after training a neural net using different semi-supervised learning methods. As you suspected, pseudo-labeling doesn't work well in this situation. They say that pseudo-labeling "is a simple heuristic which is widely used in practice, likely because of its simplicity and generality". But: "While intuitive, it can nevertheless produce incorrect results when the prediction function produces unhelpful targets for [the unlabeled data], as shown in fig. 1."
Why and when does pseudo-labeling work?
Pseudo-labeling was introduced by Lee (2013), so you can find more details there.
The cluster assumption
The theoretical justification Lee gave for pseudo-labeling is that it's similar to entropy regularization. Entropy regularization (Grandvalet and Bengio 2005) is another semi-supervised learning technique, which encourages the classifier to make confident predictions on unlabeled data. For example, we'd prefer an unlabeled point to be assigned a high probability of being in a particular class, rather than diffuse probabilities spread over multiple classes. The purpose is to take advantage of the assumption that the data are clustered according to class (called the "cluster assumption" in semi-supervised learning). So, nearby points have the same class, and points in different classes are more widely separated, such that the true decision boundaries run through low-density regions of input space.
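As a concrete illustration (a minimal sketch, not code from either paper), the entropy penalty for a batch of predicted class probabilities can be computed as:

```python
import math

def entropy_penalty(probs):
    """Mean Shannon entropy of predicted class distributions.

    Adding this term to the loss pushes the classifier toward
    confident (low-entropy) predictions on unlabeled points.
    """
    total = 0.0
    for p in probs:
        total += -sum(pi * math.log(pi) for pi in p if pi > 0)
    return total / len(probs)

confident = [[0.95, 0.05], [0.99, 0.01]]
diffuse = [[0.5, 0.5], [0.6, 0.4]]
assert entropy_penalty(confident) < entropy_penalty(diffuse)
```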
Why pseudo-labeling might fail
Given the above, it would seem reasonable to guess that the cluster assumption is a necessary condition for pseudo-labeling to work. But, clearly it's not sufficient, as the two-moons problem above does satisfy the cluster assumption, but pseudo-labeling doesn't work. In this case, I suspect the problem is that there are very few labeled points, and the proper cluster structure can't be identified from these points. So, as Oliver et al. describe (and as you point out in your question), the resulting pseudo-labels guide the classifier toward the wrong decision boundary. Perhaps it would work given more labeled data. For example, contrast this to the MNIST case described below, where pseudo-labeling does work.
Where it works
Lee (2013) showed that pseudo-labeling can help on the MNIST dataset (with 100-3000 labeled examples). In fig. 1 of that paper, you can see that a neural net trained on 600 labeled examples (without any semi-supervised learning) can already recover cluster structure among classes. It seems that pseudo-labeling then helps refine the structure. Note that this is unlike the two-moons example, where several labeled points were not enough to learn the proper clusters.
The paper also mentions that results were unstable with only 100 labeled examples. This again supports the idea that pseudo-labeling is sensitive to the initial predictions, and that good initial predictions require a sufficient number of labeled points.
Lee also showed that unsupervised pre-training using denoising autoencoders helps further, but this appears to be a separate way of exploiting structure in the unlabeled data; unfortunately, there was no comparison to unsupervised pre-training alone (without pseudo-labeling).
Grandvalet and Bengio (2005) reported that pseudo-labeling beats supervised learning on the CIFAR-10 and SVHN datasets (with 4000 and 1000 labeled examples, respectively). As above, this is much more labeled data than the 6 labeled points in the two-moons problem.
References
Grandvalet and Bengio (2005). Semi-supervised learning by entropy minimization.
Lee (2013). Pseudo-Label: The Simple and Efficient Semi-Supervised Learning Method for Deep Neural Networks.
Oliver et al. (2018). Realistic Evaluation of Semi-Supervised Learning Algorithms.
|
14,734
|
Why does using pseudo-labeling non-trivially affect the results?
|
What you may be overlooking in how self-training works is that:
It's iterative, not one-shot.
You use a classifier that returns probabilistic values. At each iteration, you only add pseudo-labels for the cases your algorithm is most certain about.
In your example, perhaps the first iteration is only confident enough to label one or two points very near each of the labeled points. In the next iteration the boundary will rotate slightly to accommodate these four to six labeled points, and if it's non-linear may also begin to bend slightly. Repeat.
It's not guaranteed to work. It depends on your base classifier, your algorithm (how certain you have to be in order to assign a pseudo-label, etc.), your data, and so on.
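The iterative, confidence-thresholded loop described above can be sketched as follows. This is a minimal illustration, not a reference implementation: the logistic-regression base learner, the 0.9 confidence threshold, and the stopping rules are all arbitrary choices.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def self_train(X_lab, y_lab, X_unlab, threshold=0.9, max_iter=20):
    """Iteratively pseudo-label only the unlabeled points the current
    model is most certain about, then refit on the enlarged set."""
    model = LogisticRegression().fit(X_lab, y_lab)
    for _ in range(max_iter):
        if len(X_unlab) == 0:
            break
        proba = model.predict_proba(X_unlab)
        confident = proba.max(axis=1) >= threshold
        if not confident.any():
            break  # nothing we are sure enough about; stop early
        pseudo = model.classes_[proba[confident].argmax(axis=1)]
        X_lab = np.vstack([X_lab, X_unlab[confident]])
        y_lab = np.concatenate([y_lab, pseudo])
        X_unlab = X_unlab[~confident]
        model = LogisticRegression().fit(X_lab, y_lab)  # boundary moves
    return model
```

Each pass only adopts points in the current model's confident regions, which is why the boundary can creep outward over iterations rather than jumping in one shot.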
There are also other algorithms that are more powerful if you can use them. What I believe you're describing is self-training, which is easy to code up, but you're using a single classifier that's looking at the same information repeatedly. Co-training uses multiple classifiers that each look at different information for each point. (This is somewhat analogous to Random Forests.) There are also other semi-supervised techniques -- such as those that explicitly cluster -- though there is no overall "this always works and this is the winner".
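Co-training as described can be sketched roughly like this (a hypothetical minimal round; the two feature views, the logistic-regression base learners, and the 0.9 threshold are my own illustrative choices):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def co_training_round(X_lab, y_lab, X_unlab, view1, view2, threshold=0.9):
    """One co-training round: two classifiers, each restricted to its own
    feature view, pseudo-label the points they are most certain about."""
    views = (view1, view2)
    models = [LogisticRegression().fit(X_lab[:, v], y_lab) for v in views]
    probas = [m.predict_proba(X_unlab[:, v]) for m, v in zip(models, views)]
    # adopt a point if either view is confident enough about it,
    # and let the more confident view supply the label
    conf = [p.max(axis=1) for p in probas]
    take = np.maximum(conf[0], conf[1]) >= threshold
    pick_first = conf[0] >= conf[1]
    labels = np.where(pick_first,
                      models[0].classes_[probas[0].argmax(axis=1)],
                      models[1].classes_[probas[1].argmax(axis=1)])
    X_new = np.vstack([X_lab, X_unlab[take]])
    y_new = np.concatenate([y_lab, labels[take]])
    return X_new, y_new, X_unlab[~take]
```

The appeal over self-training is that each classifier gets taught by a model that looked at *different* evidence, so its own mistakes are less likely to be simply reinforced.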
IN RESPONSE to the comment: I'm not an expert in this field. We see it as very applicable to what we typically do with clients, so I'm learning and don't have all the answers.
The top hit when I search for semi-supervised learning overviews is:
Semi-Supervised Learning Literature Survey, from 2008. That's ages ago, computer-wise, but it talks about the things I've mentioned here.
I hear you that a classifier could rate unlabeled points that are farthest from the labeled nodes with the most certainty. On the other hand, our intuitions may fool us. For example, let's consider the graphic you got from Wikipedia with the black, white, and gray nodes.
First, this is in 2D and most realistic problems will be in higher dimensions, where our intuition often misleads us. High-dimensional space acts differently in many ways -- some negative and some actually helpful.
Second, we might guess that in the first iteration the two right-most, lower-most gray points would be labeled as black, since the black labeled point is closer to them than the white labeled point. But if that happened on both sides, the vertical decision boundary would still tilt and no longer be vertical. At least in my imagination, if it were a straight line it would go down the diagonal empty space between the two originally-labeled points. It would still split the two crescents incorrectly, but it would be more aligned to the data now. Continued iteration -- particularly with a non-linear decision boundary -- might yield a better answer than we anticipate.
Third, I'm not sure that once-labeled, always-labeled is how it should actually work. Depending on how you do it and how the algorithm works, you might end up first tilting the boundary while bending it (assuming non-linear), and then some of the misclassified parts of the crescents might shift their labels.
My gut is that those three points, combined with appropriate (probably higher-dimensional) data and appropriate classifiers, can do better than straight-up supervised learning with a very small number of training (labeled) samples. No guarantees, and in my experiments I've found -- I blame it on datasets that are too simple -- that semi-supervised may only marginally improve over supervised and may at times fail badly. Then again, I'm playing with two algorithms that I've created that may or may not actually be good.
|
14,735
|
Why does using pseudo-labeling non-trivially affect the results?
|
Warning, I am not an expert on this procedure. My failure to produce good results is not proof that the technique cannot be made to work. Furthermore, your image has the general description of "semi-supervised" learning, which is a broad area with a variety of techniques.
I agree with your intuition, I'm not seeing how a technique like this could work out of the box. In other words, I think you'd need a lot of effort to make it work well for a specific application, and that effort would not necessarily be helpful in other applications.
I tried two different instances, one with a banana-shaped dataset like the one in the example image, and another easier dataset with two simple normally distributed clusters. In both cases I could not improve on the initial classifier.
As a small attempt to encourage things, I added noise to all predicted probabilities with the hope that this would cause better outcomes.
In the first example I re-created the above image as faithfully as I could. I don't think pseudo-labeling will be able to help at all here.
The second example is much easier, but even here it fails to improve on the initial classifier. I specifically chose the one labeled point from the center of the left class and from the right side of the right class, hoping the boundary would shift in the correct direction -- no such luck.
Code for example 1 (example 2 is similar enough that I won't duplicate here):
import numpy as np
from sklearn.ensemble import RandomForestClassifier
import matplotlib.pyplot as plt
import seaborn
np.random.seed(2018-10-1)
N = 1000
_x = np.linspace(0, np.pi, num=N)
x0 = np.array([_x, np.sin(_x)]).T
x1 = -1 * x0 + [np.pi / 2, 0]
scale = 0.15
x0 += np.random.normal(scale=scale, size=(N, 2))
x1 += np.random.normal(scale=scale, size=(N, 2))
X = np.vstack([x0, x1])
proto_0 = np.array([[0], [0]]).T # the single "labeled" 0
proto_1 = np.array([[np.pi / 2], [0]]).T # the single "labeled" 1
model = RandomForestClassifier()
model.fit(np.vstack([proto_0, proto_1]), np.array([0, 1]))
for itercount in range(100):
    # column 1 is the probability of the positive class both for the
    # initial {0, 1} labels and for the boolean pseudo-labels below,
    # so the meaning of "labels" stays consistent across refits
    labels = model.predict_proba(X)[:, 1]
    labels += (np.random.random(labels.size) - 0.5) / 10  # add some noise
    labels = labels > 0.5
    model = RandomForestClassifier()
    model.fit(X, labels)
f, axs = plt.subplots(1, 2, squeeze=True, figsize=(10, 5))
axs[0].plot(x0[:, 0], x0[:, 1], '.', alpha=0.25, label='unlabeled x0')
axs[0].plot(proto_0[:, 0], proto_0[:, 1], 'o', color='royalblue', markersize=10, label='labeled x0')
axs[0].plot(x1[:, 0], x1[:, 1], '.', alpha=0.25, label='unlabeled x1')
axs[0].plot(proto_1[:, 0], proto_1[:, 1], 'o', color='coral', markersize=10, label='labeled x1')
axs[0].legend()
axs[1].plot(X[~labels, 0], X[~labels, 1], '.', alpha=0.25, label='predicted class 0')
axs[1].plot(X[labels, 0], X[labels, 1], '.', alpha=0.25, label='predicted class 1')
axs[1].plot([np.pi / 4] * 2, [-1.5, 1.5], 'k--', label='halfway between labeled data')
axs[1].legend()
plt.show()
|
14,736
|
Why does using pseudo-labeling non-trivially affect the results?
|
Here is my guess (I do not know much about this topic either, just wanted to add my two cents to this discussion).
I think that you're right, there's no point in training a classical model and using its predictions as data, because as you say, there's no incentive to the optimiser to do any better. I would guess that randomised-starting algorithms are more likely to find the same optimum because they'd be "more sure" that the previously found optimum is correct, due to the larger data set, but this is irrelevant.
That said, the first answer you received has a point - that example on Wikipedia talks about clustering, and I think that makes all the difference. When you've got unlabelled data, you essentially have a bunch of unlabelled points lying on some shared "latent feature space" as the other labelled ones. You can only really do better than a classification algorithm trained on the labelled data, if you can uncover the fact that the unlabelled points can be separated and then classified based on what class the labelled points belong to, on this latent feature space.
What I mean is, you need to do this:
$$\text{labelled data} \rightarrow \text{clustering} \rightarrow \text{classification}$$
... and then repeat with the unlabelled data. Here, the learned cluster boundaries will not be the same, because clustering doesn't care about class labels; all it does is transform the feature space. The clustering generates a latent feature space on which the classification boundary is learned, and that boundary depends only on the labelled data.
Algorithms that do not perform any sort of clustering, I believe, will not be able to change their optimum based on the unlabelled data set.
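The cluster-then-classify pipeline above can be sketched as follows (a minimal illustration; KMeans and the majority-vote label assignment are my own choices, not from any particular paper):

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_then_classify(X_all, X_lab, y_lab, n_clusters=2, seed=0):
    """Cluster ALL points while ignoring labels, then name each cluster
    after the majority label of the labelled points that land in it."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_all)
    lab_clusters = km.predict(X_lab)
    cluster_to_class = {}
    for c in range(n_clusters):
        members = y_lab[lab_clusters == c]
        if len(members):
            vals, counts = np.unique(members, return_counts=True)
            cluster_to_class[c] = vals[counts.argmax()]
    return km, cluster_to_class

def classify(km, cluster_to_class, X):
    # the decision boundary comes from the clustering (which used the
    # unlabelled data); only the cluster *names* come from the labels
    return np.array([cluster_to_class[c] for c in km.predict(X)])
```

Note that the unlabelled data genuinely moves the boundary here, which is exactly the property a plain retrain-on-own-predictions scheme lacks.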
By the way, the image that you linked does a fair job, I think, of explaining what's going on here: a decision boundary is learned based solely on the clustering algorithm. You have no idea what the correct classes are here - it may be the case that they're all random - we don't know. All we know now is that there seems to be some structure in the feature space, and there seems to be some mapping from the feature space to the class labels.
I don't really have references, but in this Reddit post, as I understand it, there's a discussion about a GAN performing semi-supervised learning. My hunch is that it implicitly performs a clustering, followed by classification.
|
14,737
|
Why do temporal difference (TD) methods have lower variance than Monte Carlo methods?
|
The difference between the algorithms is how they set a new value target based on experience.
Using action values to make it a little more concrete, and sticking with on-policy evaluation (not control) to keep arguments simple, then the update to estimate $Q(S_t,A_t)$ takes the same general form for both TD and Monte Carlo:
$$Q(S_t,A_t) \leftarrow Q(S_t,A_t) + \alpha(X - Q(S_t,A_t))$$
Where $X$ is the value of an estimate for the true value of $Q(S_t,A_t)$ gained through some experience. When discussing whether an RL value-based technique is biased or has high variance, the part that has these traits is whatever stands for $X$ in this update.
For Monte Carlo techniques, the value of $X$ is estimated by following a sample trajectory starting from $(S_t,A_t)$ and adding up the rewards to the end of an episode (at time $\tau$):
$$\sum_{k=0}^{\tau-t-1} \gamma^kR_{t+k+1}$$
For temporal difference, the value of $X$ is estimated by taking one step of sampled reward, and bootstrapping the estimate using the Bellman equation for $q_{\pi}(s,a)$:
$$R_{t+1} + \gamma Q(S_{t+1}, A_{t+1})$$
Bias
In both cases, the true value function that we want to estimate is the expected return after taking the action $A_t$ in state $S_t$ and following policy $\pi$. This can be written:
$$q(s,a) = \mathbb{E}_{\pi}[\sum_{k=0}^{\tau-t-1} \gamma^kR_{t+k+1}|S_t=s, A_t=a]$$
That should look familiar. The Monte Carlo target for $X$ is clearly a direct sample of this value, and has the same expected value that we are looking for. Hence it is not biased.
TD learning however, has a problem due to initial states. The bootstrap value of $Q(S_{t+1}, A_{t+1})$ is initially whatever you set it to, arbitrarily at the start of learning. This has no bearing on the true value you are looking for, hence it is biased. Over time, the bias decays exponentially as real values from experience are used in the update process. At least that is true for basic tabular forms of TD learning. When you add a neural network or other approximation, then this bias can cause stability problems, causing an RL agent to fail to learn.
Variance
Looking at how each update mechanism works, you can see that the TD target is exposed to three factors that can each vary (in principle, depending on the environment) over a single time step:
What reward $R_{t+1}$ will be returned
What the next state $S_{t+1}$ will be.
What the policy will choose for $A_{t+1}$
Each of these factors may increase the variance of the TD target value. However, the bootstrapping mechanism means that there is no direct variance from other events on the trajectory. That's it.
In comparison, the Monte Carlo return depends on every reward, state transition, and policy decision from $(S_t, A_t)$ up to $S_{\tau}$. As this typically spans multiple time steps, the variance will also be a multiple of that seen in TD learning on the same problem. This is true even in deterministic environments with sparse deterministic rewards, as an exploring policy must be stochastic, which injects some variance at every time step involved in the sampled value estimate.
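The variance gap can be checked numerically on a toy problem (my own construction, not from the text above): a 5-step episode whose per-step rewards are 1 plus unit Gaussian noise, with the TD bootstrap replaced by the true remaining value so that only its one-step noise remains.

```python
import numpy as np

rng = np.random.default_rng(0)
n_episodes, T, gamma = 100_000, 5, 1.0
# reward at each step: 1 plus unit Gaussian noise
rewards = 1.0 + rng.normal(size=(n_episodes, T))

# Monte Carlo target: the full sampled return, noise from all T steps
mc_targets = rewards.sum(axis=1)

# TD target: one sampled reward plus a fixed bootstrap estimate of the
# remaining return (here the true value, T - 1, for simplicity)
td_targets = rewards[:, 0] + gamma * (T - 1)

print(mc_targets.var())  # close to 5: the 5 steps' variances add up
print(td_targets.var())  # close to 1: only one step of noise
```

Both targets have the same mean here (the true value of 5), which matches the bias discussion: MC is unbiased but noisy, while this idealized TD target is unbiased only because it was handed the correct bootstrap value.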
|
14,738
|
Biased Data in Machine Learning
|
You are right to be concerned - even the best models can fail spectacularly if the distribution of out-of-sample data differs significantly from the distribution of the data that the model was trained/tested on.
I think the best you can do is train a model on the labelled data that you have, but try to keep the model interpretable. That probably means being limited to relatively simple models. Then, you could attempt to reason how the rules learnt by your model might interact with the prior rules you had, in an attempt to estimate how well your model might work on the unfiltered population.
For example - suppose, your model finds that in your labelled dataset, the younger the client is, the more likely they were to default. Then it may be reasonable to assume that your model will work well if you removed the prior filter of "If age of client < 18 years, then do not accept".
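A sketch of this approach with a shallow decision tree (all names and the toy data are hypothetical; the point is only that the learned rules are printable and can be compared with the prior filters):

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
# hypothetical post-filter dataset: only clients aged 18+ were accepted
age = rng.integers(18, 70, size=500)
income = rng.normal(40_000, 10_000, size=500)
# toy ground truth echoing the example: younger clients default more
default = age < 25

X = np.column_stack([age, income])
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, default)

# print the learned rules so they can be compared against the prior
# filter "if age of client < 18 years, then do not accept"
print(export_text(tree, feature_names=["age", "income"]))
```

If the tree's own age split agrees with (or sits near) the hand-written cutoff, that is weak evidence the model would extrapolate sensibly if the filter were removed; if the rules conflict, extrapolation is riskier.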
|
14,739
|
Biased Data in Machine Learning
|
I'm not sure I entirely understand that question, but so far as I understand it you're asking how to train a classifier to predict on samples lying outside the domain of the samples it has already seen. This is, generally speaking and so far as I know, not possible. Machine learning theory is based on the idea of "empirical risk minimization," which boils down to assuming that your training set is a good approximation of your true distribution over samples and labels. If that assumption is violated, there aren't really any guarantees.
You mention unlabeled data -- I don't know if this would solve your problem, but semi-supervised learning has many methods for trying to learn classifiers given both labeled and unlabeled data, and you may want to consider looking into those (for example, transductive SVMs).
|
14,740
|
Biased Data in Machine Learning
|
Your rules may give you a way to perform data augmentation. Copy a positive sample, change the age to 17, and then mark it as a negative sample.
This procedure won't necessarily be trivial or useful for all datasets. I work with NLP data and it's tricky to do well in that domain. For example, if you have other features correlated with age, you may end up with unrealistic samples. However, it provides an avenue to expose the system to something like the samples that didn't make it into the dataset.
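A minimal sketch of that augmentation (the columns are hypothetical; here "approve" is the label and the prior rule is "reject anyone under 18"):

```python
import pandas as pd

df = pd.DataFrame({
    "age":     [25, 34, 41],
    "income":  [30_000, 52_000, 61_000],
    "approve": [1, 1, 0],
})

# copy the positive samples, push them under the rule's age cutoff,
# and flip the label: the rule says these would have been rejected
augmented = df[df["approve"] == 1].copy()
augmented["age"] = 17
augmented["approve"] = 0

df_aug = pd.concat([df, augmented], ignore_index=True)
print(df_aug)
```

As noted above, if other features correlate with age, copies like a 17-year-old with a mid-career income may be unrealistic, so more than the single edited column may need adjusting.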
|
14,741
|
Biased Data in Machine Learning
|
One thing that has worked for us in a similar situation is doing a bit of reinforcement learning (explore and exploit). On top of the rule-based model, we ran an explorer which would, with a small likelihood, change the response of the model: in occasional cases where the model would not recommend a card to a 17-year-old, the explorer would overturn the model's decision and issue a card. These occasional cases generate training data for a future model, which can decide whether to recommend cards to 17-year-olds based on whether the cards the explorer issued to 17-year-olds defaulted. In this way you can build systems that work outside the biases of your existing model.
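A minimal sketch of the explorer logic (Python; the epsilon value and the boolean decision representation are illustrative assumptions):

```python
import random

def explorer_decision(model_decision, epsilon=0.05, rng=random):
    """With probability epsilon, overturn the rule-based model's decision.

    Overturned cases are flagged so their eventual outcomes (default or
    not) can later serve as less-biased training data for a learned model.
    """
    if rng.random() < epsilon:
        return (not model_decision), True  # decision flipped, marked as explored
    return model_decision, False

# Occasionally issue a card even though the rules say "no"
rng = random.Random(0)
results = [explorer_decision(False, epsilon=0.1, rng=rng) for _ in range(1000)]
explored = [decision for decision, flipped in results if flipped]
```

Roughly 10% of the 1000 rejections get overturned here; the outcomes of those issued cards are the learning signal for the next model.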
|
14,742
|
Biased Data in Machine Learning
|
From a practical standpoint it is difficult/unreasonable to ask a model to predict something on cases that are not possible in the current system (no free lunch).
One way to circumvent that problem is to add randomization to the current (deployed) system, e.g. to add the possibility to bypass (some of) the rules with a small, controlled probability (and hence a predictable cost).
Once you have managed to convince the people responsible for the system to do that, you can use off-policy evaluation methods like importance sampling to ask "what-if" questions, e.g. what the expected credit risk would be if we allowed people who are currently dropped by the rules to take out credit. One can even simulate the effect of your (biased) prediction model on that population. A good reference for this kind of method is Bottou's paper on counterfactual learning and reasoning.
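A toy sketch of the importance-sampling (inverse propensity scoring) estimator (Python; the log layout and policy interface are assumptions made for illustration):

```python
def ips_estimate(logs, target_policy):
    """Estimate the expected reward of a candidate policy from logged data.

    logs: (context, action, reward, logging_prob) tuples, where
    logging_prob is the probability with which the randomized deployed
    system chose the logged action.
    target_policy(context, action): probability the candidate policy
    would choose that action in that context.
    """
    total = 0.0
    for context, action, reward, logging_prob in logs:
        weight = target_policy(context, action) / logging_prob
        total += weight * reward
    return total / len(logs)

# Logging policy flips a fair coin; the candidate policy always approves (action 1)
logs = [(None, 1, 1.0, 0.5), (None, 0, 0.0, 0.5),
        (None, 1, 1.0, 0.5), (None, 0, 0.0, 0.5)]
always_approve = lambda context, action: 1.0 if action == 1 else 0.0
```

In this toy example `ips_estimate(logs, always_approve)` recovers 1.0, the true value of always approving, even though the logging policy only approved half the time.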
|
14,743
|
Biased Data in Machine Learning
|
The classical statistical answer is that if the selection process is captured in the data and described by the model, or if selection is at random, then a parametric model accounts for it correctly. See Donald Rubin's paper "Inference and Missing Data" (1976). You do need to include the mechanism of data selection in your model. This is a field where parametric inference should do better than pure machine learning.
|
14,744
|
Biased Data in Machine Learning
|
This is akin to the after-life dilemma: what ratio of good and bad deeds (data) is sufficient to get to heaven instead of hell (class), after one dies (the filter!). Herein, death serves as the filter, producing missing values for a supervised learning scheme.
I want to disambiguate between the missing-value problem and the 'biased data' problem.
There is no such thing as biased data; there is such a thing as a 'biased model' explaining said data, but the data itself isn't biased, it is merely missing.
If the missing data is meaningfully correlated with observable data, then it is entirely possible to train an unbiased model and achieve good predictive results.
If the missing data is completely uncorrelated with observable data, then it's a case of 'you don't know what you don't know'. You can use neither supervised nor unsupervised learning methods. The problem is outside the realm of data science.
Therefore, for the sake of a meaningful solution, let's assume that the missing data is correlated with observable data. We'll exploit said correlation.
There are several data mining algorithms that attempt to solve such a problem. You can try ensemble methods like bagging and boosting, or frequent-pattern-mining algorithms like Apriori and FP-growth. You can also explore methods in robust statistics.
|
14,745
|
Derivative of a Gaussian Process
|
The short answer: Yes, if your Gaussian Process (GP) is differentiable, its derivative is again a GP. It can be handled like any other GP and you can calculate predictive distributions.
But since a GP $G$ and its derivative $G'$ are closely related you can infer properties of either one from the other.
Existence of $G'$
A zero-mean GP with covariance function $K$ is differentiable (in mean square) if $K'(x_1, x_2)=\frac{\partial^2 K}{\partial x_1 \partial x_2}(x_1,x_2)$ exists. In that case the covariance function of $G'$ is equal to $K'$. If the process is not zero-mean, then the mean function needs to be differentiable as well. In that case the mean function of $G'$ is the derivative of the mean function of $G$.
(For more details check for example Appendix 10A of A. Papoulis "Probability, random variables and stochastic processes")
Since the squared-exponential (Gaussian) kernel is differentiable to any order, this is no problem for you.
Predictive distribution for $G'$
This is straightforward if you just want to condition on observations of $G'$: If you can calculate the respective derivatives you know mean and covariance function so that you can do inference with it in the same way as you would do it with any other GP.
But you can also derive a predictive distribution for $G'$ based on observations of $G$. You do this by calculating the posterior of $G$ given your observations in the standard way and then applying the differentiation result above to the covariance and mean function of the posterior process.
This works in the same manner the other way around, i.e. you condition on observations of $G'$ to infer a posterior of $G$. In that case the covariance function of $G$ is given by integrals of $K'$ and might be hard to calculate but the logic is really the same.
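To make the first point concrete for the squared-exponential kernel: differentiating $K(x_1,x_2)=\exp(-(x_1-x_2)^2/2\ell^2)$ gives $K'(x_1,x_2) = \left(\ell^{-2} - (x_1-x_2)^2\ell^{-4}\right)K(x_1,x_2)$. A small Python sketch that checks this analytic form against a finite-difference approximation of the mixed derivative:

```python
import math

def se_kernel(x1, x2, ell=1.0):
    """Squared-exponential covariance K(x1, x2) = exp(-(x1 - x2)^2 / (2 ell^2))."""
    return math.exp(-(x1 - x2) ** 2 / (2.0 * ell ** 2))

def deriv_kernel(x1, x2, ell=1.0):
    """Covariance of the derivative process G': d^2 K / (dx1 dx2), in closed form."""
    r = x1 - x2
    return (1.0 / ell ** 2 - r ** 2 / ell ** 4) * se_kernel(x1, x2, ell)

def deriv_kernel_fd(x1, x2, ell=1.0, h=1e-4):
    """Central finite-difference approximation of the mixed second derivative."""
    return (se_kernel(x1 + h, x2 + h, ell) - se_kernel(x1 + h, x2 - h, ell)
            - se_kernel(x1 - h, x2 + h, ell) + se_kernel(x1 - h, x2 - h, ell)) / (4 * h * h)
```

The closed form and the finite-difference check agree to several decimal places, which is a useful sanity test before plugging $K'$ into a GP library.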
|
14,746
|
Derivative of a Gaussian Process
|
It is. See Rasmussen and Williams, section 9.4. Also, some authors argue strongly against the squared-exponential kernel: it is too smooth.
|
14,747
|
Clustering of very skewed, count data: any suggestions to go about (transform etc)?
|
It is not wise to transform the variables individually, because they belong together (as you noticed), nor to do k-means, because the data are counts (you might, but k-means is better suited to continuous attributes such as length, for example).
In your place, I would compute the chi-square distance (perfect for counts) between every pair of customers, based on the variables containing counts. Then do hierarchical clustering (for example, the average linkage or complete linkage method - they do not compute centroids and therefore don't require euclidean distance) or some other clustering that works with arbitrary distance matrices.
Copying example data from the question:
-----------------------------------------------------------
customer | count_red | count_blue | count_green |
-----------------------------------------------------------
c0 | 12 | 5 | 0 |
-----------------------------------------------------------
c1 | 3 | 4 | 0 |
-----------------------------------------------------------
c2 | 2 | 21 | 0 |
-----------------------------------------------------------
c3 | 4 | 8 | 1 |
-----------------------------------------------------------
Consider pair c0 and c1 and compute Chi-square statistic for their 2x3 frequency table. Take the square root of it (like you take it when you compute usual euclidean distance). That is your distance. If the distance is close to 0 the two customers are similar.
It may bother you that the row sums in your table differ, which affects the chi-square distance when you compare c0 with c1 vs c0 with c2. If so, compute the (root of the) Phi-square distance: Phi-sq = Chi-sq/N, where N is the combined total count of the two rows (customers) currently considered. It is thus a distance normalized with respect to overall counts.
Here is the matrix of sqrt(Chi-sq) distance between your four customers
.000 1.275 4.057 2.292
1.275 .000 2.124 .862
4.057 2.124 .000 2.261
2.292 .862 2.261 .000
And here is the matrix of sqrt(Phi-sq) distance
.000 .260 .641 .418
.260 .000 .388 .193
.641 .388 .000 .377
.418 .193 .377 .000
So, the distance between any two rows of the data is the (square root of) the chi-square or phi-square statistic of the 2 x p frequency table (p is the number of columns in the data). If any column in the current 2 x p table is completely zero, cut off that column and compute the distance based on the remaining nonzero columns (this is OK, and is how, for example, SPSS proceeds when it computes the distance). Chi-square distance is actually a weighted euclidean distance.
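A plain re-implementation of this distance (Python; not tied to any particular package) that reproduces the matrices above:

```python
import math

def chi_distance(row1, row2, normalized=False):
    """sqrt of the chi-square (or, if normalized, phi-square) statistic
    of the 2 x p table formed by two rows of counts.

    Columns that are zero in both rows are dropped, as described above.
    """
    cols = [(a, b) for a, b in zip(row1, row2) if a + b > 0]
    n1 = sum(a for a, _ in cols)  # total count of the first customer
    n2 = sum(b for _, b in cols)  # total count of the second customer
    n = n1 + n2
    chi2 = 0.0
    for a, b in cols:
        col_total = a + b
        for observed, row_total in ((a, n1), (b, n2)):
            expected = row_total * col_total / n
            chi2 += (observed - expected) ** 2 / expected
    if normalized:  # phi-square = chi-square / N
        chi2 /= n
    return math.sqrt(chi2)

c0, c1 = [12, 5, 0], [3, 4, 0]
# chi_distance(c0, c1) is about 1.275, and with normalized=True about 0.260,
# matching the first rows of the two matrices above
```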
|
14,748
|
Clustering of very skewed, count data: any suggestions to go about (transform etc)?
|
@ttnphns has provided a good answer.
Doing clustering well is often about thinking very hard about your data, so let's do some of that. To my mind, the most fundamental aspect of your data is that they are compositional.
On the other hand, your primary concern seems to be that you have a lot of 0s for green products and specifically wonder if you can transform only the green values to make it more similar to the rest. But because these are compositional data, you cannot think about one set of counts independently from the rest. Moreover, it appears that what you are really interested in are customers' probabilities of purchasing different colored products, but because many have not purchased any green ones, you worry that you cannot estimate those probabilities. One way to address this is to use a somewhat Bayesian approach in which we nudge customers' estimated proportions towards a mean proportion, with the amount of the shift influenced by how far they are from the mean and how much data you have to estimate their true probabilities.
Below I use your example dataset to illustrate (in R) one way to approach your situation. I read in the data and convert them into rowwise proportions, and then compute mean proportions by column. I add the means back to each count to get adjusted counts and new rowwise proportions. This nudges each customer's estimated proportion towards the mean proportion for each product. If you wanted a stronger nudge, you could use a multiple of the means (such as, 15*mean.props) instead.
d = read.table(text="id red blue green
...
c3 4 8 1", header=TRUE)
tab = as.table(as.matrix(d[,-1]))
rownames(tab) = paste0("c", 0:3)
tab
# red blue green
# c0 12 5 0
# c1 3 4 0
# c2 2 21 0
# c3 4 8 1
props = prop.table(tab, 1)
props
# red blue green
# c0 0.70588235 0.29411765 0.00000000
# c1 0.42857143 0.57142857 0.00000000
# c2 0.08695652 0.91304348 0.00000000
# c3 0.30769231 0.61538462 0.07692308
mean.props = apply(props, 2, FUN=function(x){ weighted.mean(x, rowSums(tab)) })
mean.props
# red blue green
# 0.35000000 0.63333333 0.01666667
adj.counts = sweep(tab, 2, mean.props, FUN="+"); adj.counts
# red blue green
# c0 12.35000000 5.63333333 0.01666667
# c1 3.35000000 4.63333333 0.01666667
# c2 2.35000000 21.63333333 0.01666667
# c3 4.35000000 8.63333333 1.01666667
adj.props = prop.table(adj.counts, 1); adj.props
# red blue green
# c0 0.6861111111 0.3129629630 0.0009259259
# c1 0.4187500000 0.5791666667 0.0020833333
# c2 0.0979166667 0.9013888889 0.0006944444
# c3 0.3107142857 0.6166666667 0.0726190476
There are several results of this. One of which is that you now have non-zero estimates of the underlying probabilities of purchasing green products, even when a customer doesn't actually have any record of having purchased any green products yet. Another consequence is that you now have somewhat continuous values, whereas the original proportions were more discrete; that is, the set of possible estimates is less constricted, so a distance measure like the squared Euclidean distance might make more sense now.
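As a cross-check of the adjustment arithmetic (a pure-Python sketch; note that the row-sum-weighted mean of the proportions is simply each column total divided by the grand total):

```python
counts = {"c0": [12, 5, 0], "c1": [3, 4, 0], "c2": [2, 21, 0], "c3": [4, 8, 1]}

# Weighted mean proportion per product = column total / grand total
grand_total = sum(sum(row) for row in counts.values())
col_totals = [sum(col) for col in zip(*counts.values())]
mean_props = [t / grand_total for t in col_totals]  # 0.35, 0.6333..., 0.01666...

# Nudge: add the mean proportions to each row of counts, then renormalize the row
adj_props = {}
for cid, row in counts.items():
    adj = [c + m for c, m in zip(row, mean_props)]
    row_total = sum(adj)
    adj_props[cid] = [a / row_total for a in adj]
```

This reproduces the `adj.props` values printed by the R code above, and every customer now has a strictly positive estimated probability for green products.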
We can visualize the data to see what happened. Because these are compositional data, we only actually have two pieces of information, and we can plot these in a single scatterplot. With most of the information in the red and blue categories, it makes sense to use those as the axes. You can see that the adjusted proportions (the red numbers) are shifted a little from their original positions.
windows()
plot(props[,1], props[,2], pch=as.character(0:3),
xlab="Proportion Red", ylab="Proportion Blue", xlim=c(0,1), ylim=c(0,1))
points(adj.props[,1], adj.props[,2], pch=as.character(0:3), col="red")
At this point, you have data and a lot of people would begin by standardizing them. Again, because these are compositional data, I would run cluster analyses without doing any standardization; these values are already commensurate and standardization would destroy some of the relational information. In fact, from looking at the plot I think you really have only one dimension of information here. (At least in the sample dataset; your real dataset may well be different.) Unless, from a business point of view, you think it's important to recognize people who have any substantial probability of purchasing green products as a distinct cluster of customers, I would extract scores on the first principal component (which accounts for 99.5% of the variance in this dataset) and just cluster that.
pc.a.props = prcomp(adj.props[,1:2], center=T, scale=T)
cumsum(pc.a.props$sdev^2)/sum(pc.a.props$sdev^2)
# [1] 0.9946557 1.000000
pc.a.props$x
# PC1 PC2
# c0 -1.7398975 -0.03897251
# c1 -0.1853614 -0.04803648
# c2 1.6882400 -0.06707115
# c3 0.2370189 0.15408015
library(mclust)
mc = Mclust(pc.a.props$x[,1])
summary(mc)
# ----------------------------------------------------
# Gaussian finite mixture model fitted by EM algorithm
# ----------------------------------------------------
#
# Mclust E (univariate, equal variance) model with 3 components:
#
# log.likelihood n df BIC ICL
# -2.228357 4 6 -12.77448 -12.77448
#
# Clustering table:
# 1 2 3
# 1 2 1
|
14,749
|
What are the regularity conditions for Likelihood Ratio test
|
The required regularity conditions are listed in most intermediate textbooks and are no different from those of the mle. The following ones concern the one-parameter case, yet their extension to the multiparameter one is straightforward.
Condition 1: The pdfs are distinct, i.e. $\theta \neq \theta ^{\prime} \Rightarrow f(x_i;\theta)\neq f(x_i;\theta ^{\prime}) $
Note that this condition essentially states that the parameter identifies the pdf.
Condition 2: The pdfs have common support for all $\theta$
What this implies is that the support does not depend on $\theta$
Condition 3: The point $\theta_0$, i.e. the true parameter, is an interior point of some set $\Omega$
This condition rules out the possibility that $\theta_0$ lies at an endpoint of an interval.
These three together guarantee that, asymptotically, the likelihood is maximised at the true parameter $\theta_0$, and then that the mle $\hat{\theta}$ that solves the equation
$$\frac{\partial l(\theta)} {\partial \theta}=0$$
is consistent.
Condition 4: The pdf $f(x;\theta)$ is twice differentiable as a function of $\theta$
Condition 5: The integral $\int_{-\infty}^{\infty} f(x;\theta)\ \mathrm dx$ can be differentiated twice under the integral sign as a function of $\theta$
We need the last two to derive the Fisher Information which plays a central role in the theory of convergence of the mle.
For some authors these suffice but if we are to be thorough we additionally need a final condition that ensures the asymptotic normality of the mle.
Condition 6: The pdf $f(x;\theta)$ is three times differentiable as a function of $\theta$. Further, for all $\theta \in \Omega$, there exist a constant $c$ and a function $M(x)$ such that
$$\left| \frac{\partial^3 \log f(x;\theta)}{\partial \theta^3} \right| \leq M(x)$$
with $E_{\theta_0} \left[M(X)\right] <\infty$ for all $|\theta-\theta_0|<c$ and all $x$ in the support of $X$
Essentially the last condition allows us to conclude that the remainder of a second order Taylor expansion about $\theta_0$ is bounded in probability and thus poses no problem asymptotically.
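To make the role of the last condition concrete, here is a sketch of the standard expansion of the score at the mle around $\theta_0$ (with $\theta^*$ some point between $\hat{\theta}$ and $\theta_0$):
$$0 = l^{\prime}(\hat{\theta}) = l^{\prime}(\theta_0) + l^{\prime\prime}(\theta_0)(\hat{\theta}-\theta_0) + \tfrac{1}{2} l^{\prime\prime\prime}(\theta^*)(\hat{\theta}-\theta_0)^2$$
Since $\left|l^{\prime\prime\prime}(\theta^*)\right| \leq \sum_i M(x_i)$ and $E_{\theta_0}\left[M(X)\right]<\infty$, the law of large numbers keeps $n^{-1}l^{\prime\prime\prime}(\theta^*)$ bounded in probability, so the quadratic remainder is asymptotically negligible at the usual $\sqrt{n}$ rate.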
Is that what you had in mind?
|
14,750
|
Extract data points from moving average?
|
+1 to fabee's answer, which is complete. Just a note to translate it into R, based on the packages that I've found to do the operations at hand. In my case, I had data that is NOAA temperature forecasts on a three-month basis: Jan-Feb-Mar, Feb-Mar-Apr, Mar-Apr-May, etc, and I wanted to break it out into (approximate) monthly values, assuming that each three-month period's temperature is essentially an average.
library (Matrix)
library (matrixcalc)
# Feb-Mar-Apr through Nov-Dec-Jan temperature forecasts:
qtemps <- c(46.0, 56.4, 65.8, 73.4, 77.4, 76.2, 69.5, 60.1, 49.5, 41.2)
# Thus I need a 10x12 matrix, which is a band matrix but with the first
# and last rows removed so that each row contains 3 1's, for three months.
# Yeah, the as.matrix and all is a bit obfuscated, but the results of
# band are not what svd.inverse wants.
a <- as.matrix (band (matrix (1, nrow=12, ncol=12), -1, 1)[-c(1, 12),])
ai <- svd.inverse (a)
mtemps <- t(qtemps) %*% t(ai) * 3
Which works great for me. Thanks @fabee.
EDIT: OK, back-translating my R to Python, I get:
from numpy import *
from numpy.linalg import *
qtemps = transpose ([[46.0, 56.4, 65.8, 73.4, 77.4, 76.2, 69.5, 60.1, 49.5, 41.2]])
a = tril (ones ((12, 12)), 2) - tril (ones ((12, 12)), -1)
a = a[0:10,:]
ai = pinv (a)
mtemps = dot (ai, qtemps) * 3
(Which took a lot longer to debug than the R version. First because I'm not as familiar with Python as with R, but also because R is much more usable interactively.)
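A quick self-check of the monthly reconstruction (a sketch; the 10x12 matrix is rebuilt directly rather than via the tril trick): re-averaging the reconstructed monthly values in rolling threes should reproduce the quarterly inputs, because the matrix has full row rank.

```python
import numpy as np

qtemps = np.array([46.0, 56.4, 65.8, 73.4, 77.4, 76.2, 69.5, 60.1, 49.5, 41.2])
# 10x12 averaging structure: row i has ones in columns i..i+2 (three months).
A = np.array([[1.0 if i <= j <= i + 2 else 0.0 for j in range(12)]
              for i in range(10)])
mtemps = np.linalg.pinv(A) @ (3.0 * qtemps)  # minimum-norm monthly values
re_avg = (A @ mtemps) / 3.0                  # average back in rolling threes
# Because A has full row rank, A @ pinv(A) is the identity, so re_avg
# matches qtemps up to floating-point error.
```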
|
14,751
|
Extract data points from moving average?
|
I try to put what whuber said into an answer. Let's say you have a large vector $\mathbf x$ with $n=2000$ entries. If you compute a moving average with a window of length $\ell=30$, you can write this as a vector matrix multiplication $\mathbf y = A\mathbf x$ of the vector $\mathbf x$ with the matrix
$$A=\frac{1}{30}\left(\begin{array}{cccccc}
1 & ... & 1 & 0 & ... & 0\\
0 & 1 & ... & 1 & 0 & ...\\
\vdots & & \ddots & & & \vdots\\
0 & ... & 1 & ... & 1 & 0\\
0 & ... & 0 & 1 & ... & 1
\end{array}\right)$$
which has $30$ ones which are shifted through as you advance through the rows until the $30$ ones hit the end of the matrix. Here the averaged vector $\mathbf y$ has 1970 dimensions. The matrix has $1970$ rows and $2000$ columns. Therefore, it is not invertible.
If you are not familiar with matrices, think about it as a linear equation system: you are searching for variables $x_1,...,x_{2000}$ such that the average over the first thirty yields $y_1$, the average over the second thirty yields $y_2$ and so on.
The problem with the equation system (and the matrix) is that it has more unknowns than equations. Therefore, you cannot uniquely identify your unknowns $x_1,...,x_n$. The intuitive reason is that you lose dimensions while averaging, because the first thirty dimensions of $\mathbf x$ don't get a corresponding element in $\mathbf y$ since you cannot shift the averaging window outside of $\mathbf x$.
One way to make $A$ or, equivalently, the equation system solvable is to come up with $30$ more equations (or $30$ more rows for $A$) that provide additional information (are linearly independent of all other rows of $A$).
Another, maybe easier, way is to use the pseudoinverse $A^\dagger$ of $A$. This generates a vector $\mathbf z = A^\dagger\mathbf y$ which has the same dimension as $\mathbf x$ and which has the property that it minimizes the quadratic distance between $\mathbf y$ and $A\mathbf z$ (see wikipedia).
This seems to work quite well. Here is an example where I drew $2000$ examples from a Gaussian distribution, added five, averaged them, and reconstructed the $\mathbf x$ via the pseudoinverse.
Many numerical programs offer pseudo-inverses (e.g. Matlab, numpy in python, etc.).
Here would be the python code to generate the signals from my example:
from numpy import *
from numpy.linalg import *
from matplotlib.pyplot import *
# get A and its inverse
A = (tril(ones((2000,2000)),-1) - tril(ones((2000,2000)),-31))/30.
A = A[30:,:]
pA = pinv(A) #pseudo inverse
# get x
x = random.randn(2000) + 5
y = dot(A,x)
# reconstruct
x2 = dot(pA,y)
plot(x,label='original x')
plot(y,label='averaged x')
plot(x2,label='reconstructed x')
legend()
show()
Hope that helps.
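As a quick sanity check of that least-squares property on a small made-up system (more unknowns than equations, like the moving-average matrix):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))   # wide matrix: 5 equations, 8 unknowns
y = rng.standard_normal(5)
z = np.linalg.pinv(A) @ y         # minimum-norm least-squares solution
# A has full row rank (almost surely for Gaussian entries), so the
# residual A z - y is zero up to floating-point error.
residual = np.linalg.norm(A @ z - y)
```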
|
14,752
|
Extract data points from moving average?
|
This is very related with this question cumsum with shift of n I asked in SO.
I also answered the same question on SO, but it has been closed, so I include the answer here again because I think it is more focused on the software implementation than on the mathematical understanding (even though I think they are equivalent mathematically).
The question asked the same thing, how to reverse the moving average, a.k.a in pandas as rolling mean.
The code sample of the question:
import numpy as np
import pandas as pd
import matplotlib.pylab as plt
np.random.seed(100)
data = np.random.rand(200,3)
df = pd.DataFrame(data)
df.columns = ['a', 'b', 'y']
df['y_roll'] = df['y'].rolling(10).mean()
df['y_roll_predicted'] = df['y_roll'].apply(lambda x: x + np.random.rand()/20)
So, how to obtain df['y'] back from df['y_roll']? and apply the same method to df['y_roll_predicted']
With this function cumsum_shift(n), which you should think of as the inverse of the pandas/numpy method diff(periods = n), you can reverse the moving average (only up to a constant if you don't have the initial values).
The definition of cumsum_shift(n) that generalizes the cumsum() which is this one with n = 1 (n is called shift in the code):
def cumsum_shift(s, shift = 1, init_values = [0]):
s_cumsum = pd.Series(np.zeros(len(s)))
for i in range(shift):
s_cumsum.iloc[i] = init_values[i]
for i in range(shift,len(s)):
s_cumsum.iloc[i] = s_cumsum.iloc[i-shift] + s.iloc[i]
return s_cumsum
Then, assuming the size of the window is 10 (win_size = 10), if you multiply the diff of the rolling mean by 10 and then "cumsum shift" it with a shift of 10, you obtain the original series (exactly, given the initial values).
The code:
win_size = 10
s_diffed = win_size * df['y_roll'].diff()
df['y_unrolled'] = cumsum_shift(s=s_diffed, shift = win_size, init_values= df['y'].values[:win_size])
This code recovers exactly y from y_roll because you have the initial values.
You can see it plotting it (in my case with plotly) that y and y_unrolled are exactly the same (just the red one).
Now doing the same thing to y_roll_predicted to obtain y_predicted_unrolled.
Code:
win_size = 10
s_diffed = win_size * df['y_roll_predicted'].diff()
df['y_predicted_unrolled'] = cumsum_shift(s=s_diffed, shift = win_size, init_values= df['y'].values[:win_size])
In this case the results are not exactly the same: notice how the initial values come from y, but y_roll_predicted incorporates noise into y_roll, so the "unrolling" cannot recover the original exactly.
Here a plot zoomed in in a smaller range to see it better:
Hope this can help somebody.
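As a side note, the element-by-element loop in cumsum_shift can be slow on long series; the same recurrence can be computed with one cumsum per residue class modulo shift (a sketch, not part of the original answer):

```python
import numpy as np
import pandas as pd

def cumsum_shift_vec(s, shift=1, init_values=None):
    # Same recurrence as cumsum_shift: out[i] = out[i - shift] + s[i],
    # seeded with init_values, vectorized per residue class mod `shift`.
    out = np.asarray(s, dtype=float).copy()
    out[:shift] = init_values if init_values is not None else 0.0
    for r in range(shift):
        out[r::shift] = np.cumsum(out[r::shift])
    return pd.Series(out)

# Tiny example: s = 1..12, shift 3, seeds 10/20/30.
s = pd.Series(np.arange(1.0, 13.0))
a = cumsum_shift_vec(s, shift=3, init_values=[10.0, 20.0, 30.0])
```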
|
14,753
|
Extract data points from moving average?
|
Gonzalo,
I'm using your cumsum_shift function in my large df (400,000 points) but I have problems when I change the win_size. Figure below is for win_size=12,000 and I can see some spikes at the end of each win_size. For my current problem I need to use win_size> 40,000. Do you have any idea of restriction of your function based on the win_size? Thanks in advance
|
14,754
|
Extract data points from moving average?
|
fabee's answer was complete. I am just adding a generic function that can be used in Python that I've created and tested for my projects (with a sample code)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
def reconstruct_orig(sm_x:np.ndarray, win_size:int=7):
"""reconstructing from original data
Args:
sm_x (np.ndarray): smoothed array (remove any NaN from the edge)
win_size (int, optional): moving average window size. Defaults to 7.
    Returns:
        np.ndarray: reconstructed (approximate) original array
    """
arr_size = sm_x.shape[0]+win_size
# get A and its inverse
A = (np.tril(np.ones((arr_size,arr_size)),-1) - np.tril(np.ones((arr_size,arr_size)),-(win_size+1)))/win_size
A = A[win_size:,:]
pA = np.linalg.pinv(A) #pseudo inverse
return np.dot(pA, sm_x)
if __name__=="__main__":
# np.random.seed(1)
nmax= 100
t=np.linspace(0,10,num=nmax)
raw_x = pd.Series(np.sin(t)+ 0.2*np.random.normal(0,1, size=nmax)) # create original data
sm_x = raw_x.rolling(7, center=False).mean().dropna() # smooth data
re_x = reconstruct_orig(sm_x, win_size=7) # reconstruct data
plt.plot(raw_x,'x',label='original x')
plt.plot(sm_x,label='averaged x')
plt.plot(re_x,'.', label='reconstructed x')
plt.legend()
plt.show()
|
14,755
|
Why is statistics useful when many things that matter are one shot things?
|
First I think that you may be confusing "statistics" meaning a collection of numbers or other facts describing a group or situation, and "statistics" meaning the science of using data and information to understand the world in the face of variation (others may be able to improve on my definitions). Statisticians use both senses of the word, so it is not surprising when people mix them up.
Statistics (the science) is a lot about choosing strategies and choosing the best strategy even if we only get to apply it once. Sometimes when I (and others) teach probability we use the classic Monty Hall problem (3 doors, 2 goats, 1 car) to motivate it, and we show how we can estimate probabilities by playing the game a bunch of times (not for prizes): we can see that the "switch" strategy wins 2/3 of the time and the "stay" strategy only wins 1/3 of the time. Now if we had the opportunity to play the game a single time, we would know some things about which strategy gives a better chance of winning.
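That simulation idea is easy to reproduce; a minimal sketch (door labels and trial counts are arbitrary):

```python
import random

def play(switch, rng):
    car, pick = rng.randrange(3), rng.randrange(3)
    # Host opens a goat door that is neither the contestant's pick nor the car.
    opened = next(d for d in range(3) if d != pick and d != car)
    if switch:
        pick = next(d for d in range(3) if d != pick and d != opened)
    return pick == car

rng = random.Random(0)
n = 100_000
switch_wins = sum(play(True, rng) for _ in range(n)) / n
stay_wins = sum(play(False, rng) for _ in range(n)) / n
# switch_wins comes out close to 2/3, stay_wins close to 1/3
```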
The surgery example is similar: you will only have the surgery (or not have it) once, but don't you want to know which strategy benefits more people? If your choices are surgery with some chance greater than 0% of survival or no surgery and 0% survival, then yes, there is little difference between the surgery having 51% survival and 99.9% survival. But what if there are other options as well: you can choose between surgery, doing nothing (which has 25% survival), or a change of diet and exercise which has 75% survival (but requires effort on your part). Now wouldn't you care whether the surgery option has 51% vs. 99% survival?
Also consider the doctor: he will be doing more than just your surgery. If surgery has 99.9% survival then he has no reason to consider alternatives, but if it only has 51% survival then, while it may be the best choice today, he should be looking for other alternatives that increase that survival. Yes, even with 90% survival he will lose some patients, but which strategy gives him the best chance of saving the most patients?
This morning I wore my seat belt while driving (my usual strategy), but did not get in any accidents, so was my strategy a waste of time? If I knew when I would get in an accident then I could save time by only putting on the seat belt on those occasions and not on others. But I don't know when I will be in an accident so I will stick with my wear the seat belt strategy because I believe it will give me the best chance if I ever am in an accident even if that means wasting a bit of time and effort in the high percentage (hopefully 100%) of times that there is no accident.
|
14,756
|
Why is statistics useful when many things that matter are one shot things?
|
Just because you don't use statistics in your daily life does not mean that the field does not directly affect you. When you are at the doctor and they recommend one treatment over the other, you can bet that behind that recommendation was many clinical trials that used statistics to interpret the results of their experiments.
It turns out that the concept of expected value is also very useful even if you do not personally use the concept. Your example of betting your life savings fails to take into account how risk averse you are. In other situations you might find yourself less risk averse, or there may be no catastrophic outcomes. Business, finance, and actuarial contexts are examples of this. Perhaps you are issuing a home insurance policy; then all of a sudden knowing the probability of an earthquake occurring within some specified period of time matters a great deal.
In the end, statistics is a great way to deal with uncertainty. In your last example you made up some data about places you like to travel and claimed that statistics will say that you will never find a place in Asia that you like. This is just wrong. Of course this data will make you believe that Asia is less likely to have a place you like, but you can set your prior belief to be whatever you like, and statistics will tell you how to update your belief given the new data. Furthermore, it allows you to modify your belief in a principled way that will allow you to act rationally in the presence of uncertainty.
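A minimal sketch of that updating, using a Beta-Binomial model with made-up numbers (a flat prior on the chance of liking a place in Asia, then 2 liked out of 10 visited):

```python
# Flat Beta(1, 1) prior on the chance of liking a place.
a, b = 1.0, 1.0
liked, visited = 2, 10
# Conjugate update: add successes to a, failures to b.
a_post, b_post = a + liked, b + (visited - liked)
post_mean = a_post / (a_post + b_post)  # 3 / 12 = 0.25
```

The posterior mean sits between the prior mean (0.5) and the sample proportion (0.2), and more data pulls it further toward the data.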
|
14,757
|
Why is statistics useful when many things that matter are one shot things?
|
The world is stochastic, not deterministic. If it were deterministic, physicists would be ruling the world and statisticians would be out of a job. But the reality is that statisticians are in high demand in almost every discipline. That is not to say that there isn't a place for physics and the other sciences, but statistics works hand in hand with science and is the basis for many scientific discoveries.
Enough chatter and down to specifics. I have worked the last 17 years in the medical industry, first in medical devices, then pharmaceuticals, and now general medical research. Drugs and medical devices that improve quality of life and often save or extend life are developed and approved in this country and around the world on a regular basis. In the US approval requires evidence of safety and efficacy before the FDA will allow a drug or medical device to be marketed. Evidence to the FDA comes from clinical trials in phases. All the clinical trials require valid statistical design and analysis methods. Nothing is perfect. Drugs work well for some people while others may not respond or will have adverse events (bad reactions that can cause illness or death). The trials separate out the ineffective drugs from the effective. Most drugs fail and there is often a ten year cycle from early stage development to end of phase III with approval and marketing at the end of the trial. Postmarket surveillance which also requires statistics is then applied to make sure that the drug works well enough for the general population. Sometimes the general population that the drug is approved for is a less restrictive group than the patients that were eligible for the clinical trials. So sometimes drugs do turn out to be dangerous and get pulled from the market. Statistics helps in all aspects of drug safety.
Statistics is not perfect. We live with some mistakes due to randomness and uncertainty. But it is controlled and our lives are better and errors are reduced from what they would be had statistical science not been involved.
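As an illustration of the kind of calculation behind a two-arm efficacy comparison (a hedged sketch, not any particular trial's analysis), here is a pooled two-proportion z-test in Python with made-up counts:

```python
import math

def two_proportion_z(x1, n1, x2, n2):
    """Z statistic for H0: p1 == p2, using the pooled-proportion normal approximation."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

def two_sided_p(z):
    """Two-sided p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Hypothetical trial: 60/100 respond on drug, 45/100 on placebo.
z = two_proportion_z(60, 100, 45, 100)
print(z, two_sided_p(z))
```

With these made-up counts the difference is statistically significant at the usual 5% level, which is the sort of evidence regulators weigh.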
|
14,758
|
Why is statistics useful when many things that matter are one shot things?
|
I myself have the same doubts about the usefulness of probability, and statistics, when it comes to taking decisions about a single event. In my opinion, knowing the probability, real or estimated, is extremely important when the objective is estimating outcomes of samples, be they a single event repeated a number of times or a sample drawn from a certain population. In short, knowing the probability makes more sense for the casino, which, based on probability calculations, can set the rules that guarantee it will win in the long run (after many plays), and not for a gambler who intends to play one time, so he will either win or lose (these are the outcomes when the experiment is run a single time). It's also important for the generals who contemplate sending their soldiers to a battle with the risk (probability) of losing 10% of them, but not for a certain soldier (say, John) who is only going to die or survive. There are so many examples like these in real life.
The point I want to make is that Probability and Statistics are not only useful in real life but, more precisely, a tool for all modern scientific research and decision-making rules. However, it's not correct to say that rationality implies relying on the probability of a single event, without the intention or the possibility of repeating it, for estimating the outcome. The tendency of the probability to influence a certain individual's decision, based on her or his degree of risk aversion, is obviously subjective. The risk-averse and the risk-loving have different attitudes (decisions) toward the same lottery (the same expected value).
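The casino-versus-gambler point can be sketched numerically (a hedged illustration using roughly American-roulette red/black odds; the bet sizes are arbitrary):

```python
import random

# One play: win 1 with probability 18/38, otherwise lose 1.
random.seed(0)  # fixed seed so the sketch is reproducible

def play():
    return 1 if random.random() < 18 / 38 else -1

one_play = play()                                     # a single outcome: +1 or -1
many = sum(play() for _ in range(100_000)) / 100_000  # long-run average per play

expected = 18 / 38 - 20 / 38                          # about -0.0526 per play
print(one_play, many, expected)
```

The single play is just +1 or -1 and says nothing useful to the gambler, while the long-run average per play settles near the expected value, which is exactly what the casino relies on.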
|
14,759
|
Why is statistics useful when many things that matter are one shot things?
|
The long and the short of it is that probability is the unique generalization of ordinary true/false logic to degrees of belief between 0 and 1. This is the so-called logical Bayesian interpretation of probability, originated by R.T. Cox and later championed by E.T. Jaynes.
Furthermore, under weak assumptions it can be shown that the right way to order uncertain outcomes by preference is to order them by expected utility, with the expectation taken with respect to the probability distribution over outcomes.
See Robert Clemen, "Making Hard Decisions", for an introduction and exposition on applied decision analysis which is based on Bayesian probability and expected utility.
You are absolutely right to be skeptical about conventional frequentist statistics; by the design of its inventors (R.A. Fisher, J. Neyman, E. Pearson) it is limited to repetitive events. But many everyday problems don't involve repetitive events. What to do? The typical approach is some combination of forcing square pegs into round holes, and moving the goalposts. Shameful, really.
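The expected-utility ordering described above can be sketched in a few lines of Python; the lottery and utility functions are hypothetical:

```python
import math

# A lottery as a list of (probability, monetary outcome) pairs,
# versus receiving its expected value for certain.
lottery = [(0.5, 0.0), (0.5, 100.0)]
sure_thing = [(1.0, 50.0)]

def expected_utility(prospect, u):
    return sum(p * u(x) for p, x in prospect)

u_neutral = lambda x: x            # risk-neutral: utility is money itself
u_averse = lambda x: math.sqrt(x)  # concave utility: risk-averse

# The risk-neutral agent is indifferent; the risk-averse agent
# strictly prefers the sure thing, despite equal expected values.
print(expected_utility(lottery, u_neutral), expected_utility(sure_thing, u_neutral))
print(expected_utility(lottery, u_averse), expected_utility(sure_thing, u_averse))
```

Two agents facing the same probabilities can thus rationally choose differently, because the ordering is by expected utility, not expected money.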
|
14,760
|
Why is statistics useful when many things that matter are one shot things?
|
I am skeptical of statistics for the following reasons.
I am convinced anybody without a graduate degree in statistics has no clue what they are doing. Unfortunately, there are millions of people across the world doing research without a graduate degree in statistics. I was an undergraduate math major at the University of Maryland, College Park. I took four 400-level math classes. All the teachers did was teach you how to calculate stuff. Nobody taught me how to make sense of anything or do any statistical analysis except for hypothesis testing, which makes no sense for 2 reasons.
1. For every hypothesis test I was taught, I had to make assumptions beforehand. Nobody taught me which assumption(s) I had to start with.
2. P values make no sense logically. A graduate degree in statistics might teach you what a p value actually is. However, I am convinced no undergraduate knows how to use it. The undergraduate definition assumes a probability of something that depends on the hypothesis being correct. Logically, the definition makes no sense at all. Even worse, NOBODY has ever told me where the probability comes from. I have actually emailed almost my whole math department (more than 200 people) asking if somebody could give me an answer. The most popular and only responses were "one would have to ASSUME the error rates for the probability" (when I asked people how this was done, they all answered "from previous experiments"; I could not get anything more specific from anybody), "It's just the way it is", and "it is completely random".
The same thing happened when I googled what the significance of a p value is.
It leads me to the conclusion...
Even a significant number of math and statistics professors have no clue what the logic behind statistics is. I don't expect people to have in-depth knowledge. However, I have a feeling that even a significant percentage of researchers and professors do not understand any of the underlying logic behind statistics.
Statistical error is not the same thing as actual error. Because people like to use statistics to derive estimates for things that are humongous, people like to use statistical error to "mask" the fact that they have no clue what the actual error is.
People use small samples for big populations because statistical theory tells them they can. I learned from one of my college courses, that people like to use data that is an estimate from about 30 schools in the country to show that there are few violent incidents in schools in the whole country. There are about 100,000 schools. That sounds insane. A whole popular movement is based off of about 30 schools in the whole country.
People like to make the burden of proof statistical. The Higgs boson was never discovered. It was discovered statistically, but that doesn't mean anything. Something being discovered purely statistically is useless because nobody knows the accuracy of statistics.
People like to use statistics to make important decisions. Statistics can be used as a guide, but nobody knows how accurate it really is. Just because a problem seems impossible to solve does not mean that statistics is the next best thing. The fact that DNA testing is based on statistics gives me the chills. Can I be given the death penalty solely because of statistics? Could a murderer be released from jail solely because of statistics?
I believe statistics can be useful, but only if it is not used as the conclusion. I believe statistics can tell us what some of the possibilities are. Then logic, not statistical logic should be used to prove which possibility(s) is correct.
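Where the probability behind a p-value comes from can be made concrete: it is computed entirely from the assumed null model, not from previous experiments. A hedged sketch with a fair-coin null and made-up data:

```python
from math import comb

def binom_p_upper(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p), computed exactly from the null model."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Observe 60 heads in 100 flips; null hypothesis: the coin is fair (p = 0.5).
# The one-sided p-value is the probability, UNDER THAT ASSUMPTION,
# of a result at least this extreme.
p_one_sided = binom_p_upper(60, 100)
print(p_one_sided)
```

Every term in the sum comes from the hypothesized model; no error rates need to be assumed from prior experiments for this calculation.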
|
14,761
|
how to avoid overfitting in XGBoost model
|
XGBoost (and other gradient boosting machine routines too) has a number of parameters that can be tuned to avoid over-fitting. I will mention some of the most obvious ones. For example we can change:
the ratio of features used (i.e. columns used); colsample_bytree. Lower ratios avoid over-fitting.
the ratio of the training instances used (i.e. rows used); subsample. Lower ratios avoid over-fitting.
the maximum depth of a tree; max_depth. Lower values avoid over-fitting.
the minimum loss reduction required to make a further split; gamma. Larger values avoid over-fitting.
the learning rate of our GBM (i.e. how much we update our prediction with each successive tree); eta. Lower values avoid over-fitting.
the minimum sum of instance weight needed in a leaf, in certain applications this relates directly to the minimum number of instances needed in a node; min_child_weight. Larger values avoid over-fitting.
This list is not exhaustive and I strongly urge looking into the XGBoost docs for information regarding other parameters. Please note that trying to avoid over-fitting might lead to under-fitting, where we regularise too much and fail to learn relevant information. On that matter, one might want to consider using a separate validation set or simply cross-validation (through xgboost.cv() for example) to monitor the progress of the GBM as more iterations are performed (i.e. as base learners are added). That way potentially over-fitting problems can be caught early on. This relates closely to the use of early stopping as a form of regularisation; XGBoost offers an argument early_stopping_rounds that is relevant in this case.
Finally, I would also note that the class imbalance reported (85-15) is not really severe. Using the default value scale_pos_weight of 1 is probably adequate.
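As a hedged sketch of what early_stopping_rounds does conceptually (the validation-loss curve below is synthetic, not from a real XGBoost run):

```python
# Stop adding boosting rounds once the validation metric has not improved
# for `patience` consecutive rounds -- the idea behind early_stopping_rounds.
def early_stop(val_losses, patience):
    """Return the index of the round at which training would stop."""
    best, best_round = float("inf"), 0
    for i, loss in enumerate(val_losses):
        if loss < best:
            best, best_round = loss, i
        elif i - best_round >= patience:
            return i  # no improvement for `patience` rounds: stop here
    return len(val_losses) - 1  # ran out of rounds without triggering

# Synthetic curve: loss improves, then degrades as the model over-fits.
curve = [0.9, 0.7, 0.6, 0.55, 0.56, 0.57, 0.58, 0.60, 0.62]
print(early_stop(curve, patience=3))
```

In practice one would pass early_stopping_rounds (and an evaluation set) to XGBoost itself rather than implementing this loop; the sketch only illustrates the mechanism.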
|
14,762
|
How should I intuitively understand the KL divergence loss in variational autoencoders? [duplicate]
|
The KL divergence tells us how well the probability distribution Q approximates the probability distribution P by calculating the cross-entropy minus the entropy. Intuitively, you can think of that as the statistical measure of how one distribution differs from another.
In a VAE, let $X$ be the data we want to model, $z$ be the latent variable, $P(X)$ be the probability distribution of the data, $P(z)$ be the probability distribution of the latent variable, and $P(X|z)$ be the distribution of the data generated given the latent variable.
In the case of variational autoencoders, our objective is to infer $P(z|X)$, the probability distribution that projects our data into latent space. But since we do not have access to $P(z|X)$, we approximate it with a simpler distribution $Q$.
Now while training our VAE, the encoder should try to learn the simpler distribution $Q(z|X)$ such that it is as close as possible to the actual distribution $P(z|X)$. This is where we use KL divergence as a measure of a difference between two probability distributions. The VAE objective function thus includes this KL divergence term that needs to be minimized.
$$ D_{KL}[Q(z|X)||P(z|X)] = E[\log {Q(z|X)} - \log {P(z|X)}] $$
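A minimal numerical sketch of the KL divergence for discrete distributions (the distributions below are made up for illustration), showing it is zero when the distributions match and is not symmetric:

```python
from math import log

def kl(p, q):
    """D_KL(P || Q) for discrete distributions given as equal-length lists."""
    return sum(pi * log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]

print(kl(p, p))  # matching distributions: divergence is zero
print(kl(p, q))  # positive: Q is a poor approximation of P
print(kl(q, p))  # different from kl(p, q): KL is not symmetric
```

The same quantity, with $Q(z|X)$ and $P(z|X)$ in place of these toy lists, is what the VAE objective drives toward zero.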
|
14,763
|
The pdf of $\frac{X_1-\bar{X}}{S}$
|
What is so intriguing about this result is how much it looks like the distribution of a correlation coefficient. There's a reason.
Suppose $(X,Y)$ is bivariate normal with zero correlation and common variance $\sigma^2$ for both variables. Draw an iid sample $(x_1,y_1), \ldots, (x_n,y_n)$. It is well known, and readily established geometrically (as Fisher did a century ago) that the distribution of the sample correlation coefficient
$$r = \frac{\sum_{i=1}^n(x_i - \bar x)(y_i - \bar y)}{(n-1) S_x S_y}$$
is
$$f(r) = \frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)}\left(1-r^2\right)^{n/2-2},\ -1 \le r \le 1.$$
(Here, as usual, $\bar x$ and $\bar y$ are sample means and $S_x$ and $S_y$ are the square roots of the unbiased variance estimators.) $B$ is the Beta function, for which
$$\frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)} = \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n}{2}-1\right)} = \frac{\Gamma\left(\frac{n-1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{n}{2}-1\right)} . \tag{1}$$
To compute $r$, we may exploit its invariance under rotations in $\mathbb{R}^n$ around the line generated by $(1,1,\ldots, 1)$, along with the invariance of the distribution of the sample under the same rotations, and choose $y_i/S_y$ to be any unit vector whose components sum to zero. One such vector is proportional to $v = (n-1, -1, \ldots, -1)$. Its standard deviation is
$$S_v = \sqrt{\frac{1}{n-1}\left((n-1)^2 + (-1)^2 + \cdots + (-1)^2\right)} = \sqrt{n}.$$
Consequently, $r$ must have the same distribution as
$$\frac{\sum_{i=1}^n(x_i - \bar x)(v_i - \bar v)}{(n-1) S_x S_v}
= \frac{(n-1)x_1 - x_2-\cdots-x_n}{(n-1) S_x \sqrt{n}}
= \frac{n(x_1 - \bar x)}{(n-1) S_x \sqrt{n}} = \frac{\sqrt{n}}{n-1}Z.$$
Therefore all we need to do is rescale $r$ to find the distribution of $Z$:
$$f_Z(z) = \bigg|\frac{\sqrt{n}}{n-1}\bigg| f\left(\frac{\sqrt{n}}{n-1}z\right) = \frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)} \frac{\sqrt{n}}{n-1}\left(1- \frac{n}{(n-1)^2}z^2\right)^{n/2-2}$$
for $|z| \le \frac{n-1}{\sqrt{n}}$. Formula (1) shows this is identical to that of the question.
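As a quick numerical sanity check of this density (a sketch in Python; the normalizing constant follows formula (1)), it should integrate to 1 over its support, and for $n=4$ it is the constant $1/3$:

```python
from math import gamma, pi, sqrt

def f_Z(z, n):
    """The rescaled density derived above, via formula (1) for 1/B(1/2, n/2-1)."""
    const = gamma((n - 1) / 2) / (sqrt(pi) * gamma(n / 2 - 1))
    return const * sqrt(n) / (n - 1) * (1 - n * z**2 / (n - 1) ** 2) ** (n / 2 - 2)

def integrate(f, a, b, m=100_000):
    """Midpoint-rule numerical integration of f over [a, b]."""
    h = (b - a) / m
    return h * sum(f(a + (k + 0.5) * h) for k in range(m))

n = 6
bound = (n - 1) / sqrt(n)
print(integrate(lambda z: f_Z(z, n), -bound, bound))  # total probability
print(f_Z(0.0, 4))                                    # constant for n = 4
```

Both checks agree with the derivation: the density is properly normalized, and the $n=4$ case is uniform on $[-3/2, 3/2]$.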
Not entirely convinced? Here is the result of simulating this situation 100,000 times (with $n=4$, where the distribution is uniform).
The first histogram plots the correlation coefficients of $(x_i,y_i),\,i=1,\ldots,4$, while the second histogram plots the correlation coefficients of $(x_i,v_i),\,i=1,\ldots,4$, for a randomly chosen vector $v$ that remains fixed for all iterations. They are both uniform. The QQ-plot on the right confirms these distributions are essentially identical.
Here's the R code that produced the plot.
n <- 4
n.sim <- 1e5
set.seed(17)
par(mfrow=c(1,3))
#
# Simulate spherical bivariate normal samples of size n each.
#
x <- matrix(rnorm(n.sim*n), n)
y <- matrix(rnorm(n.sim*n), n)
#
# Look at the distribution of the correlation of `x` and `y`.
#
sim <- sapply(1:n.sim, function(i) cor(x[,i], y[,i]))
hist(sim)
#
# Specify *any* fixed vector in place of `y`.
#
v <- c(n-1, rep(-1, n-1)) # The case in question
v <- rnorm(n) # Can use anything you want
#
# Look at the distribution of the correlation of `x` with `v`.
#
sim2 <- sapply(1:n.sim, function(i) cor(x[,i], v))
hist(sim2)
#
# Compare the two distributions.
#
qqplot(sim, sim2, main="QQ Plot")
Reference
R. A. Fisher, Frequency-distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10, 507. See Section 3. (Quoted in Kendall's Advanced Theory of Statistics, 5th Ed., section 16.24.)
|
The pdf of $\frac{X_1-\bar{X}}{S}$
|
What is so intriguing about this result is how much it looks like the distribution of a correlation coefficient. There's a reason.
Suppose $(X,Y)$ is bivariate normal with zero correlation and commo
|
The pdf of $\frac{X_1-\bar{X}}{S}$
What is so intriguing about this result is how much it looks like the distribution of a correlation coefficient. There's a reason.
Suppose $(X,Y)$ is bivariate normal with zero correlation and common variance $\sigma^2$ for both variables. Draw an iid sample $(x_1,y_1), \ldots, (x_n,y_n)$. It is well known, and readily established geometrically (as Fisher did a century ago) that the distribution of the sample correlation coefficient
$$r = \frac{\sum_{i=1}^n(x_i - \bar x)(y_i - \bar y)}{(n-1) S_x S_y}$$
is
$$f(r) = \frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)}\left(1-r^2\right)^{n/2-2},\ -1 \le r \le 1.$$
(Here, as usual, $\bar x$ and $\bar y$ are sample means and $S_x$ and $S_y$ are the square roots of the unbiased variance estimators.) $B$ is the Beta function, for which
$$\frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)} = \frac{\Gamma\left(\frac{n-1}{2}\right)}{\Gamma\left(\frac{1}{2}\right)\Gamma\left(\frac{n}{2}-1\right)} = \frac{\Gamma\left(\frac{n-1}{2}\right)}{\sqrt{\pi}\Gamma\left(\frac{n}{2}-1\right)} . \tag{1}$$
To compute $r$, we may exploit its invariance under rotations in $\mathbb{R}^n$ around the line generated by $(1,1,\ldots, 1)$, along with the invariance of the distribution of the sample under the same rotations, and choose $y_i/S_y$ to be any unit vector whose components sum to zero. One such vector is proportional to $v = (n-1, -1, \ldots, -1)$. Its standard deviation is
$$S_v = \sqrt{\frac{1}{n-1}\left((n-1)^2 + (-1)^2 + \cdots + (-1)^2\right)} = \sqrt{n}.$$
Consequently, $r$ must have the same distribution as
$$\frac{\sum_{i=1}^n(x_i - \bar x)(v_i - \bar v)}{(n-1) S_x S_v}
= \frac{(n-1)x_1 - x_2-\cdots-x_n}{(n-1) S_x \sqrt{n}}
= \frac{n(x_1 - \bar x)}{(n-1) S_x \sqrt{n}} = \frac{\sqrt{n}}{n-1}Z.$$
Therefore all we need to do is rescale $r$ to find the distribution of $Z$:
$$f_Z(z) = \bigg|\frac{\sqrt{n}}{n-1}\bigg| f\left(\frac{\sqrt{n}}{n-1}z\right) = \frac{1}{B\left(\frac{1}{2}, \frac{n}{2}-1\right)} \frac{\sqrt{n}}{n-1}\left(1- \frac{n}{(n-1)^2}z^2\right)^{n/2-2}$$
for $|z| \le \frac{n-1}{\sqrt{n}}$. Formula (1) shows this is identical to that of the question.
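As a quick numerical sanity check (not part of the original derivation), the rescaled density $f_Z$ should integrate to 1 over its support $|z| \le (n-1)/\sqrt{n}$. A midpoint-rule sketch in Python:

```python
from math import sqrt, pi, gamma

# Numerical check: the rescaled density f_Z should integrate to 1
# over its support |z| <= (n-1)/sqrt(n).
def f_Z(z, n):
    c = gamma((n - 1) / 2) / (sqrt(pi) * gamma(n / 2 - 1))
    return c * sqrt(n) / (n - 1) * (1 - n * z * z / (n - 1) ** 2) ** (n / 2 - 2)

def total_mass(n, steps=100_000):
    a = (n - 1) / sqrt(n)          # edge of the support
    h = 2 * a / steps
    # the midpoint rule avoids evaluating exactly at the support boundary
    return sum(f_Z(-a + (i + 0.5) * h, n) for i in range(steps)) * h

for n in (4, 6, 10):
    print(n, round(total_mass(n), 4))   # each ≈ 1.0
```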
Not entirely convinced? Here is the result of simulating this situation 100,000 times (with $n=4$, where the distribution is uniform).
The first histogram plots the correlation coefficients of $(x_i,y_i),i=1,\ldots,4$ while the second histogram plots the correlation coefficients of $(x_i,v_i),i=1,\ldots,4$ for a randomly chosen vector $v$ that remains fixed for all iterations. They are both uniform. The QQ-plot on the right confirms these distributions are essentially identical.
Here's the R code that produced the plot.
n <- 4
n.sim <- 1e5
set.seed(17)
par(mfrow=c(1,3))
#
# Simulate spherical bivariate normal samples of size n each.
#
x <- matrix(rnorm(n.sim*n), n)
y <- matrix(rnorm(n.sim*n), n)
#
# Look at the distribution of the correlation of `x` and `y`.
#
sim <- sapply(1:n.sim, function(i) cor(x[,i], y[,i]))
hist(sim)
#
# Specify *any* fixed vector in place of `y`.
#
v <- c(n-1, rep(-1, n-1)) # The case in question
v <- rnorm(n) # Can use anything you want
#
# Look at the distribution of the correlation of `x` with `v`.
#
sim2 <- sapply(1:n.sim, function(i) cor(x[,i], v))
hist(sim2)
#
# Compare the two distributions.
#
qqplot(sim, sim2, main="QQ Plot")
Reference
R. A. Fisher, Frequency-distribution of the values of the correlation coefficient in samples from an indefinitely large population. Biometrika, 10, 507. See Section 3. (Quoted in Kendall's Advanced Theory of Statistics, 5th Ed., section 16.24.)
|
14,764
|
The pdf of $\frac{X_1-\bar{X}}{S}$
|
I'd like to suggest this way to get the pdf of $Z$ by directly calculating the MVUE of $P(X\leq c)$ using Bayes' theorem, although it is somewhat involved.
Since $E[I_{(-\infty,c)}(X_1)]=P(X_1\leq c)$ and $Z_1=\bar X$, $Z_2=S^2$ are jointly complete sufficient statistics, the MVUE of $P(X\leq c)$ is:
$$\psi(z_1,z_2)=E[I_{(-\infty,c)}(X_1)|z_1,z_2]=\int_{-\infty}^{\infty}I_{(-\infty,c)}(x_1)f_{X|Z_1,Z_2}(x_1|z_1,z_2)dx_1$$
Now using Bayes' theorem, we get
$$f_{X|Z_1,Z_2}(x_1|z_1,z_2)={{f_{Z_1,Z_2|X_1}(z_1,z_2|x_1)f_{X_1}(x_1)}\over{f_{Z_1,Z_2}(z_1,z_2)}}$$
The denominator $f_{Z_1,Z_2}(z_1,z_2)=f_{Z_1}(z_1)f_{Z_2}(z_2)$ can be written in closed form because $Z_1 \sim N(\mu,\frac{\sigma^2}{n})$, $Z_2 \sim \Gamma({n-1\over 2},{2 \sigma^2\over n-1})$ are independent of each other.
To get the closed form of numerator, we can adopt these statistics:
$$W_1 = {\sum_{i=2}^n X_i \over n-1}$$
$$W_2 = {\sum_{i=2}^n X_i^2 -(n-1) W_1^2 \over (n-1)-1}$$
which are the sample mean and sample variance of $X_2, X_3, ..., X_n$; they are independent of each other and also independent of $X_1$. We can express these in terms of $Z_1, Z_2$:
$W_1={n Z_1 - X_1\over n-1}$, $W_2={(n-1)Z_2+nZ_1^2-X_1^2-(n-1)W_1^2 \over n-2}$
Using a change of variables with $X_1=x_1$ fixed,
$$f_{Z_1,Z_2|X_1}(z_1,z_2|x_1)={n \over n-2}f_{W_1,W_2}(w_1,w_2)={n \over n-2}f_{W_1}(w_1)f_{W_2}(w_2)$$
Since $W_1 \sim N(\mu,\frac{\sigma^2}{n-1})$, $W_2 \sim \Gamma({n-2\over 2},{2 \sigma^2\over n-2})$ we can get the closed form of this.
Note that this holds only for $w_2 \geq 0$ which restricts $x_1$ to $z_1-{n-1 \over \sqrt n}\sqrt{z_2} \leq x_1 \leq z_1+{n-1 \over \sqrt n}\sqrt{z_2} $.
Putting it all together, the exponential terms cancel and we get
$$f_{X|Z_1,Z_2}(x_1|z_1,z_2)={\Gamma({n-1 \over 2}) \over \sqrt{\pi} \Gamma({n-2 \over 2})} {\sqrt{n} \over \sqrt{z_2} (n-1)} \left(1-\left({\sqrt{n} (x_1 -z_1) \over \sqrt{z_2} (n-1) }\right)^2\right)^{n-4 \over 2}$$ where $z_1-{n-1 \over \sqrt n}\sqrt{z_2} \leq x_1 \leq z_1+{n-1 \over \sqrt n}\sqrt{z_2} $ and zero elsewhere.
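A quick numerical check of this conditional density (written here with the exponent $(n-4)/2$, which is what makes the expression integrate to 1 and matches the $\cos^{n-3}\theta$ integrand below; the values of $z_1$, $z_2$ and $n$ are arbitrary):

```python
from math import sqrt, pi, gamma

# Conditional density of X1 given (z1, z2), with exponent (n-4)/2.
def cond_pdf(x1, z1, z2, n):
    c = gamma((n - 1) / 2) / (sqrt(pi) * gamma((n - 2) / 2))
    t = sqrt(n) * (x1 - z1) / (sqrt(z2) * (n - 1))
    return c * sqrt(n) / (sqrt(z2) * (n - 1)) * (1 - t * t) ** ((n - 4) / 2)

def total_mass(z1, z2, n, steps=100_000):
    half = (n - 1) / sqrt(n) * sqrt(z2)   # half-width of the support
    h = 2 * half / steps
    # midpoint rule over the support
    return sum(cond_pdf(z1 - half + (i + 0.5) * h, z1, z2, n)
               for i in range(steps)) * h

print(round(total_mass(1.3, 2.0, 6), 4))   # ≈ 1.0
```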
From this, we can get the pdf of $Z={X_1- z_1 \over \sqrt{z_2}}$ by another change of variables.
By the way, the MVUE is: $$\psi(z_1,z_2)={\Gamma({n-1 \over 2}) \over \sqrt{\pi} \Gamma({n-2 \over 2})} \int ^{\theta_c} _{-{\pi \over2}} \cos^{n-3} \theta \, d\theta$$
where $\theta_c = \sin^{-1} \left({\sqrt{n}(c-z_1)\over(n-1)\sqrt{z_2}}\right)$, and $\psi(z_1,z_2)=1$ if $c \geq z_1+{n-1 \over \sqrt{n}}\sqrt{z_2}$.
I am not a native English speaker, so there could be some awkward sentences. I am studying statistics on my own with the textbook Introduction to Mathematical Statistics by Hogg, so there could be some grammatical or mathematical mistakes; I would appreciate it if someone corrected them.
Thank you for reading.
|
|
14,765
|
What is an intuitive explanation of Echo State Networks?
|
An Echo State Network is an instance of the more general concept of Reservoir Computing. The basic idea behind the ESN is to get the benefits of an RNN (processing a sequence of inputs that depend on each other, i.e. time dependencies like a signal) but without the problems of training a traditional RNN, such as the vanishing gradient problem.
ESNs achieve this by having a relatively large reservoir of sparsely connected neurons using a sigmoidal transfer function (relative to input size, something like 100-1000 units). The connections in the reservoir are assigned once and are completely random; the reservoir weights do not get trained. Input neurons are connected to the reservoir and feed the input activations into the reservoir - these too are assigned untrained random weights. The only weights that are trained are the output weights which connect the reservoir to the output neurons.
In training, the inputs will be fed to the reservoir and a teacher output will be applied to the output units. The reservoir states are captured over time and stored. Once all of the training inputs have been applied, a simple application of linear regression can be used between the captured reservoir states and the target outputs. These output weights can then be incorporated into the existing network and used for novel inputs.
The idea is that the sparse random connections in the reservoir allow previous states to "echo" even after they have passed, so that if the network receives a novel input that is similar to something it trained on, the dynamics in the reservoir will start to follow the activation trajectory appropriate for the input and in that way can provide a matching signal to what it trained on, and if it is well-trained it will be able to generalize from what it has already seen, following activation trajectories that would make sense given the input signal driving the reservoir.
The advantage of this approach is in the incredibly simple training procedure since most of the weights are assigned only once and at random. Yet they are able to capture complex dynamics over time and are able to model properties of dynamical systems. By far the most helpful papers I have found on ESNs are:
A tutorial on training RNNs by Herbert Jaeger (curator of the Scholarpedia page on ESNs)
A Practical Guide to Applying Echo State Networks by Mantas LukoΕ‘eviΔius
They both have easy to understand explanations to go along with the formalism and outstanding advice for creating an implementation with guidance for choosing appropriate parameter values.
UPDATE: The Deep Learning book from Goodfellow, Bengio, and Courville has a slightly more detailed but still nice high-level discussion of Echo State Networks. Section 10.7 discusses the vanishing (and exploding) gradient problem and the difficulties of learning long-term dependencies. Section 10.8 is all about Echo State Networks. It specifically goes into detail about why it's crucial to select reservoir weights that have an appropriate spectral radius value - it works together with the nonlinear activation units to encourage stability while still propagating information through time.
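A minimal, self-contained sketch of the training procedure described above. All sizes, scalings and the toy task (one-step-ahead prediction of a sine wave) are illustrative assumptions, not values from the cited papers:

```python
import math, random
random.seed(42)

# Minimal ESN sketch: fixed random input and reservoir weights, tanh units,
# and a readout trained by ridge regression on recorded reservoir states.
n_res, T_total, washout = 30, 250, 50
W_in = [random.uniform(-0.1, 0.1) for _ in range(n_res)]
W = [[random.uniform(-0.4, 0.4) if random.random() < 0.2 else 0.0
      for _ in range(n_res)] for _ in range(n_res)]   # sparse reservoir

u = [math.sin(0.2 * t) for t in range(T_total + 1)]
x = [0.0] * n_res
M, y = [], []                      # recorded states and teacher outputs
for t in range(T_total):
    x = [math.tanh(W_in[i] * u[t] + sum(W[i][j] * x[j] for j in range(n_res)))
         for i in range(n_res)]
    if t >= washout:               # discard the initial transient
        M.append(list(x))
        y.append(u[t + 1])         # teacher signal: the next input value

# Ridge regression for the readout: solve (M'M + lam*I) w = M'y.
lam = 1e-6
A = [[sum(m[a] * m[b] for m in M) + (lam if a == b else 0.0)
      for b in range(n_res)] for a in range(n_res)]
rhs = [sum(m[a] * yi for m, yi in zip(M, y)) for a in range(n_res)]

def solve(A, b):                   # Gaussian elimination, partial pivoting
    n = len(A)
    S = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(S[r][c]))
        S[c], S[p] = S[p], S[c]
        for r in range(c + 1, n):
            f = S[r][c] / S[c][c]
            for k in range(c, n + 1):
                S[r][k] -= f * S[c][k]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):
        w[r] = (S[r][n] - sum(S[r][k] * w[k] for k in range(r + 1, n))) / S[r][r]
    return w

w_out = solve(A, rhs)
pred = [sum(wi * mi for wi, mi in zip(w_out, m)) for m in M]
mse = sum((p - yi) ** 2 for p, yi in zip(pred, y)) / len(y)
print("train MSE %.2e" % mse)
```

Only `w_out` is learned; `W_in` and `W` stay at their random initial values, which is the point of the approach.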
|
|
14,766
|
What is an intuitive explanation of Echo State Networks?
|
Learning in an ESN is not primarily about adapting all the weights; rather, the output layer learns which output to produce for the current state of the network. The internal state is based on the network dynamics and is called the dynamic reservoir state. To understand how the reservoir states take shape, we need to look at the topology of an ESN.
The input unit(s) are connected to the neurons in the internal units (reservoir units); these weights are randomly initialized. The reservoir units are randomly and sparsely connected and likewise have random weights. The output unit is also connected to all reservoir units, and thus receives the reservoir state and produces a corresponding output.
The input activation drives the network dynamics. The signal propagates for $t$ timesteps through the recurrently connected reservoir units. You can imagine it as an echo reoccurring $t$ times in the net (becoming distorted along the way).
The only weights which get adapted are the weights to the output unit. This means, that the output layer learns which output has to belong to a given reservoir state. That also means the training becomes a linear regression task.
Before we can explain how the training works in detail we have to explain and define some things:
Teacher forcing means feeding the time series input into the network as well as the corresponding desired output (time delayed). Feeding the desired output of $T$ at $t$ back is called output feedback. We therefore need some randomly initialized weights stored in the matrix $W_{fb}$. In figure 1 those edges are displayed with dotted arrows.
Variable definitions:
$r$ = number of reservoir units,
$o$ = number of output units,
$t$ = number of timesteps,
$T$ = matrix (of size $t$ x $o$) which contains the desired output for each timestep.
Finally, how does the training work in detail?
Record reservoir states for $t$ time steps while applying teacher forcing. The output is a matrix $M$ of ($t$ x $r$) reservoir states.
Determine the output weight matrix $W_{out}$ which contains the final output weights. It can be calculated using any regression technique, e.g. using the pseudoinverse. This means: look at the reservoir states and find a function to map them, multiplied with the output weights, to the output. Mathematically: approximate $M \bullet W_{out} = T \Rightarrow W_{out} = M^{+} \bullet T$, where $M^{+}$ is the Moore-Penrose pseudoinverse of $M$.
Because learning is very fast we can try out many network topologies to get one which fits well.
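The readout step (solving $M \bullet W_{out} \approx T$ in the least-squares sense, i.e. applying the pseudoinverse of $M$) can be illustrated with a tiny hypothetical example: $r=2$ reservoir units, $t=4$ timesteps, one output unit, solved via the normal equations $W_{out}=(M^\top M)^{-1}M^\top T$:

```python
# Hypothetical recorded reservoir states (t=4 timesteps, r=2 units) and
# desired outputs; T was built as 2*unit0 + 1*unit1, so the least-squares
# readout should recover the weights (2, 1).
M = [[1.0, 0.0],
     [0.5, 1.0],
     [0.0, 0.5],
     [1.0, 1.0]]
T = [2.0, 2.0, 0.5, 3.0]

# Normal equations: (M'M) w = M'T
MtM = [[sum(M[i][a] * M[i][b] for i in range(4)) for b in range(2)]
       for a in range(2)]
MtT = [sum(M[i][a] * T[i] for i in range(4)) for a in range(2)]

# Direct 2x2 solve by Cramer's rule
det = MtM[0][0] * MtM[1][1] - MtM[0][1] * MtM[1][0]
w0 = (MtT[0] * MtM[1][1] - MtT[1] * MtM[0][1]) / det
w1 = (MtT[1] * MtM[0][0] - MtT[0] * MtM[1][0]) / det
print(w0, w1)   # → 2.0 1.0
```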
To measure the performance of an ESN:
Run the Echo State Network further without teacher forcing (its own output is fed back into the ESN's dynamic reservoir via $W_{fb}$), and record performance, such as the squared error $\left|\left|M \bullet W_{out} - T\right|\right|^2$.
Spectral Radius and ESN
Some smart people have proven that the Echo State Property of an ESN may only be given if the spectral radius of the reservoir weight matrix is smaller than or equal to $1$. The Echo State Property means the system forgets its inputs after a limited amount of time. This property is necessary for an ESN not to explode in activity and to be able to learn.
|
|
14,767
|
Does a seasonal time series imply a stationary or a non stationary time series
|
Seasonality does not make your series non-stationary. The stationarity applies to the errors of your data generating process, e.g. $y_t=\sin(t)+\varepsilon_t$, where $\varepsilon_t\sim\mathcal{N}(0,\sigma^2)$ and $Cov[\varepsilon_s,\varepsilon_t]=\sigma^2 1_{\{s=t\}}$, is a stationary process, despite having a periodic wave in it, because the errors are stationary.
Seasonality does not make your process stationary either. Consider the same process but $\varepsilon_t\sim\mathcal{N}(0,t\sigma^2)$, in this case the error variance is non-stationary and seasonality has nothing to do with it.
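These two cases can be illustrated with a small simulation (parameter choices are illustrative): after removing the deterministic seasonal component, the first process has constant error variance while the second's error variance grows with $t$:

```python
import math, random
random.seed(0)

T = 20000
# Stationary errors: eps_t ~ N(0, 1); non-stationary: eps_t ~ N(0, t+1)
stat = [math.sin(t) + random.gauss(0, 1) for t in range(T)]
nonstat = [math.sin(t) + random.gauss(0, math.sqrt(t + 1)) for t in range(T)]

def var(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Remove the deterministic seasonal part, then compare early vs late windows
res_a = [stat[t] - math.sin(t) for t in range(T)]
res_b = [nonstat[t] - math.sin(t) for t in range(T)]
print("stationary errors:     %.2f vs %.2f" % (var(res_a[:5000]), var(res_a[-5000:])))
print("non-stationary errors: %.0f vs %.0f" % (var(res_b[:5000]), var(res_b[-5000:])))
```

The first line prints two values near 1; the second shows the late-window variance several times larger than the early one.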
|
|
14,768
|
Does a seasonal time series imply a stationary or a non stationary time series
|
IMHO, persistent seasonality is, by definition, a type of non-stationarity: the mean of a seasonal process varies with the season, $E[z(t \cdot s+j)] = f(j)$, where $s$ is the number of seasons, $j$ is a particular season ($j=1,...,s$), and $t$ is a specific period (typically a year). Thus, $E[y(t)] = E[\sin(t)+u(t)] = \sin(t)$ is not a stable mean, although it is deterministic: you could group observations with different means.
Luis
|
|
14,769
|
Does a seasonal time series imply a stationary or a non stationary time series
|
A seasonal pattern that remains stable over time does not make the series non-stationary. A non-stable seasonal pattern, for example a seasonal random walk, will make the data non-stationary.
Edit (after new answer and comments)
A stable seasonal pattern is not stationary in the sense that the mean of the series will vary across seasons and, hence, depends on time; but it is stationary in the sense that we can expect the same mean for the same month in different years.
A stable seasonal pattern may therefore fit in the concept of a cyclostationary process, i.e., a process with a periodic mean and a periodic autocorrelation function.
The above does not apply to a non-stable seasonal pattern.
|
|
14,770
|
Does a seasonal time series imply a stationary or a non stationary time series
|
I don't agree that seasonality is a type of non-stationarity, because the concept of stationarity in natural systems already incorporates the idea of fluctuation within an unchanging envelope of variability (Milly et al., 2008).
Speaking about hydrological time-series, even though they are stochastic (random process) and commonly have seasonality, that is, they contain wet and dry periods, they will always be stationary if the mean and the variance do not change over time.
So, ignoring the uncertainties of the effects of climate change, a hydrological time series should normally be stationary even though it still has seasonality.
This is why stationarity is a widely accepted concept for civil engineering design, and because of this concept hydrologists are able to calculate the recurrence time of floods, for example.
Link for Milly et al. (2008): "Stationarity Is Dead: Whither Water Management?"
|
|
14,771
|
Effective Sample Size for posterior inference from MCMC sampling
|
The question you are asking is different from "convergence diagnostics". Let's say you have run all convergence diagnostics (choose your favorites), and are now ready to start sampling from the posterior.
There are two options in terms of effective sample size(ESS), you can choose a univariate ESS or a multivariate ESS. A univariate ESS will provide an effective sample size for each parameter separately, and conservative methods dictate, you choose the smallest estimate. This method ignores all cross-correlations across components. This is probably what most people have been using for a while
Recently, a multivariate definition of ESS was introduced. The multivariate ESS returns one number for the effective sample size for the quantities you want to estimate; and it does so by accounting for all the cross-correlations in the process. Personally, I far prefer multivariate ESS. Suppose you are interested in the $p$-vector of means of the posterior distribution. The mESS is defined as follows
$$\text{mESS} = n \left(\dfrac{|\Lambda|}{|\Sigma|}\right)^{1/p}. $$
Here
$\Lambda$ is the covariance structure of the posterior (also the asymptotic covariance in the CLT if you had independent samples)
$\Sigma$ is the asymptotic covariance matrix in the Markov chain CLT (different from $\Lambda$ since samples are correlated).
$p$ is the number of quantities being estimated (or, in this case, the dimension of the posterior).
$|\cdot|$ is the determinant.
mESS can be estimated by using the sample covariance matrix to estimate $\Lambda$ and the batch means covariance matrix to estimate $\Sigma$. This has been coded in the function multiESS in the R package mcmcse.
This recent paper provides a theoretically valid lower bound of the number of effective samples required. Before simulation, you need to decide
$\epsilon$: the precision. $\epsilon$ is the fraction of error you want the Monte Carlo to be in comparison to the posterior error. This is similar to the margin of error idea when doing sample size calculations in the classical setting.
$\alpha$: the level for constructing confidence intervals.
$p$: the number of quantities you are estimating.
With these three quantities, you will know how many effective samples you require. The paper asks to stop simulation the first time
$$ \text{mESS} \geq \dfrac{2^{2/p} \pi}{(p \Gamma(p/2))^{2/p}} \dfrac{\chi^2_{1-\alpha, p}}{\epsilon^2},$$
where $\Gamma(\cdot)$ is the gamma function. This lower bound can be calculated by using minESS in the R package mcmcse.
So now suppose you have $p = 20$ parameters in the posterior, and you want $95\%$ confidence in your estimate, and you want the Monte Carlo error to be 5% ($\epsilon = .05$) of the posterior error, you will need
> minESS(p = 20, alpha = .05, eps = .05)
[1] 8716
This is true for any problem (under regularity conditions). The way this method adapts from problem to problem is that slowly mixing Markov chains take longer to reach that lower bound, since mESS will be smaller. So now you can check a couple of times using multiESS whether your Markov chain has reached that bound; if not go and grab more samples.
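The lower bound above is a closed-form expression, so it can also be evaluated without the R package. Here is a minimal Python sketch (my illustration, assuming scipy is available; the original answer uses minESS from mcmcse):

```python
from math import pi, gamma
from scipy.stats import chi2

def min_ess(p, alpha=0.05, eps=0.05):
    """Minimum effective sample size so that the Monte Carlo error is at
    most an eps-fraction of the posterior error, with (1 - alpha)
    confidence, when estimating p quantities."""
    crit = chi2.ppf(1 - alpha, df=p)                       # chi^2_{1-alpha, p}
    lead = 2 ** (2 / p) * pi / (p * gamma(p / 2)) ** (2 / p)
    return lead * crit / eps ** 2

print(round(min_ess(20)))  # about 8716, matching the minESS call above
```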
|
Effective Sample Size for posterior inference from MCMC sampling
|
The question you are asking is different from "convergence diagnostics". Let's say you have run all convergence diagnostics (choose your favorite(s)) and are now ready to start sampling from the poster
|
Effective Sample Size for posterior inference from MCMC sampling
The question you are asking is different from "convergence diagnostics". Let's say you have run all convergence diagnostics (choose your favorite(s)) and are now ready to start sampling from the posterior.
There are two options in terms of effective sample size (ESS): you can choose a univariate ESS or a multivariate ESS. A univariate ESS provides an effective sample size for each parameter separately, and conservative practice dictates that you choose the smallest estimate. This method ignores all cross-correlations across components. This is probably what most people have been using for a while.
Recently, a multivariate definition of ESS was introduced. The multivariate ESS returns one number for the effective sample size for the quantities you want to estimate; and it does so by accounting for all the cross-correlations in the process. Personally, I far prefer multivariate ESS. Suppose you are interested in the $p$-vector of means of the posterior distribution. The mESS is defined as follows
$$\text{mESS} = n \left(\dfrac{|\Lambda|}{|\Sigma|}\right)^{1/p}. $$
Here
$\Lambda$ is the covariance structure of the posterior (also the asymptotic covariance in the CLT if you had independent samples)
$\Sigma$ is the asymptotic covariance matrix in the Markov chain CLT (different from $\Lambda$ since samples are correlated).
$p$ is the number of quantities being estimated (or, in this case, the dimension of the posterior).
$|\cdot|$ is the determinant.
mESS can be estimated by using the sample covariance matrix to estimate $\Lambda$ and the batch means covariance matrix to estimate $\Sigma$. This has been coded in the function multiESS in the R package mcmcse.
This recent paper provides a theoretically valid lower bound of the number of effective samples required. Before simulation, you need to decide
$\epsilon$: the precision. $\epsilon$ is the fraction of error you want the Monte Carlo to be in comparison to the posterior error. This is similar to the margin of error idea when doing sample size calculations in the classical setting.
$\alpha$: the level for constructing confidence intervals.
$p$: the number of quantities you are estimating.
With these three quantities, you will know how many effective samples you require. The paper asks to stop simulation the first time
$$ \text{mESS} \geq \dfrac{2^{2/p} \pi}{(p \Gamma(p/2))^{2/p}} \dfrac{\chi^2_{1-\alpha, p}}{\epsilon^2},$$
where $\Gamma(\cdot)$ is the gamma function. This lower bound can be calculated by using minESS in the R package mcmcse.
So now suppose you have $p = 20$ parameters in the posterior, and you want $95\%$ confidence in your estimate, and you want the Monte Carlo error to be 5% ($\epsilon = .05$) of the posterior error, you will need
> minESS(p = 20, alpha = .05, eps = .05)
[1] 8716
This is true for any problem (under regularity conditions). The way this method adapts from problem to problem is that slowly mixing Markov chains take longer to reach that lower bound, since mESS will be smaller. So now you can check a couple of times using multiESS whether your Markov chain has reached that bound; if not go and grab more samples.
|
Effective Sample Size for posterior inference from MCMC sampling
The question you are asking is different from "convergence diagnostics". Let's say you have run all convergence diagnostics (choose your favorite(s)) and are now ready to start sampling from the poster
|
14,772
|
Effective Sample Size for posterior inference from MCMC sampling
|
The convergence depends on several things: the number of parameters, the model itself, the sampling algorithm, the data ...
I would suggest avoiding any general rule and instead employing a couple of convergence diagnostic tools to detect appropriate burn-in and thinning numbers of iterations in each specific example. See also http://www.johnmyleswhite.com/notebook/2010/08/29/mcmc-diagnostics-in-r-with-the-coda-package/ and http://users.stat.umn.edu/~geyer/mcmc/diag.html.
|
Effective Sample Size for posterior inference from MCMC sampling
|
The convergence depends on several things: the number of parameters, the model itself, the sampling algorithm, the data ...
I would suggest avoiding any general rule and instead employing a couple of converg
|
Effective Sample Size for posterior inference from MCMC sampling
The convergence depends on several things: the number of parameters, the model itself, the sampling algorithm, the data ...
I would suggest avoiding any general rule and instead employing a couple of convergence diagnostic tools to detect appropriate burn-in and thinning numbers of iterations in each specific example. See also http://www.johnmyleswhite.com/notebook/2010/08/29/mcmc-diagnostics-in-r-with-the-coda-package/ and http://users.stat.umn.edu/~geyer/mcmc/diag.html.
|
Effective Sample Size for posterior inference from MCMC sampling
The convergence depends on several things: the number of parameters, the model itself, the sampling algorithm, the data ...
I would suggest avoiding any general rule and instead employing a couple of converg
|
14,773
|
Two-dimensional KolmogorovβSmirnov
|
Python implementation
I have written a Python implementation using NumPy. You can find the code here; more information is in the docstring in the code.
And here's another one (not by me). This notebook provides a Python implementation of the 2D K-S test for two samples. The .py file can be downloaded here. The code seems to be a straight translation of C code, so efficiency might be a problem if the sample size is large.
However, you should check the code (whichever one you use) against the original papers/books before using it. The Python implementations of the 2D KS test are far less tested than the ones in R.
More information
The algorithm was first developed in two papers (as far as I can tell):
Peacock, J.A. 1983, Two-Dimensional Goodness-of-Fit Testing in Astronomy
Fasano, G. and Franceschini, A. 1987, A Multidimensional Version of the Kolmogorov-Smirnov Test.
A nice introduction and the C implementation can be found in
Press, W.H. et al. 1992, Numerical Recipes in C, Section 14.7, p645.
You can find C++/Fortran implementation in other versions of the book.
Here's a post titled Beware the Kolmogorov-Smirnov test that is also related to the subject; you may want to have a look. It encourages using a resampling method to evaluate the p-value for a given KS distance.
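For illustration only (this is my own sketch, not one of the implementations linked above), the two-sample statistic in the spirit of Fasano and Franceschini can be written in a few lines of NumPy: take each data point as an origin and maximise the difference between the two samples' quadrant fractions.

```python
import numpy as np

def ks2d_statistic(a, b):
    """Two-sample 2D KS statistic (Fasano & Franceschini style sketch).

    For every point of the pooled sample taken as an origin, compare the
    fraction of each sample falling in each of the four open quadrants,
    and return the largest absolute difference. O(n^2), unoptimised.
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    d = 0.0
    for x0, y0 in np.vstack([a, b]):
        for sx, sy in [(1, 1), (1, -1), (-1, 1), (-1, -1)]:
            fa = np.mean((sx * (a[:, 0] - x0) > 0) & (sy * (a[:, 1] - y0) > 0))
            fb = np.mean((sx * (b[:, 0] - x0) > 0) & (sy * (b[:, 1] - y0) > 0))
            d = max(d, abs(fa - fb))
    return d
```

Identical samples give a statistic of 0 and completely separated samples give 1; a p-value would then come from the approximations in the papers above or, as the linked post suggests, from resampling.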
|
Two-dimensional KolmogorovβSmirnov
|
Python implementation
I have written a Python implementation using NumPy. You can find the code here; more information is in the docstring in the code.
And here's another one (not by me). Thi
|
Two-dimensional KolmogorovβSmirnov
Python implementation
I have written a Python implementation using NumPy. You can find the code here; more information is in the docstring in the code.
And here's another one (not by me). This notebook provides a Python implementation of the 2D K-S test for two samples. The .py file can be downloaded here. The code seems to be a straight translation of C code, so efficiency might be a problem if the sample size is large.
However, you should check the code (whichever one you use) against the original papers/books before using it. The Python implementations of the 2D KS test are far less tested than the ones in R.
More information
The algorithm was first developed in two papers (as far as I can tell):
Peacock, J.A. 1983, Two-Dimensional Goodness-of-Fit Testing in Astronomy
Fasano, G. and Franceschini, A. 1987, A Multidimensional Version of the Kolmogorov-Smirnov Test.
A nice introduction and the C implementation can be found in
Press, W.H. et al. 1992, Numerical Recipes in C, Section 14.7, p645.
You can find C++/Fortran implementation in other versions of the book.
Here's a post titled Beware the Kolmogorov-Smirnov test that is also related to the subject; you may want to have a look. It encourages using a resampling method to evaluate the p-value for a given KS distance.
|
Two-dimensional KolmogorovβSmirnov
Python implementation
I have written a Python implementation using NumPy. You can find the code here; more information is in the docstring in the code.
And here's another one (not by me). Thi
|
14,774
|
Two-dimensional KolmogorovβSmirnov
|
A two-dimensional extension of the Kolmogorov-Smirnov test has been described by Justel, Pena and Zamar in "A multivariate Kolmogorov-Smirnov test of goodness of fit". @Procrastinator's comments suggest there may be other such proposals.
However, I haven't seen a package with a straightforward implementation.
Depending on what you want to do, kde.test() in Tarn Duong's ks package for R might be more useful.
|
Two-dimensional KolmogorovβSmirnov
|
A two-dimensional extension of the Kolmogorov-Smirnov test has been described by Justel, Pena and Zamar in "A multivariate Kolmogorov-Smirnov test of goodness of fit". @Procrastinator's comments sug
|
Two-dimensional KolmogorovβSmirnov
A two-dimensional extension of the Kolmogorov-Smirnov test has been described by Justel, Pena and Zamar in "A multivariate Kolmogorov-Smirnov test of goodness of fit". @Procrastinator's comments suggest there may be other such proposals.
However, I haven't seen a package with a straightforward implementation.
Depending on what you want to do, kde.test() in Tarn Duong's ks package for R might be more useful.
|
Two-dimensional KolmogorovβSmirnov
A two-dimensional extension of the Kolmogorov-Smirnov test has been described by Justel, Pena and Zamar in "A multivariate Kolmogorov-Smirnov test of goodness of fit". @Procrastinator's comments sug
|
14,775
|
Two-dimensional KolmogorovβSmirnov
|
You may find this Matlab code to be useful.
http://www.mathworks.com/matlabcentral/fileexchange/38617-two-dimensional-2d-paired-kolmogorov-smirnov-test
|
Two-dimensional KolmogorovβSmirnov
|
|
Two-dimensional KolmogorovβSmirnov
You may find this Matlab code to be useful.
http://www.mathworks.com/matlabcentral/fileexchange/38617-two-dimensional-2d-paired-kolmogorov-smirnov-test
|
Two-dimensional KolmogorovβSmirnov
|
14,776
|
What are the sharpest known tail bounds for $\chi_k^2$ distributed variables?
|
The sharpest bound I know is that of Massart and Laurent, Lemma 1, p. 1325.
A corollary of their bound is:
$$P(X-k\geq 2\sqrt{kx}+2x)\leq \exp (-x) $$
$$P(k-X\geq 2\sqrt{kx})\leq \exp (-x) $$
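Since these inequalities are proven in the paper, the exact chi-squared tail probabilities should never exceed them; a quick numerical sanity check (my addition, using scipy):

```python
import numpy as np
from scipy.stats import chi2

def upper_bound_holds(k, x):
    """Check P(X - k >= 2*sqrt(k*x) + 2*x) <= exp(-x) for X ~ chi^2_k."""
    return chi2.sf(k + 2 * np.sqrt(k * x) + 2 * x, df=k) <= np.exp(-x)

def lower_bound_holds(k, x):
    """Check P(k - X >= 2*sqrt(k*x)) <= exp(-x) for X ~ chi^2_k."""
    return chi2.cdf(k - 2 * np.sqrt(k * x), df=k) <= np.exp(-x)

# The bounds hold for every degree of freedom k and every x > 0
assert all(upper_bound_holds(k, x) and lower_bound_holds(k, x)
           for k in (1, 5, 20, 100) for x in (0.1, 1.0, 5.0, 20.0))
```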
|
What are the sharpest known tail bounds for $\chi_k^2$ distributed variables?
|
The sharpest bound I know is that of Massart and Laurent, Lemma 1, p. 1325.
A corollary of their bound is:
$$P(X-k\geq 2\sqrt{kx}+2x)\leq \exp (-x) $$
$$P(k-X\geq 2\sqrt{kx})\leq \exp (-x) $$
|
What are the sharpest known tail bounds for $\chi_k^2$ distributed variables?
The sharpest bound I know is that of Massart and Laurent, Lemma 1, p. 1325.
A corollary of their bound is:
$$P(X-k\geq 2\sqrt{kx}+2x)\leq \exp (-x) $$
$$P(k-X\geq 2\sqrt{kx})\leq \exp (-x) $$
|
What are the sharpest known tail bounds for $\chi_k^2$ distributed variables?
The sharpest bound I know is that of Massart and Laurent, Lemma 1, p. 1325.
A corollary of their bound is:
$$P(X-k\geq 2\sqrt{kx}+2x)\leq \exp (-x) $$
$$P(k-X\geq 2\sqrt{kx})\leq \exp (-x) $$
|
14,777
|
Mathematical/Algorithmic definition for overfitting
|
Yes, there is a (slightly more) rigorous definition:
Given a model with a set of parameters, the model can be said to be overfitting the data if, after a certain number of training steps, the training error continues to decrease while the out-of-sample (test) error starts increasing.
In this example, the out-of-sample (test/validation) error first decreases in sync with the training error, then starts increasing around the 90th epoch; that is when overfitting starts.
Another way to look at it is in terms of bias and variance. The out of sample error for a model can be decomposed into two components:
Bias: Error due to the expected value from the estimated model being different from the expected value of the true model.
Variance: Error due to the model being sensitive to small fluctuations in the data set.
Overfitting occurs when the bias is low, but the variance is high.
For a data set $X$ where the true (unknown) model is:
$ Y = f(X) + \epsilon $ - $\epsilon$ being the irreducible noise in the data set, with $E(\epsilon)=0$ and $Var(\epsilon) = \sigma_{\epsilon}$,
and the estimated model is:
$ \hat{Y} = \hat{f}(X)$,
then the test error (for a test data point $x_t$) can be written as:
$Err(x_t) = \sigma_{\epsilon} + Bias^2 + Variance$
with
$Bias^2 = E[f(x_t)- \hat{f}(x_t)]^2$
and
$Variance = E[\hat{f}(x_t)- E[\hat{f}(x_t)]]^2$
(Strictly speaking this decomposition applies in the regression case, but a similar decomposition works for any loss function, i.e. in the classification case as well).
Both of the above definitions are tied to model complexity (measured in terms of the number of parameters in the model): the higher the complexity of the model, the more likely overfitting is to occur.
See chapter 7 of Elements of Statistical Learning for a rigorous mathematical treatment of the topic.
Bias-Variance tradeoff and Variance (i.e. overfitting) increasing with model complexity. Taken from ESL Chapter 7
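As a small illustration of the definition (my sketch, not taken from ESL), polynomial degree can stand in for model complexity: with nested least-squares fits the training error can only decrease as the degree grows, while the error on a fresh draw of the noise does not follow it down.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8
x = np.linspace(0.0, 1.0, n)
truth = np.sin(2 * np.pi * x)
y_train = truth + rng.normal(scale=0.3, size=n)
y_test = truth + rng.normal(scale=0.3, size=n)   # same x, fresh noise

def errors(degree):
    coef = np.polyfit(x, y_train, degree)        # fit on training data only
    pred = np.polyval(coef, x)
    return np.mean((pred - y_train) ** 2), np.mean((pred - y_test) ** 2)

train_err, test_err = zip(*(errors(d) for d in range(1, n)))
# At degree n - 1 the polynomial interpolates the training points exactly:
# essentially zero training error, while the test error stays at the noise
# level -- low bias, high variance, i.e. overfitting.
```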
|
Mathematical/Algorithmic definition for overfitting
|
Yes there is a (slightly more) rigorous definition:
Given a model with a set of parameters, the model can be said to be overfitting the data if after a certain number of training steps, the training
|
Mathematical/Algorithmic definition for overfitting
Yes, there is a (slightly more) rigorous definition:
Given a model with a set of parameters, the model can be said to be overfitting the data if, after a certain number of training steps, the training error continues to decrease while the out-of-sample (test) error starts increasing.
In this example, the out-of-sample (test/validation) error first decreases in sync with the training error, then starts increasing around the 90th epoch; that is when overfitting starts.
Another way to look at it is in terms of bias and variance. The out of sample error for a model can be decomposed into two components:
Bias: Error due to the expected value from the estimated model being different from the expected value of the true model.
Variance: Error due to the model being sensitive to small fluctuations in the data set.
Overfitting occurs when the bias is low, but the variance is high.
For a data set $X$ where the true (unknown) model is:
$ Y = f(X) + \epsilon $ - $\epsilon$ being the irreducible noise in the data set, with $E(\epsilon)=0$ and $Var(\epsilon) = \sigma_{\epsilon}$,
and the estimated model is:
$ \hat{Y} = \hat{f}(X)$,
then the test error (for a test data point $x_t$) can be written as:
$Err(x_t) = \sigma_{\epsilon} + Bias^2 + Variance$
with
$Bias^2 = E[f(x_t)- \hat{f}(x_t)]^2$
and
$Variance = E[\hat{f}(x_t)- E[\hat{f}(x_t)]]^2$
(Strictly speaking this decomposition applies in the regression case, but a similar decomposition works for any loss function, i.e. in the classification case as well).
Both of the above definitions are tied to model complexity (measured in terms of the number of parameters in the model): the higher the complexity of the model, the more likely overfitting is to occur.
See chapter 7 of Elements of Statistical Learning for a rigorous mathematical treatment of the topic.
Bias-Variance tradeoff and Variance (i.e. overfitting) increasing with model complexity. Taken from ESL Chapter 7
|
Mathematical/Algorithmic definition for overfitting
Yes there is a (slightly more) rigorous definition:
Given a model with a set of parameters, the model can be said to be overfitting the data if after a certain number of training steps, the training
|
14,778
|
What is inverse CDF Normal Distribution Formula
|
There's no closed form expression for the inverse cdf of a normal (a.k.a. the quantile function of a normal). It looks like this:
There are various ways to express the function (e.g. as an infinite series or as a continued fraction), and numerous approximations (which is how computers are able to "calculate" it).
Reasonably accurate approximations are tedious to write and not especially enlightening (except in so far as the general forms convey a little insight into common ways of approximating functions you can't easily obtain in closed form).
If you regard $\text{erf}^{-1}$ or $\text{erf}$ itself
as a special function, then it could be written in terms of one of those, but one could as well call $\Phi^{-1}$ a special function and be done in one step.
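The relation mentioned in the last paragraph is $\Phi^{-1}(p) = \sqrt{2}\,\operatorname{erf}^{-1}(2p-1)$; a small Python sketch (my addition, using scipy's erfinv as the special function):

```python
import math
from scipy.special import erfinv
from scipy.stats import norm

def normal_quantile(p, mu=0.0, sigma=1.0):
    """Inverse CDF of N(mu, sigma^2) via Phi^{-1}(p) = sqrt(2)*erfinv(2p - 1)."""
    return mu + sigma * math.sqrt(2.0) * erfinv(2.0 * p - 1.0)

# Agrees with scipy's own (approximation-based) implementation:
for p in (0.025, 0.5, 0.975):
    assert abs(normal_quantile(p) - norm.ppf(p)) < 1e-12
```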
|
What is inverse CDF Normal Distribution Formula
|
There's no closed form expression for the inverse cdf of a normal (a.k.a. the quantile function of a normal). It looks like this:
There are various ways to express the function (e.g. as an infinite s
|
What is inverse CDF Normal Distribution Formula
There's no closed form expression for the inverse cdf of a normal (a.k.a. the quantile function of a normal). It looks like this:
There are various ways to express the function (e.g. as an infinite series or as a continued fraction), and numerous approximations (which is how computers are able to "calculate" it).
Reasonably accurate approximations are tedious to write and not especially enlightening (except in so far as the general forms convey a little insight into common ways of approximating functions you can't easily obtain in closed form).
If you regard $\text{erf}^{-1}$ or $\text{erf}$ itself
as a special function, then it could be written in terms of one of those, but one could as well call $\Phi^{-1}$ a special function and be done in one step.
|
What is inverse CDF Normal Distribution Formula
There's no closed form expression for the inverse cdf of a normal (a.k.a. the quantile function of a normal). It looks like this:
There are various ways to express the function (e.g. as an infinite s
|
14,779
|
Can one-sided confidence intervals have 95% coverage
|
Yes we can construct one sided confidence intervals with 95% coverage.
The two-sided confidence interval corresponds to the critical values in a two-tailed hypothesis test; the same applies to one-sided confidence intervals and one-tailed hypothesis tests.
For example, if you have data with sample statistics $\bar{x}=7$, $s=4$ from a sample size $n=40$
The two-sided 95% confidence interval for the mean is $7 \pm 1.96\frac{4}{\sqrt{40}} = (5.76,8.24)$
If we were doing a hypothesis test for $\mu = \mu_0$ then the null hypothesis would be rejected if we were using a value of $\mu_0$ which is $\mu_0>8.24$ or $\mu_0 < 5.76$
Constructing one-sided 95% confidence intervals
In the above confidence interval we get 95% coverage, with 47.5% of the population above the mean and 47.5% below the mean. In a one-sided interval we can get 95% coverage with 50% below the mean and 45% above the mean.
For a standard normal distribution, the value which corresponds to 50% below the mean is $-\infty$. The value with 45% of the population between it and the mean is $1.64$; you can check this in any Z table. Using the above example, we get that the upper limit of the confidence interval is $7+1.64 \frac{4}{\sqrt{40}} = 8.04$.
The one-sided confidence interval is therefore $(-\infty,8.04)$
If we were doing a hypothesis test for $\mu<\mu_0$ then we would reject the null hypothesis if we were considering a value of $\mu_0$ that is larger than $8.04$
Two sided interval for a one sided test
When you construct a two-sided 95% confidence interval $(a,b)$ you have 2.5% of the population which is below $a$ and 2.5% of the population is above $b$ (hence 5% of the population is outside the interval).
You could use this for a one-sided test, if you want to test the hypothesis that $\mu>\mu_0$ then check if $\mu_0<a$. If $\mu_0<a$ then you reject the hypothesis $\mu>\mu_0$ with a significance of 2.5%.
Do not use this to test both $\mu>\mu_0$ and $\mu<\mu_0$. You have to decide, before you look at the data, which hypothesis you are going to test. If you don't decide beforehand, then you are introducing a bias and your significance will only be 5%.
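The numbers in this answer can be reproduced directly; a small Python sketch (my addition, using scipy.stats.norm instead of Z tables):

```python
import math
from scipy.stats import norm

xbar, s, n = 7.0, 4.0, 40
se = s / math.sqrt(n)

# Two-sided 95% interval: 2.5% in each tail
z2 = norm.ppf(0.975)                  # about 1.96
two_sided = (xbar - z2 * se, xbar + z2 * se)

# One-sided 95% interval: the whole 5% in the upper tail
z1 = norm.ppf(0.95)                   # about 1.645 (rounded to 1.64 above)
one_sided = (-math.inf, xbar + z1 * se)

print(two_sided)   # about (5.76, 8.24)
print(one_sided)   # about (-inf, 8.04)
```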
|
Can one-sided confidence intervals have 95% coverage
|
Yes we can construct one sided confidence intervals with 95% coverage.
The two sided confidence interval corresponds to the critical values in a two-tailed hypothesis test, the same applies to one si
|
Can one-sided confidence intervals have 95% coverage
Yes we can construct one sided confidence intervals with 95% coverage.
The two-sided confidence interval corresponds to the critical values in a two-tailed hypothesis test; the same applies to one-sided confidence intervals and one-tailed hypothesis tests.
For example, if you have data with sample statistics $\bar{x}=7$, $s=4$ from a sample size $n=40$
The two-sided 95% confidence interval for the mean is $7 \pm 1.96\frac{4}{\sqrt{40}} = (5.76,8.24)$
If we were doing a hypothesis test for $\mu = \mu_0$ then the null hypothesis would be rejected if we were using a value of $\mu_0$ which is $\mu_0>8.24$ or $\mu_0 < 5.76$
Constructing one-sided 95% confidence intervals
In the above confidence interval we get 95% coverage, with 47.5% of the population above the mean and 47.5% below the mean. In a one-sided interval we can get 95% coverage with 50% below the mean and 45% above the mean.
For a standard normal distribution, the value which corresponds to 50% below the mean is $-\infty$. The value with 45% of the population between it and the mean is $1.64$; you can check this in any Z table. Using the above example, we get that the upper limit of the confidence interval is $7+1.64 \frac{4}{\sqrt{40}} = 8.04$.
The one-sided confidence interval is therefore $(-\infty,8.04)$
If we were doing a hypothesis test for $\mu<\mu_0$ then we would reject the null hypothesis if we were considering a value of $\mu_0$ that is larger than $8.04$
Two sided interval for a one sided test
When you construct a two-sided 95% confidence interval $(a,b)$ you have 2.5% of the population which is below $a$ and 2.5% of the population is above $b$ (hence 5% of the population is outside the interval).
You could use this for a one-sided test, if you want to test the hypothesis that $\mu>\mu_0$ then check if $\mu_0<a$. If $\mu_0<a$ then you reject the hypothesis $\mu>\mu_0$ with a significance of 2.5%.
Do not use this to test both $\mu>\mu_0$ and $\mu<\mu_0$. You have to decide, before you look at the data, which hypothesis you are going to test. If you don't decide beforehand, then you are introducing a bias and your significance will only be 5%.
|
Can one-sided confidence intervals have 95% coverage
Yes we can construct one sided confidence intervals with 95% coverage.
The two sided confidence interval corresponds to the critical values in a two-tailed hypothesis test, the same applies to one si
|
14,780
|
What is the VC dimension of a decision tree?
|
I'm not sure this is a question with a simple answer, nor do I believe it is a question that even needs to be asked about decision trees.
Consult Aslan et al., Calculating the VC-Dimension of Trees (2009). They address this problem by doing an exhaustive search, in small trees, and then providing an approximate, recursive formula for estimating the VC dimension on larger trees. They then use this formula as part of a pruning algorithm. Had there been a closed-form answer to your question, I am sure they would have supplied it. They felt the need to iterate their way through even fairly small trees.
My two cents worth. I'm not sure that it's meaningful to talk about the VC dimension for decision trees. Consider a $d$ dimensional response, where each item is a binary outcome. This is the situation considered by Aslan et al. There are $2^d$ possible outcomes in this sample space and $2^d$ possible response patterns. If I build a complete tree, with $d$ levels and $2^d$ leaves, then I can shatter any pattern of $2^d$ responses. But nobody fits complete trees. Typically, you overfit and then prune back using cross-validation. What you get at the end is a smaller and simpler tree, but your hypothesis set is still large. Aslan et al. try to estimate the VC dimension of families of isomorphic trees. Each family is a hypothesis set with its own VC dimension.
The previous picture illustrates a tree for a space with $d=3$ that shatters 4 points: $(1,0,0,1),(1,1,1,0),(0,1,0,1), (1,1,0,1)$. The fourth entry is the "response". Aslan et al. would regard a tree with the same shape, but using $x1$ and $x2$, say, to be isomorphic and part of the same hypothesis set. So, although there are only 3 leaves on each of these trees, the set of such trees can shatter 4 points and the VC dimension is 4 in this case. However, the same tree could occur in a space with 4 variables, in which case the VC dimension would be 5. So it's complicated.
Aslan's brute force solution seems to work fairly well, but what they get isn't really the VC dimension of the algorithms people use, since these rely on pruning and cross-validation. It's hard to say what the hypothesis space actually is, since in principle, we start with a shattering number of possible trees, but then prune back to something more reasonable. Even if someone begins with an a priori choice not to go beyond two layers, say, there may still be a need to prune the tree. And we don't really need the VC dimension, since cross-validation goes after the out of sample error directly.
To be fair to Aslan et al., they don't use the VC dimension to characterize their hypothesis space. They calculate the VC dimension of branches and use that quantity to determine if the branch should be cut. At each stage, they use the VC dimension of the specific configuration of the branch under consideration. They don't look at the VC dimension of the problem as a whole.
If your variables are continuous and the response depends on reaching a threshold, then a decision tree is basically creating a bunch of perceptrons, so the VC dimension would presumably be greater than that (since you have to estimate the cutoff point to make the split). If the response depends monotonically on a continuous response, CART will chop it up into a bunch of steps, trying to recreate a regression model. I would not use trees in that case -- possibly a GAM or regression instead.
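The shattering argument for a complete tree can be checked empirically; here is a small sketch (my addition, using scikit-learn, which the answer itself does not use): a depth-unlimited tree fits every one of the $2^{2^d}$ labelings of the $2^d$ points of $\{0,1\}^d$ with zero training error.

```python
from itertools import product
from sklearn.tree import DecisionTreeClassifier

d = 3
X = list(product([0, 1], repeat=d))      # all 2^d binary input patterns

# Unpruned trees shatter this set: every possible labeling is fit perfectly.
for labels in product([0, 1], repeat=len(X)):
    tree = DecisionTreeClassifier().fit(X, labels)
    assert tree.score(X, labels) == 1.0
```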
|
What is the VC dimension of a decision tree?
|
I'm not sure this is a question with a simple answer, nor do I believe it is a question that even needs to be asked about decision trees.
Consult Aslan et al., Calculating the VC-Dimension of Trees (2
|
What is the VC dimension of a decision tree?
I'm not sure this is a question with a simple answer, nor do I believe it is a question that even needs to be asked about decision trees.
Consult Aslan et al., Calculating the VC-Dimension of Trees (2009). They address this problem by doing an exhaustive search, in small trees, and then providing an approximate, recursive formula for estimating the VC dimension on larger trees. They then use this formula as part of a pruning algorithm. Had there been a closed-form answer to your question, I am sure they would have supplied it. They felt the need to iterate their way through even fairly small trees.
My two cents worth. I'm not sure that it's meaningful to talk about the VC dimension for decision trees. Consider a $d$ dimensional response, where each item is a binary outcome. This is the situation considered by Aslan et al. There are $2^d$ possible outcomes in this sample space and $2^d$ possible response patterns. If I build a complete tree, with $d$ levels and $2^d$ leaves, then I can shatter any pattern of $2^d$ responses. But nobody fits complete trees. Typically, you overfit and then prune back using cross-validation. What you get at the end is a smaller and simpler tree, but your hypothesis set is still large. Aslan et al. try to estimate the VC dimension of families of isomorphic trees. Each family is a hypothesis set with its own VC dimension.
The previous picture illustrates a tree for a space with $d=3$ that shatters 4 points: $(1,0,0,1),(1,1,1,0),(0,1,0,1), (1,1,0,1)$. The fourth entry is the "response". Aslan et al. would regard a tree with the same shape, but using $x1$ and $x2$, say, to be isomorphic and part of the same hypothesis set. So, although there are only 3 leaves on each of these trees, the set of such trees can shatter 4 points and the VC dimension is 4 in this case. However, the same tree could occur in a space with 4 variables, in which case the VC dimension would be 5. So it's complicated.
Aslan's brute force solution seems to work fairly well, but what they get isn't really the VC dimension of the algorithms people use, since these rely on pruning and cross-validation. It's hard to say what the hypothesis space actually is, since in principle, we start with a shattering number of possible trees, but then prune back to something more reasonable. Even if someone begins with an a priori choice not to go beyond two layers, say, there may still be a need to prune the tree. And we don't really need the VC dimension, since cross-validation goes after the out-of-sample error directly.
To be fair to Aslan et al., they don't use the VC dimension to characterize their hypothesis space. They calculate the VC dimension of branches and use that quantity to determine if the branch should be cut. At each stage, they use the VC dimension of the specific configuration of the branch under consideration. They don't look at the VC dimension of the problem as a whole.
If your variables are continuous and the response depends on reaching a threshold, then a decision tree is basically creating a bunch of perceptrons, so the VC dimension would presumably be greater than that (since you have to estimate the cutoff point to make the split). If the response depends monotonically on a continuous response, CART will chop it up into a bunch of steps, trying to recreate a regression model. I would not use trees in that case -- possibly gam or regression.
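As a sanity check of the complete-tree claim above (a sketch of mine, not code from Aslan et al.): a complete tree with $2^d$ leaves routes each of the $2^d$ binary inputs to its own leaf, so fitting it amounts to memorizing one label per point, and every one of the $2^{2^d}$ labelings is realized.

```python
from itertools import product

d = 3
points = list(product([0, 1], repeat=d))  # all 2^d binary feature vectors

# A complete tree with 2^d leaves is a lookup table: each leaf stores
# the label of the single point routed to it, so any labeling is realized.
shattered = all(
    all(dict(zip(points, labels))[p] == y for p, y in zip(points, labels))
    for labels in product([0, 1], repeat=len(points))  # all 2^(2^d) labelings
)
print(shattered)  # True: the complete tree shatters all 2^d points
```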
|
What is the VC dimension of a decision tree?
I'm not sure this is a question with a simple answer, nor do I believe it is a question that even needs to be asked about decision trees.
Consult Aslan et al., Calculating the VC-Dimension of Trees (2
|
14,781
|
What is the VC dimension of a decision tree?
|
I know this post is kind of old and already has an accepted answer, but as it is the first link to appear on Google when asking about the VC dimension of decision trees, I will allow myself to give some new information as a follow up.
In a recent paper, Decision trees as partitioning machines to characterize their generalization properties by
Jean-Samuel Leboeuf, FrΓ©dΓ©ric LeBlanc and Mario Marchand, the authors consider the VC dimension of decision trees on examples of $\ell$ features (which is a generalization of your question which concerns only 2 dimensions). There, they show that the VC dimension of the class of a single split (AKA decision stumps) is given by the largest integer $d$ which satisfies
$2\ell \ge \binom{d}{\left\lfloor\frac{d}{2}\right\rfloor}$. The proof is quite complex and proceeds by reformulating the problem as a matching problem on graphs.
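As a quick illustration (my own sketch, not code from the paper), the largest $d$ satisfying $2\ell \ge \binom{d}{\lfloor d/2 \rfloor}$ can be found by direct search:

```python
from math import comb

def stump_vc_dim(num_features):
    """Largest d with 2 * num_features >= C(d, floor(d / 2))."""
    d = 1  # d = 1 always works since C(1, 0) = 1 <= 2 * num_features
    while 2 * num_features >= comb(d + 1, (d + 1) // 2):
        d += 1
    return d

print([stump_vc_dim(l) for l in (1, 2, 3, 4)])  # [2, 3, 4, 4]
```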
Furthermore, while an exact expression is still out of reach, they are able to give an upper bound on the growth function of general decision trees in a recursive fashion, from which they show that the VC dimension is of order $\mathcal{O}(L_T \log (\ell L_T))$, with $L_T$ the number of leaves of the tree. They also develop a new pruning algorithm based on their results, which seems to perform better in practice than CART's cost complexity pruning algorithm without the need for cross-validation, showing that the VC dimension of decision trees can be useful.
Disclaimer: I am one of the authors of the paper.
|
What is the VC dimension of a decision tree?
|
I know this post is kind of old and already has an accepted answer, but as it is the first link to appear on Google when asking about the VC dimension of decision trees, I will allow myself to give
|
What is the VC dimension of a decision tree?
I know this post is kind of old and already has an accepted answer, but as it is the first link to appear on Google when asking about the VC dimension of decision trees, I will allow myself to give some new information as a follow up.
In a recent paper, Decision trees as partitioning machines to characterize their generalization properties by
Jean-Samuel Leboeuf, FrΓ©dΓ©ric LeBlanc and Mario Marchand, the authors consider the VC dimension of decision trees on examples of $\ell$ features (which is a generalization of your question which concerns only 2 dimensions). There, they show that the VC dimension of the class of a single split (AKA decision stumps) is given by the largest integer $d$ which satisfies
$2\ell \ge \binom{d}{\left\lfloor\frac{d}{2}\right\rfloor}$. The proof is quite complex and proceeds by reformulating the problem as a matching problem on graphs.
Furthermore, while an exact expression is still out of reach, they are able to give an upper bound on the growth function of general decision trees in a recursive fashion, from which they show that the VC dimension is of order $\mathcal{O}(L_T \log (\ell L_T))$, with $L_T$ the number of leaves of the tree. They also develop a new pruning algorithm based on their results, which seems to perform better in practice than CART's cost complexity pruning algorithm without the need for cross-validation, showing that the VC dimension of decision trees can be useful.
Disclaimer: I am one of the authors of the paper.
|
What is the VC dimension of a decision tree?
I know this post is kind of old and already has an accepted answer, but as it is the first link to appear on Google when asking about the VC dimension of decision trees, I will allow myself to give
|
14,782
|
What are Bayesian p-values?
|
If I understand it correctly, then a Bayesian p-value is the comparison of some metric calculated from your observed data with the same metric calculated from your simulated data (being generated with parameters drawn from the posterior distribution).
In Gelman's words: "From a Bayesian context, a posterior p-value is the probability, given the data, that a future observation is more extreme (as measured by some test variable) than the data."
For example, the number of zeros generated from a Poisson-based model could be such a metric or test statistic, and you could calculate how many of your simulated datasets have a larger fraction of zeros than you actually observe in your real data. The closer this value is to 0.5, the better the values calculated from your simulated data distribute around the real observation.
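A minimal sketch of this recipe (the observed counts and the conjugate Gamma prior below are hypothetical, not from the answer): draw $\lambda$ from the posterior, simulate a replicated dataset, and count how often the replicate has at least as many zeros as the real data.

```python
import math
import random

rng = random.Random(42)

def poisson_draw(lam):
    # Knuth's method: multiply uniforms until the product drops below exp(-lam)
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

# Hypothetical observed counts with a visible excess of zeros
observed = [0, 0, 0, 0, 1, 0, 2, 0, 0, 3]
n = len(observed)
t_obs = observed.count(0)  # test statistic: number of zeros

# Conjugate Gamma(a, b) prior on lambda -> posterior Gamma(a + sum(x), b + n)
a, b = 1.0, 1.0
shape, rate = a + sum(observed), b + n

n_sims = 2000
hits = 0
for _ in range(n_sims):
    lam = rng.gammavariate(shape, 1.0 / rate)          # posterior draw of lambda
    replicate = [poisson_draw(lam) for _ in range(n)]  # posterior predictive data
    if replicate.count(0) >= t_obs:                    # at least as extreme?
        hits += 1

bayes_p = hits / n_sims  # posterior predictive p-value for the zero count
print(bayes_p)
```

A value near 0.5 would indicate the simulated zero counts straddle the observed one; a value near 0 or 1 flags a misfit in that direction.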
|
What are Bayesian p-values?
|
If I understand it correctly, then a Bayesian p-value is the comparison of some metric calculated from your observed data with the same metric calculated from your simulated data (being generated wi
|
What are Bayesian p-values?
If I understand it correctly, then a Bayesian p-value is the comparison of some metric calculated from your observed data with the same metric calculated from your simulated data (being generated with parameters drawn from the posterior distribution).
In Gelman's words: "From a Bayesian context, a posterior p-value is the probability, given the data, that a future observation is more extreme (as measured by some test variable) than the data."
For example, the number of zeros generated from a Poisson-based model could be such a metric or test statistic, and you could calculate how many of your simulated datasets have a larger fraction of zeros than you actually observe in your real data. The closer this value is to 0.5, the better the values calculated from your simulated data distribute around the real observation.
|
What are Bayesian p-values?
If I understand it correctly, then a Bayesian p-value is the comparison of some metric calculated from your observed data with the same metric calculated from your simulated data (being generated wi
|
14,783
|
What are Bayesian p-values?
|
Bayesian p-values are normally used when one would like to check how a model fits the data. That is, given a model $M$ we wish to examine how well it fits the observed data $x_{obs}$ based on a statistic $T$, which measures the goodness of fit of data and model. For this, suppose we have a model $M$ with probability density function $f(x|\theta)$ and with prior $g(\theta)$. Then, one can define the prior predictive p-value or tail area under the predictive distribution through the expression
$$ p = P(T(x)\geq T(x_{obs})|M) = \int_{T(x)\geq T(x_{obs})}h(x)dx, $$
where
$$h(x) = \int f(x|\theta)g(\theta)\,d\theta$$
is the prior predictive density.
Notice that this approach may be influenced by the choice of the prior (for an example, see pg.180 of [1]). For this reason, the posterior predictive p-value was introduced. Now, consider that the prior depends on the observed data $g(\theta|x_{obs})$, thus,
$$ h(x|x_{obs}) = \int f(x|\theta)g(\theta|x_{obs})d\theta. $$
However, this approach presents two disadvantages. First, we're considering a double use of the data (for the definition of $h(x)$ and $p$). Second, for larger sample sizes, the posterior distribution of $\theta$ concentrates at the Maximum Likelihood Estimate of $\theta$ (the frequentist or classical approach).
To overcome this, the conditional predictive distribution was introduced. Consider a statistic $U$ that does not involve the statistic $T$. Then, the conditional predictive p-value is
$$ p_{c} = P^{h(\cdot|u_{obs})}(T(x)\geq T(x_{obs})|M) = \int_{T(x)\geq T(x_{obs})} h(t|u_{obs}) dt, $$
where $h(t|u_{obs})$ is the conditional predictive density of $T$ given $U$ and $u_{obs}$ is $U(x_{obs})$.
Additionally, one could consider the partial posterior predictive p-value with the advantage of not requiring a choice for the statistic $U$, see pg.184 of [1] for more details.
[1] Ghosh, Jayanta; Delampady, Mohan; Samanta, Tapas. An Introduction to Bayesian Analysis:Theory and Methods. Springer, 2006.
|
What are Bayesian p-values?
|
Bayesian p-values are normally used when one would like to check how a model fits the data. That is, given a model $M$ we wish to examine how well it fits the observed data $x_{obs}$ based on a statis
|
What are Bayesian p-values?
Bayesian p-values are normally used when one would like to check how a model fits the data. That is, given a model $M$ we wish to examine how well it fits the observed data $x_{obs}$ based on a statistic $T$, which measures the goodness of fit of data and model. For this, suppose we have a model $M$ with probability density function $f(x|\theta)$ and with prior $g(\theta)$. Then, one can define the prior predictive p-value or tail area under the predictive distribution through the expression
$$ p = P(T(x)\geq T(x_{obs})|M) = \int_{T(x)\geq T(x_{obs})}h(x)dx, $$
where
$$h(x) = \int f(x|\theta)g(\theta)\,d\theta$$
is the prior predictive density.
Notice that this approach may be influenced by the choice of the prior (for an example, see pg.180 of [1]). For this reason, the posterior predictive p-value was introduced. Now, consider that the prior depends on the observed data $g(\theta|x_{obs})$, thus,
$$ h(x|x_{obs}) = \int f(x|\theta)g(\theta|x_{obs})d\theta. $$
However, this approach presents two disadvantages. First, we're considering a double use of the data (for the definition of $h(x)$ and $p$). Second, for larger sample sizes, the posterior distribution of $\theta$ concentrates at the Maximum Likelihood Estimate of $\theta$ (the frequentist or classical approach).
To overcome this, the conditional predictive distribution was introduced. Consider a statistic $U$ that does not involve the statistic $T$. Then, the conditional predictive p-value is
$$ p_{c} = P^{h(\cdot|u_{obs})}(T(x)\geq T(x_{obs})|M) = \int_{T(x)\geq T(x_{obs})} h(t|u_{obs}) dt, $$
where $h(t|u_{obs})$ is the conditional predictive density of $T$ given $U$ and $u_{obs}$ is $U(x_{obs})$.
Additionally, one could consider the partial posterior predictive p-value with the advantage of not requiring a choice for the statistic $U$, see pg.184 of [1] for more details.
[1] Ghosh, Jayanta; Delampady, Mohan; Samanta, Tapas. An Introduction to Bayesian Analysis:Theory and Methods. Springer, 2006.
|
What are Bayesian p-values?
Bayesian p-values are normally used when one would like to check how a model fits the data. That is, given a model $M$ we wish to examine how well it fits the observed data $x_{obs}$ based on a statis
|
14,784
|
What is the long run variance?
|
It is a measure of the standard error of the sample mean when there is serial dependence.
Suppose $Y_t$ is covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ (for $j \neq 0$, this quantity would be zero in an iid setting!) such that $\sum_{j=0}^\infty|\gamma_j|<\infty$. Then
$$\lim_{T\to\infty}\{Var[\sqrt{T}(\bar{Y}_T- \mu)]\}=\lim_{T\to\infty}\{TE(\bar{Y}_T- \mu)^2\}=\sum_{j=-\infty}^\infty\gamma_j=\gamma_0+2\sum_{j=1}^\infty\gamma_j,$$
where the first equality is definitional, the second a bit more tricky to establish and the third a consequence of stationarity, which implies that $\gamma_j=\gamma_{-j}$.
So the problem is indeed lack of independence. To see this more clearly, write the variance of the sample mean as
\begin{align*}
E(\bar{Y}_T- \mu)^2&=E\left[(1/T)\sum_{t=1}^T(Y_t- \mu)\right]^2\\
&=1/T^2E[\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}\\
&\quad\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}]\\
&=1/T^2\{[\gamma_0+\gamma_1+\ldots+\gamma_{T-1}]+[\gamma_1+\gamma_0+\gamma_1+\ldots+\gamma_{T-2}]\\
&\quad+\ldots+[\gamma_{T-1}+\gamma_{T-2}+\ldots+\gamma_1+\gamma_0]\}
\end{align*}
A problem with estimating the long-run variance is that we of course do not observe all autocovariances with finite data. Kernel estimators (in econometrics, "Newey-West" or HAC estimators) are used to this end,
$$
\hat{J_T}\equiv\hat{\gamma}_0+2\sum_{j=1}^{T-1}k\left(\frac{j}{\ell_T}\right)\hat{\gamma}_j
$$
$k$ is a kernel or weighting function, the $\hat\gamma_j$ are sample autocovariances. $k$, among other things, must be symmetric and have $k(0)=1$. $\ell_T$ is a bandwidth parameter.
A popular kernel is the Bartlett kernel
$$k\left(\frac{j}{\ell_T}\right) = \begin{cases}
\bigl(1 - \frac{j}{\ell_T}\bigr)
\qquad &\mbox{for} \qquad 0 \leqslant j \leqslant \ell_T-1 \\
0 &\mbox{for} \qquad j > \ell_T-1
\end{cases}
$$
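A minimal sketch of the estimator above (variable names are mine): sample autocovariances weighted by the Bartlett kernel with bandwidth $\ell_T$, so that weights decline linearly and vanish for lags $j \ge \ell_T$.

```python
def newey_west_lrv(y, bandwidth):
    """Bartlett-kernel (Newey-West) estimate of the long-run variance."""
    n = len(y)
    ybar = sum(y) / n
    def acov(j):  # sample autocovariance at lag j (normalized by n)
        return sum((y[t] - ybar) * (y[t - j] - ybar) for t in range(j, n)) / n
    lrv = acov(0)
    for j in range(1, bandwidth):  # Bartlett weights vanish for j >= bandwidth
        lrv += 2.0 * (1.0 - j / bandwidth) * acov(j)
    return lrv

# With bandwidth 1 only gamma_0 survives, recovering the usual variance estimate
print(newey_west_lrv([1, 2, 3, 4], 1))  # 1.25
```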
Good textbook references are Hamilton, Time Series Analysis or Fuller. A seminal (but technical) journal article is Newey and West, Econometrica 1987.
|
What is the long run variance?
|
It is a measure of the standard error of the sample mean when there is serial dependence.
If $Y_t$ is covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ (in an iid setting, this q
|
What is the long run variance?
It is a measure of the standard error of the sample mean when there is serial dependence.
Suppose $Y_t$ is covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ (for $j \neq 0$, this quantity would be zero in an iid setting!) such that $\sum_{j=0}^\infty|\gamma_j|<\infty$. Then
$$\lim_{T\to\infty}\{Var[\sqrt{T}(\bar{Y}_T- \mu)]\}=\lim_{T\to\infty}\{TE(\bar{Y}_T- \mu)^2\}=\sum_{j=-\infty}^\infty\gamma_j=\gamma_0+2\sum_{j=1}^\infty\gamma_j,$$
where the first equality is definitional, the second a bit more tricky to establish and the third a consequence of stationarity, which implies that $\gamma_j=\gamma_{-j}$.
So the problem is indeed lack of independence. To see this more clearly, write the variance of the sample mean as
\begin{align*}
E(\bar{Y}_T- \mu)^2&=E\left[(1/T)\sum_{t=1}^T(Y_t- \mu)\right]^2\\
&=1/T^2E[\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}\\
&\quad\{(Y_1- \mu)+(Y_2- \mu)+\ldots+(Y_T- \mu)\}]\\
&=1/T^2\{[\gamma_0+\gamma_1+\ldots+\gamma_{T-1}]+[\gamma_1+\gamma_0+\gamma_1+\ldots+\gamma_{T-2}]\\
&\quad+\ldots+[\gamma_{T-1}+\gamma_{T-2}+\ldots+\gamma_1+\gamma_0]\}
\end{align*}
A problem with estimating the long-run variance is that we of course do not observe all autocovariances with finite data. Kernel estimators (in econometrics, "Newey-West" or HAC estimators) are used to this end,
$$
\hat{J_T}\equiv\hat{\gamma}_0+2\sum_{j=1}^{T-1}k\left(\frac{j}{\ell_T}\right)\hat{\gamma}_j
$$
$k$ is a kernel or weighting function, the $\hat\gamma_j$ are sample autocovariances. $k$, among other things, must be symmetric and have $k(0)=1$. $\ell_T$ is a bandwidth parameter.
A popular kernel is the Bartlett kernel
$$k\left(\frac{j}{\ell_T}\right) = \begin{cases}
\bigl(1 - \frac{j}{\ell_T}\bigr)
\qquad &\mbox{for} \qquad 0 \leqslant j \leqslant \ell_T-1 \\
0 &\mbox{for} \qquad j > \ell_T-1
\end{cases}
$$
Good textbook references are Hamilton, Time Series Analysis or Fuller. A seminal (but technical) journal article is Newey and West, Econometrica 1987.
|
What is the long run variance?
It is a measure of the standard error of the sample mean when there is serial dependence.
If $Y_t$ is covariance stationary with $E(Y_t)=\mu$ and $Cov(Y_t,Y_{t-j})=\gamma_j$ (in an iid setting, this q
|
14,785
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
|
Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration).
Further, I'm going to limit myself initially to continuous unimodal cases, and indeed mostly to 'nice' situations (though I might come back later and discuss some other cases).
The relative variance depends on sample size. It's common to discuss the ratio of ($n$ times) the asymptotic variances, but we should keep in mind that at smaller sample sizes the situation will be somewhat different. (The median sometimes does noticeably better or worse than its asymptotic behaviour would suggest. For example, at the normal with $n=3$ it has an efficiency of about 74% rather than 63%. The asymptotic behavior is generally a good guide at quite moderate sample sizes, though.)
The asymptotics are fairly easy to deal with:
Mean: $n\times$ variance = $\sigma^2$.
Median: $n\times$ variance = $\frac{1}{[4f(m)^2]}$ where $f(m)$ is the height of the density at the median.
So if $f(m)>\frac{1}{2\sigma}$, the median will be asymptotically more efficient.
[In the normal case, $f(m)=\frac{1}{\sqrt{2\pi}\sigma}$, so $\frac{1}{[4f(m)^2]}=\frac{\pi\sigma^2}{2}$, whence the asymptotic relative efficiency of $2/\pi$.]
We can see that the variance of the median will depend on the behaviour of the density very near the center, while the variance of the mean depends on the variance of the original distribution (which in some sense is affected by the density everywhere, and in particular, more by the way it behaves further away from the center).
Which is to say, while the median is less affected by outliers than the mean, and we often see that it has lower variance than the mean when the distribution is heavy tailed (which does produce more outliers), what really drives the performance of the median is inliers. It often happens that (for a fixed variance) there's a tendency for the two to go together.
That is, broadly speaking, as the tail gets heavier, there's a tendency for (at a fixed value of $\sigma^2$) the distribution to get "peakier" at the same time (more kurtotic, in Pearson's original, if loose, sense). This is not, however, a certain thing - it tends to be the case across a broad range of commonly considered densities, but it doesn't always hold. When it does hold, the variance of the median will reduce (because the distribution has more probability in the immediate neighborhood of the median), while the variance of the mean is held constant (because we fixed $\sigma^2$).
So across a variety of common cases the median will often tend to do "better" than the mean when the tail is heavy, (but we must keep in mind that it's relatively easy to construct counterexamples). So we can consider a few cases, which can show us what we often see, but we shouldn't read too much into them, because heavier tail doesn't universally go with higher peak.
We know the median is about 63.7% as efficient (for $n$ large) as the mean at the normal.
What about, say a logistic distribution, which like the normal is approximately parabolic about the center, but has heavier tails (as $x$ becomes large, they become exponential).
If we take the scale parameter to be 1, the logistic has variance $\pi^2/3$ and height at the median of 1/4, so $\frac{1}{4f(m)^2}=4$. The ratio of variances is then $\pi^2/12\approx 0.82$ so in large samples, the median is roughly 82% as efficient as the mean.
Let's consider two other densities with exponential-like tails, but different peakedness.
First, the hyperbolic secant ($\text{sech}$) distribution, for which the standard form has variance 1 and height at the center of $\frac{1}{2}$, so the ratio of asymptotic variances is 1 (the two are equally efficient in large samples). However, in small samples the mean is more efficient (its variance is about 95% of that for the median when $n=5$, for example).
Here we can see how, as we progress through those three densities (holding variance constant), the height at the median increases:
Can we make it go still higher? Indeed we can. Consider, for example, the double exponential. The standard form has variance 2, and the height at the median is $\frac{1}{2}$ (so if we scale to unit variance as in the diagram, the peak is at $\frac{1}{\sqrt{2}}$, just above 0.7). The asymptotic variance of the median is half that of the mean.
If we make the distribution peakier still for a given variance, (perhaps by making the tail heavier than exponential), the median can be far more efficient (relatively speaking) still. There's really no limit to how high that peak can go.
If we had instead used examples from say the t-distributions, broadly similar effects would be seen, but the progression would be different; the crossover point is a little below $\nu=5$ df (actually around 4.68) -- for smaller df the median is more efficient (asymptotically), for large df the mean is.
...
At finite sample sizes, it's sometimes possible to compute the variance of the distribution of the median explicitly. Where that's not feasible - or even just inconvenient - we can use simulation to compute the variance of the median (or the ratio of the variance*) across random samples drawn from the distribution (which is what I did to get the small sample figures above).
* Even though we often don't actually need the variance of the mean, since we can compute it if we know the variance of the distribution, it may be more computationally efficient to do so, since it acts like a control variate (the mean and median are often quite correlated).
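As a rough illustration of the simulation approach just described (my own sketch; the sample size, replication count, and seed are arbitrary), compare the Monte Carlo variances of the sample mean and sample median under a normal and a Laplace parent:

```python
import random
import statistics

def mc_variances(draw, n=51, reps=4000, seed=7):
    """Monte Carlo variance of the sample mean and sample median."""
    rng = random.Random(seed)
    means, medians = [], []
    for _ in range(reps):
        x = [draw(rng) for _ in range(n)]
        means.append(statistics.fmean(x))
        medians.append(statistics.median(x))
    return statistics.pvariance(means), statistics.pvariance(medians)

# Normal parent: the mean should be more efficient
vmean_n, vmed_n = mc_variances(lambda r: r.gauss(0.0, 1.0))

# Laplace parent (difference of two unit exponentials): the median should win
vmean_l, vmed_l = mc_variances(lambda r: r.expovariate(1.0) - r.expovariate(1.0))

print(vmean_n < vmed_n, vmean_l > vmed_l)  # True True
```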
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
|
Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration).
Further, I'm going to limit myse
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration).
Further, I'm going to limit myself initially to continuous unimodal cases, and indeed mostly to 'nice' situations (though I might come back later and discuss some other cases).
The relative variance depends on sample size. It's common to discuss the ratio of ($n$ times) the asymptotic variances, but we should keep in mind that at smaller sample sizes the situation will be somewhat different. (The median sometimes does noticeably better or worse than its asymptotic behaviour would suggest. For example, at the normal with $n=3$ it has an efficiency of about 74% rather than 63%. The asymptotic behavior is generally a good guide at quite moderate sample sizes, though.)
The asymptotics are fairly easy to deal with:
Mean: $n\times$ variance = $\sigma^2$.
Median: $n\times$ variance = $\frac{1}{[4f(m)^2]}$ where $f(m)$ is the height of the density at the median.
So if $f(m)>\frac{1}{2\sigma}$, the median will be asymptotically more efficient.
[In the normal case, $f(m)=\frac{1}{\sqrt{2\pi}\sigma}$, so $\frac{1}{[4f(m)^2]}=\frac{\pi\sigma^2}{2}$, whence the asymptotic relative efficiency of $2/\pi$.]
We can see that the variance of the median will depend on the behaviour of the density very near the center, while the variance of the mean depends on the variance of the original distribution (which in some sense is affected by the density everywhere, and in particular, more by the way it behaves further away from the center).
Which is to say, while the median is less affected by outliers than the mean, and we often see that it has lower variance than the mean when the distribution is heavy tailed (which does produce more outliers), what really drives the performance of the median is inliers. It often happens that (for a fixed variance) there's a tendency for the two to go together.
That is, broadly speaking, as the tail gets heavier, there's a tendency for (at a fixed value of $\sigma^2$) the distribution to get "peakier" at the same time (more kurtotic, in Pearson's original, if loose, sense). This is not, however, a certain thing - it tends to be the case across a broad range of commonly considered densities, but it doesn't always hold. When it does hold, the variance of the median will reduce (because the distribution has more probability in the immediate neighborhood of the median), while the variance of the mean is held constant (because we fixed $\sigma^2$).
So across a variety of common cases the median will often tend to do "better" than the mean when the tail is heavy, (but we must keep in mind that it's relatively easy to construct counterexamples). So we can consider a few cases, which can show us what we often see, but we shouldn't read too much into them, because heavier tail doesn't universally go with higher peak.
We know the median is about 63.7% as efficient (for $n$ large) as the mean at the normal.
What about, say a logistic distribution, which like the normal is approximately parabolic about the center, but has heavier tails (as $x$ becomes large, they become exponential).
If we take the scale parameter to be 1, the logistic has variance $\pi^2/3$ and height at the median of 1/4, so $\frac{1}{4f(m)^2}=4$. The ratio of variances is then $\pi^2/12\approx 0.82$ so in large samples, the median is roughly 82% as efficient as the mean.
Let's consider two other densities with exponential-like tails, but different peakedness.
First, the hyperbolic secant ($\text{sech}$) distribution, for which the standard form has variance 1 and height at the center of $\frac{1}{2}$, so the ratio of asymptotic variances is 1 (the two are equally efficient in large samples). However, in small samples the mean is more efficient (its variance is about 95% of that for the median when $n=5$, for example).
Here we can see how, as we progress through those three densities (holding variance constant), the height at the median increases:
Can we make it go still higher? Indeed we can. Consider, for example, the double exponential. The standard form has variance 2, and the height at the median is $\frac{1}{2}$ (so if we scale to unit variance as in the diagram, the peak is at $\frac{1}{\sqrt{2}}$, just above 0.7). The asymptotic variance of the median is half that of the mean.
If we make the distribution peakier still for a given variance, (perhaps by making the tail heavier than exponential), the median can be far more efficient (relatively speaking) still. There's really no limit to how high that peak can go.
If we had instead used examples from say the t-distributions, broadly similar effects would be seen, but the progression would be different; the crossover point is a little below $\nu=5$ df (actually around 4.68) -- for smaller df the median is more efficient (asymptotically), for large df the mean is.
...
At finite sample sizes, it's sometimes possible to compute the variance of the distribution of the median explicitly. Where that's not feasible - or even just inconvenient - we can use simulation to compute the variance of the median (or the ratio of the variance*) across random samples drawn from the distribution (which is what I did to get the small sample figures above).
* Even though we often don't actually need the variance of the mean, since we can compute it if we know the variance of the distribution, it may be more computationally efficient to do so, since it acts like a control variate (the mean and median are often quite correlated).
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
Let's assume we restrict consideration to symmetric distributions where the mean and variance are finite (so the Cauchy, for example, is excluded from consideration).
Further, I'm going to limit myse
|
14,786
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
|
The median will generally be better than the mean if there are heavy tails, while the mean will be best with light tails. An interesting concrete example is the double exponential (or Laplace) distribution https://en.wikipedia.org/wiki/Laplace_distribution with density function
$$
f(x) = \frac12 e^{-|x-\mu|} , \quad -\infty < x < \infty
$$
which has expectation $\mu$ and variance 2. Let $X_1, X_2, \dotsc , X_n$ be an iid sample. Then for large samples the arithmetic mean will have a normal distribution (approximately) with variance (exact) $2/n$, while the median will have an asymptotic normal distribution with variance
$ \frac1{4 n f(\mu)^2} = \frac1{4 n / 4} = 1/n < 2/n$, so the difference is rather large.
For the normal distribution (with $\sigma^2 = 1$) we get the opposite comparison, the arithmetic mean has variance (exact) $1/n$ while the median has variance (approximately, large $n$) $\frac1{4 n (1/\sqrt{2\pi})^2} = \frac{\pi}{2 n} \approx 1.57/n > 1/n$
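The arithmetic in this answer can be checked directly (the sample size below is an arbitrary stand-in for $n$):

```python
import math

n = 1000  # hypothetical sample size

# Laplace with scale 1: density at the median is f(mu) = 1/2, Var(X) = 2
laplace_median_avar = 1 / (4 * n * 0.5 ** 2)  # = 1/n
laplace_mean_avar = 2 / n

# Standard normal: f(mu) = 1/sqrt(2*pi), Var(X) = 1
normal_median_avar = 1 / (4 * n * (1 / math.sqrt(2 * math.pi)) ** 2)  # = pi/(2n)
normal_mean_avar = 1 / n

print(laplace_median_avar < laplace_mean_avar)  # True: median wins for Laplace
print(normal_median_avar > normal_mean_avar)    # True: mean wins for the normal
```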
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
|
The median will generally be better than the mean if there are heavy tails, while the mean will be best with light tails. An interesting concrete example is the double exponential (or Laplace) distri
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
The median will generally be better than the mean if there are heavy tails, while the mean will be best with light tails. An interesting concrete example is the double exponential (or Laplace) distribution https://en.wikipedia.org/wiki/Laplace_distribution with density function
$$
f(x) = \frac12 e^{-|x-\mu|} , \quad -\infty < x < \infty
$$
which has expectation $\mu$ and variance 2. Let $X_1, X_2, \dotsc , X_n$ be an iid sample. Then for large samples the arithmetic mean will have a normal distribution (approximately) with variance (exact) $2/n$, while the median will have an asymptotic normal distribution with variance
$ \frac1{4 n f(\mu)^2} = \frac1{4 n / 4} = 1/n < 2/n$, so the difference is rather large.
For the normal distribution (with $\sigma^2 = 1$) we get the opposite comparison, the arithmetic mean has variance (exact) $1/n$ while the median has variance (approximately, large $n$) $\frac1{4 n (1/\sqrt{2\pi})^2} = \frac{\pi}{2 n} \approx 1.57/n > 1/n$
|
For what (symmetric) distributions is sample mean a more efficient estimator than sample median?
The median will generally be better than the mean if there are heavy tails, while the mean will be best with light tails. An interesting concrete example is the double exponential (or Laplace) distri
|
14,787
|
Asymptotic consistency with non-zero asymptotic variance - what does it represent?
|
I won't give a very satisfactory answer to your question because it seems to me to be a little bit too open, but let me try to shed some light on why this question is a hard one.
I think you are struggling with the fact that the conventional topologies we use on probability distributions and random variables are bad. I've written a bigger piece about this on my blog but let me try to summarize: you can converge in the weak (and the total-variation) sense while violating commonsensical assumptions about what convergence means.
For example, you can converge in weak topology towards a constant while having variance = 1 (which is exactly what your $Z_n$ sequence is doing). There is then a limit distribution (in the weak topology) that is this monstrous random variable which is most of the time equal to 0 but infinitesimally rarely equal to infinity.
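A quick numerical illustration of such a sequence (this particular construction, $Z_n = \sqrt{n}$ with probability $1/n$ and $0$ otherwise, is my own stand-in for the $Z_n$ in the question): $P(Z_n \neq 0) \to 0$ while $E[Z_n^2] = 1$ for every $n$.

```python
import random

rng = random.Random(0)

def z_draw(n):
    # Z_n = sqrt(n) with probability 1/n, else 0, so that
    # E[Z_n^2] = 1 for every n while P(Z_n != 0) -> 0
    return n ** 0.5 if rng.random() < 1 / n else 0.0

reps = 100000
results = {}
for n in (10, 100000):
    draws = [z_draw(n) for _ in range(reps)]
    p_nonzero = sum(d > 0 for d in draws) / reps
    second_moment = sum(d * d for d in draws) / reps
    results[n] = (p_nonzero, second_moment)
    print(n, p_nonzero, second_moment)
```

For large $n$ the nonzero draws become vanishingly rare, yet the rare huge values keep the second moment from shrinking, which is exactly the "monstrous" limiting behaviour described above.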
I personally take this to mean that the weak topology (and the total-variation topology too) is a poor notion of convergence that should be discarded. Most of the convergences we actually use are stronger than that. However, I don't really know what should we use instead of the weak topology sooo ...
If you really want to find an essential difference between $\hat \theta= \bar X+Z_n$ and $\tilde \theta=\bar X$, here is my take: both estimators are equivalent for the [0,1]-loss (when the size of your mistake doesn't matter). However, $\tilde \theta $ is much better if the size of your mistakes matter, because $\hat \theta$ sometimes fails catastrophically.
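This contrast can be made concrete with a small simulation (the constants `a`, `n`, and `reps` are arbitrary illustrative choices): both estimators are usually equally close to $\mu$, but the rare explosions of $\hat\theta$ keep its mean squared error near $1/n + 2a^2$ instead of $1/n$.

```python
import numpy as np

# Sketch of the contrast above: theta_hat = Xbar + Z_n vs theta_tilde = Xbar.
# Z_n = +/- a*n with probability 1/n^2 each, else 0. (a, n, reps arbitrary.)
rng = np.random.default_rng(1)
a, n, reps = 1.0, 50, 200_000
mu = 0.0

xbar = mu + rng.normal(0.0, 1.0, reps) / np.sqrt(n)   # exact law of Xbar: N(mu, 1/n)
u = rng.random(reps)
z = np.where(u < 1 / n**2, a * n, np.where(u < 2 / n**2, -a * n, 0.0))

mse_tilde = np.mean((xbar - mu) ** 2)             # ~ 1/n
mse_hat = np.mean((xbar + z - mu) ** 2)           # ~ 1/n + 2 a^2: bounded away from 0
close_hat = np.mean(np.abs(xbar + z - mu) < 0.5)  # yet theta_hat is usually fine
```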
|
14,788
|
Asymptotic consistency with non-zero asymptotic variance - what does it represent?
|
27-10-2014: Unfortunately (for me that is), no-one has as yet contributed an answer here -perhaps because it looks like a weird, "pathological" theoretical issue and nothing more?
Well, to quote a comment from user Cardinal (which I will subsequently explore):
"Here is an admittedly absurd, but simple example. The idea is to
illustrate exactly what can go wrong and why. It does have practical
applications (my emphasis). Example: Consider the typical i.i.d. model with finite
second moment. Let $\hat\theta_n=\bar X_n+Z_n$ where $Z_n$ is independent of
$\bar X_n$ and $Z_n=\pm an$ each with probability $1/n^2$ and is zero
otherwise, with $a>0$ arbitrary. Then $\hat\theta_n$ is unbiased, has
variance bounded below by $a^2$, and $\hat\theta_n\to\mu$ almost surely
(it's strongly consistent). I leave as an exercise the case regarding
the bias".
The maverick random variable here is $Z_n$, so let's see what we can say about it.
The variable has support $\{-an,0,an\}$ with corresponding probabilities $\{1/n^2,1-2/n^2,1/n^2\}$. It is symmetric around zero, so we have
$$E(Z_n) = 0,\;\; \text{Var}(Z_n) = \frac {(-an)^2}{n^2} + 0 + \frac {(an)^2}{n^2} = 2a^2$$
These moments do not depend on $n$ so I guess we are allowed to trivially write
$$\lim_{n\rightarrow \infty} E(Z_n) = 0,\;\;\lim_{n\rightarrow \infty}\text{Var}(Z_n) = 2a^2$$
In Poor Man's Asymptotics, we know of a condition for the limits of moments to equal the moments of the limiting distribution. If the $r$-th moment of the finite case distribution converges to a constant (as is our case), then, if moreover,
$$\exists \delta >0 :\lim \sup E(|Z_n|^{r+\delta}) < \infty $$
the limit of the $r$-th moment will be the $r$-th moment of the limiting distribution. In our case
$$E(|Z_n|^{r+\delta}) = \frac {|-an|^{r+\delta}}{n^2} + 0 + \frac {|an|^{r+\delta}}{n^2} = 2a^{r+\delta}\cdot n^{r+\delta-2}$$
For $r\geq2$ this diverges for any $\delta >0$, so this sufficient condition does not hold for the variance (it does hold for the mean).
Take the other way: What is the asymptotic distribution of $Z_n$? Does the CDF of $Z_n$ converge to a non-degenerate CDF at the limit?
It doesn't look like it does: the limiting support will be $\{-\infty, 0, \infty\}$ (if we are permitted to write this), and the corresponding probabilities $\{0,1,0\}$. Looks like a constant to me.
But if we don't have a limiting distribution in the first place, how can we talk about its moments?
Then, going back to the estimator $\hat \theta_n$, since $\bar X_n$ also converges to a constant, it appears that
$\hat \theta_n$ does not have a (non-trivial) limiting distribution, but it does
have a variance at the limit. Or, maybe this variance is infinite? But an infinite variance with a constant distribution?
How can we understand this? What does it tell us about the estimator? What is the essential difference, at the limit, between $\hat \theta_n = \bar X_n + Z_n$ and $\tilde \theta_n = \bar X_n$?
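The moment computations above can be verified exactly with rational arithmetic; a small sketch (the function name is illustrative):

```python
from fractions import Fraction

# Exact check: for every n, E(Z_n) = 0 and Var(Z_n) = 2 a^2, even though
# P(Z_n != 0) = 2/n^2 -> 0, i.e. Z_n -> 0 in probability.
def z_moments(n, a):
    p = Fraction(1, n ** 2)                      # P(Z_n = a n) = P(Z_n = -a n)
    support = [(-a * n, p), (0, 1 - 2 * p), (a * n, p)]
    mean = sum(Fraction(v) * q for v, q in support)
    var = sum(Fraction(v) ** 2 * q for v, q in support) - mean ** 2
    return mean, var, 2 * p                      # E(Z_n), Var(Z_n), P(Z_n != 0)
```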
|
14,789
|
Asymptotic consistency with non-zero asymptotic variance - what does it represent?
|
An estimator is consistent in probability but not in MSE if there is an arbitrarily small probability of the estimator "exploding".
While an interesting mathematical curiosity, for any practical purpose this should not bother you.
For any practical purpose, estimators have finite supports and thus cannot explode (the real world is not infinitesimally small, nor large).
If you still wish to call upon a continuous approximation of the "real world", and your approximation is such that it converges in probability and not in MSE, then take it as it is:
Your estimator can be right with arbitrarily large probability, but there will always be an arbitrarily small chance of it exploding. Luckily, when it does explode you will notice; otherwise, you can trust it. :-)
|
14,790
|
Random Forest can't overfit?
|
Random forest can overfit. I am sure of this. What is usually meant is that the model would not overfit if you use more trees.
Try for example to estimate the model $y = \log(x) + \epsilon$ with a random forest. You will get an almost zero training error but a bad prediction error.
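A quick way to see this without any library is to mimic a forest of fully grown trees in one dimension with bagged 1-nearest-neighbour fits (an analogy, not a real random-forest implementation): a fully grown regression tree on 1-D data essentially predicts the training response of the nearest sample, so bagging 1-NN fits over bootstrap samples captures the same behaviour.

```python
import numpy as np

# Sketch of the experiment above: y = log(x) + eps, fit by bagged 1-NN rules
# as a stand-in for a random forest of fully grown trees.
rng = np.random.default_rng(2)
n, n_trees, sigma = 200, 100, 0.3
x = rng.uniform(1, 10, n)
y = np.log(x) + rng.normal(0, sigma, n)
x_test = rng.uniform(1, 10, n)
y_test = np.log(x_test) + rng.normal(0, sigma, n)

def nn_predict(xt, yt, xq):
    # predict each query point by its nearest training point's response
    return yt[np.abs(xt[None, :] - xq[:, None]).argmin(axis=1)]

pred_train = np.zeros(n)
pred_test = np.zeros(n)
for _ in range(n_trees):
    boot = rng.integers(0, n, n)                 # bootstrap resample
    pred_train += nn_predict(x[boot], y[boot], x) / n_trees
    pred_test += nn_predict(x[boot], y[boot], x_test) / n_trees

train_mse = np.mean((pred_train - y) ** 2)       # ensemble has fit the noise
test_mse = np.mean((pred_test - y_test) ** 2)    # noticeably larger
```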
|
14,791
|
Random Forest can't overfit?
|
I will try to give a more thorough answer building on Donbeo's answer and Itachi's comment.
Can Random Forests overfit?
In short, yes, they can.
Why is there a common misconception that Random Forests cannot overfit?
The reason is that, from the outside, the training of Random Forests looks similar to that of other iterative methods such as Gradient Boosted Machines or Neural Networks.
Most of these other iterative methods, however, reduce the model's bias over the iterations, as they make the model more complex (GBM) or more suited to the training data (NN). It is therefore common knowledge that these methods suffer from overtraining, and will overfit the training data if trained for too long since bias reduction involves an increase in variance.
Random Forests, on the other hand, simply average trees over the iterations, reducing the model's variance instead, while leaving the bias unchanged. This means that they do not suffer from overtraining, and indeed adding more trees (therefore training longer) cannot be a source of overfitting. This is where they get their non-overfitting reputation from!
Then how can they overfit?
Random Forests are usually built of high-variance, low-bias fully grown decision trees, and their strength comes from the variance reduction that comes from the averaging of these trees. However, if the predictions of the trees are too close to each other then the variance reduction effect is limited, and they might end up overfitting.
This can happen for example if the dataset is relatively simple, and therefore the fully grown trees perfectly learn its patterns and predict very similarly. Also, having a high value for mtry, the number of features considered at every split, causes the trees to be more correlated, and therefore limits the variance reduction and might cause some overfitting (it is important to know that a high value of mtry can still be very useful in many situations, as it makes the model more robust to noisy features).
Can I fix this overfitting?
Like always, more data helps.
Limiting the depth of the trees has also been shown to help in this situation, as has reducing the number of features considered at each split to make the trees as uncorrelated as possible.
For reference, I really suggest reading the relevant chapter of Elements of Statistical Learning, which I think gives a very detailed analysis and dives deeper into the math behind it.
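The variance-reduction argument can be condensed into one standard formula (it appears in Elements of Statistical Learning): for $B$ identically distributed trees with variance $\sigma^2$ and pairwise correlation $\rho$, the variance of their average is $\rho\sigma^2 + (1-\rho)\sigma^2/B$. A minimal sketch:

```python
# Variance of an average of B identically distributed, pairwise-correlated
# predictors (variance sigma2, correlation rho). As B grows the second term
# vanishes, but the floor rho*sigma2 remains: adding trees cannot remove the
# variance due to tree correlation; decorrelating them (e.g. lower mtry) can.
def ensemble_variance(sigma2, rho, B):
    return rho * sigma2 + (1 - rho) * sigma2 / B
```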
|
14,792
|
Random Forest can't overfit?
|
Hastie et al. address this question very briefly in Elements of Statistical Learning (page 596).
Another claim is that random forests βcannot overfitβ the data. It is certainly true that increasing $\mathcal{B}$ [the number of trees in the ensemble] does not cause the random forest sequence to overfit... However, this limit can overfit the data; the average of fully grown trees can result in too rich a model, and incur unnecessary variance. Segal (2004) demonstrates small gains in performance by controlling the depths of the individual trees grown in random forests. Our experience is that using full-grown trees seldom costs much, and results in one less tuning parameter.
|
14,793
|
Statistical test for two distributions where only 5-number summary is known?
|
Under the null hypothesis that the distributions are the same and both samples are obtained randomly and independently from the common distribution, we can work out the sizes of all $5\times 5$ (deterministic) tests that can be made by comparing one letter value to another. Some of these tests appear to have reasonable power to detect differences in distributions.
Analysis
The original definition of the $5$-letter summary of any ordered batch of numbers $x_1 \le x_2 \le \cdots \le x_n$ is the following [Tukey EDA 1977]:
For any number $m = (i + (i+1))/2$ in $\{(1+2)/2, (2+3)/2, \ldots, (n-1+n)/2\}$ define $x_m = (x_i + x_{i+1})/2.$
Let $\bar{i} = n+1-i$.
Let $m = (n+1)/2$ and $h = (\lfloor m \rfloor + 1)/2.$
The $5$-letter summary is the set $\{X^{-} = x_1, H^{-}=x_h, M=x_m, H^{+}=x_\bar{h}, X^{+}=x_n\}.$ Its elements are known as the minimum, lower hinge, median, upper hinge, and maximum, respectively.
For example, in the batch of data $(-3, 1, 1, 2, 3, 5, 5, 5, 5, 7, 13, 21)$ we may compute that $n=12$, $m=13/2$, and $h=7/2$, whence
$$\eqalign{
&X^{-} &= -3, \\
&H^{-} &= x_{7/2} = (x_3+x_4)/2 = (1+2)/2 = 3/2, \\
&M &= x_{13/2} = (x_6+x_7)/2 = (5+5)/2 = 5, \\
&H^{+} &= x_\overline{7/2} = x_{19/2} = (x_9+x_{10})/2 = (5+7)/2 = 6, \\
&X^{+} &= x_{12} = 21.
}$$
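The definition translates directly into code; a sketch (function names are illustrative) using the same 1-based, possibly half-integer positions as above:

```python
# Tukey's 5-letter summary. letter() reads a 1-based position into the sorted
# batch; a half-integer position averages the two neighbouring order statistics.
def letter(xs, pos):
    i = int(pos)
    return xs[i - 1] if pos == i else (xs[i - 1] + xs[i]) / 2

def five_letter(batch):
    xs = sorted(batch)
    n = len(xs)
    m = (n + 1) / 2                 # median position
    h = (int(m) + 1) / 2            # hinge position
    return (xs[0], letter(xs, h), letter(xs, m), letter(xs, n + 1 - h), xs[-1])
```

On the twelve-value batch of the example this returns $(-3, 3/2, 5, 6, 21)$, matching the computation above.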
The hinges are close to (but usually not exactly the same as) the quartiles. If quartiles are used, note that in general they will be weighted arithmetic means of two of the order statistics and thereby will lie within one of the intervals $[x_i, x_{i+1}]$ where $i$ can be determined from $n$ and the algorithm used to compute the quartiles. In general, when $q$ is in an interval $[i, i+1]$ I will loosely write $x_q$ to refer to some such weighted mean of $x_i$ and $x_{i+1}$.
With two batches of data $(x_i, i=1,\ldots, n)$ and $(y_j, j=1,\ldots,m),$ there are two separate five-letter summaries. We can test the null hypothesis that both are iid random samples of a common distribution $F$ by comparing one of the $x$-letters $x_q$ to one of the $y$-letters $y_r$. For instance, we might compare the upper hinge of $x$ to the lower hinge of $y$ in order to see whether $x$ is significantly less than $y$. This leads to a definite question: how to compute this chance,
$${\Pr}_F(x_q \lt y_r).$$
For fractional $q$ and $r$ this is not possible without knowing $F$. However, because $x_q \le x_{\lceil q \rceil} $ and $y_{\lfloor r \rfloor} \le y_r,$ then a fortiori
$${\Pr}_F(x_q \lt y_r) \le {\Pr}_F(x_{\lceil q \rceil} \lt y_{\lfloor r \rfloor}).$$
We can thereby obtain universal (independent of $F$) upper bounds on the desired probabilities by computing the right hand probability, which compares individual order statistics. The general question in front of us is
What is the chance that the $q^\text{th}$ highest of $n$ values will be less than the $r^\text{th}$ highest of $m$ values drawn iid from a common distribution?
Even this does not have a universal answer unless we rule out the possibility that probability is too heavily concentrated on individual values: in other words, we need to assume that ties are not possible. This means $F$ must be a continuous distribution. Although this is an assumption, it is a weak one and it is non-parametric.
Solution
The distribution $F$ plays no role in the calculation, because upon re-expressing all values by means of the probability transform $F$, we obtain new batches
$$X^{(F)} = F(x_1) \le F(x_2) \le \cdots \le F(x_n)$$
and
$$Y^{(F)} = F(y_1) \le F(y_2) \le \cdots \le F(y_m).$$
Moreover, this re-expression is monotonic and increasing: it preserves order and in so doing preserves the event $x_q \lt y_r.$ Because $F$ is continuous, these new batches are drawn from a Uniform$[0,1]$ distribution. Under this distribution--and dropping the now superfluous "$F$" from the notation--we easily find that $x_q$ has a Beta$(q, n+1-q)$ = Beta$(q, \bar{q})$ distribution:
$$\Pr(x_q\le x) = \frac{n!}{(n-q)!(q-1)!}\int_0^x t^{q-1}(1-t)^{n-q}dt.$$
Similarly the distribution of $y_r$ is Beta$(r, m+1-r)$. By performing the double integration over the region $x_q \lt y_r$ we can obtain the desired probability,
$$\Pr(x_q \lt y_r) = \frac{\Gamma (m+1) \Gamma (n+1) \Gamma (q+r)\, _3\tilde{F}_2(q,q-n,q+r;\ q+1,m+q+1;\ 1)}{\Gamma (r) \Gamma (n-q+1)}$$
Because all values $n, m, q, r$ are integral, all the $\Gamma$ values are really just factorials: $\Gamma(k) = (k-1)! = (k-1)(k-2)\cdots(2)(1)$ for integral $k\ge 1.$
The little-known function $_3\tilde{F}_2$ is a regularized hypergeometric function. In this case it can be computed as a rather simple alternating sum of length $n-q+1$, normalized by some factorials:
$$\Gamma(q+1)\Gamma(m+q+1)\ {_3\tilde{F}_2}(q,q-n,q+r;\ q+1,m+q+1;\ 1) \\
=\sum_{i=0}^{n-q}(-1)^i \binom{n-q}{i} \frac{q(q+r)\cdots(q+r+i-1)}{(q+i)(1+m+q)(2+m+q)\cdots(i+m+q)} \\
= 1 - \frac{\binom{n-q}{1}q(q+r)}{(1+q)(1+m+q)} + \frac{\binom{n-q}{2}q(q+r)(1+q+r)}{(2+q)(1+m+q)(2+m+q)} - \cdots.$$
This has reduced the calculation of the probability to nothing more complicated than addition, subtraction, multiplication, and division. The computational effort scales as $O((n-q)^2).$ By exploiting the symmetry
$$\Pr(x_q \lt y_r) = 1 - \Pr(y_r \lt x_q)$$
the new calculation scales as $O((m-r)^2),$ allowing us to pick the easier of the two sums if we wish. This will rarely be necessary, though, because $5$-letter summaries tend to be used only for small batches, rarely exceeding $n, m \approx 300.$
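Since every ingredient is a factorial, the whole formula fits in a few lines of exact rational arithmetic; a sketch (the function name is illustrative):

```python
from fractions import Fraction
from math import comb, factorial

# Exact Pr(x_(q) < y_(r)) for the q-th order statistic of n values vs the r-th
# of m values, iid from a common continuous distribution, via the alternating
# sum above. All arithmetic is rational, so the result is exact.
def prob_less(n, m, q, r):
    s = Fraction(0)
    for i in range(n - q + 1):
        num, den = Fraction(q), Fraction(q + i)
        for j in range(i):
            num *= q + r + j          # q (q+r) ... (q+r+i-1)
            den *= m + q + 1 + j      # (1+m+q) (2+m+q) ... (i+m+q)
        s += Fraction((-1) ** i * comb(n - q, i)) * num / den
    # prefactor Gamma(m+1) Gamma(n+1) Gamma(q+r) / (Gamma(r) Gamma(n-q+1)),
    # divided by Gamma(q+1) Gamma(m+q+1) to regularize the 3F2
    pref = Fraction(factorial(m) * factorial(n) * factorial(q + r - 1),
                    factorial(r - 1) * factorial(n - q)
                    * factorial(q) * factorial(m + q))
    return pref * s
```

For the $n=8$, $m=12$ case, `prob_less(8, 12, 1, 1)` gives exactly $8/20 = 0.4$ and `prob_less(8, 12, 3, 1)` gives $\approx 0.0491$.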
Application
Suppose the two batches have sizes $n=8$ and $m=12$. The relevant order statistics for $x$ and $y$ are $1,3,5,7,8$ and $1,3,6,9,12,$ respectively. Here is a table of the chance that $x_q \lt y_r$ with $q$ indexing the rows and $r$ indexing the columns:
q\r 1 3 6 9 12
1 0.4 0.807 0.9762 0.9987 1.
3 0.0491 0.2962 0.7404 0.9601 0.9993
5 0.0036 0.0521 0.325 0.7492 0.9856
7 0.0001 0.0032 0.0542 0.3065 0.8526
8 0. 0.0004 0.0102 0.1022 0.6
A simulation of 10,000 iid sample pairs from a standard Normal distribution gave results close to these.
To construct a one-sided test at size $\alpha,$ such as $\alpha = 5\%,$ to determine whether the $x$ batch is significantly less than the $y$ batch, look for values in this table close to or just under $\alpha$. Good choices are at $(q,r)=(3,1),$ where the chance is $0.0491,$ at $(5,3)$ with a chance of $0.0521$, and at $(7,6)$ with a chance of $0.0542.$ Which one to use depends on your thoughts about the alternative hypothesis. For instance, the $(3,1)$ test compares the lower hinge of $x$ to the smallest value of $y$ and finds a significant difference when that lower hinge is the smaller one. This test is sensitive to an extreme value of $y$; if there is some concern about outlying data, this might be a risky test to choose. On the other hand the test $(7,6)$ compares the upper hinge of $x$ to the median of $y$. This one is very robust to outlying values in the $y$ batch and moderately robust to outliers in $x$. However, it compares middle values of $x$ to middle values of $y$. Although this is probably a good comparison to make, it will not detect differences in the distributions that occur only in either tail.
Being able to compute these critical values analytically helps in selecting a test. Once one (or several) tests are identified, their power to detect changes is probably best evaluated through simulation. The power will depend heavily on how the distributions differ. To get a sense of whether these tests have any power at all, I conducted the $(5,3)$ test with the $y_j$ drawn iid from a Normal$(1,1)$ distribution: that is, its median was shifted by one standard deviation. In a simulation the test was significant $54.4\%$ of the time: that is appreciable power for datasets this small.
Much more can be said, but all of it is routine stuff about conducting two-sided tests, how to assess effect sizes, and so on. The principal point has been demonstrated: given the $5$-letter summaries (and sizes) of two batches of data, it is possible to construct reasonably powerful non-parametric tests to detect differences in their underlying populations, and in many cases we might even have several choices of test to select from. The theory developed here has a broader application to comparing two populations by means of appropriately selected order statistics from their samples (not just those approximating the letter summaries).
These results have other useful applications. For instance, a boxplot is a graphical depiction of a $5$-letter summary. Thus, along with knowledge of the sample size shown by a boxplot, we have available a number of simple tests (based on comparing parts of one box and whisker to another one) to assess the significance of visually apparent differences in those plots.
|
Statistical test for two distributions where only 5-number summary is known?
|
Under the null hypothesis that the distributions are the same and both samples are obtained randomly and independently from the common distribution, we can work out the sizes of all $5\times 5$ (deter
|
Statistical test for two distributions where only 5-number summary is known?
Under the null hypothesis that the distributions are the same and both samples are obtained randomly and independently from the common distribution, we can work out the sizes of all $5\times 5$ (deterministic) tests that can be made by comparing one letter value to another. Some of these tests appear to have reasonable power to detect differences in distributions.
Analysis
The original definition of the $5$-letter summary of any ordered batch of numbers $x_1 \le x_2 \le \cdots \le x_n$ is the following [Tukey EDA 1977]:
For any number $m = (i + (i+1))/2$ in $\{(1+2)/2, (2+3)/2, \ldots, (n-1+n)/2\}$ define $x_m = (x_i + x_{i+1})/2.$
Let $\bar{i} = n+1-i$.
Let $m = (n+1)/2$ and $h = (\lfloor m \rfloor + 1)/2.$
The $5$-letter summary is the set $\{X^{-} = x_1, H^{-}=x_h, M=x_m, H^{+}=x_\bar{h}, X^{+}=x_n\}.$ Its elements are known as the minimum, lower hinge, median, upper hinge, and maximum, respectively.
For example, in the batch of data $(-3, 1, 1, 2, 3, 5, 5, 5, 7, 13, 21)$ we may compute that $n=12$, $m=13/2$, and $h=7/2$, whence
$$\eqalign{
&X^{-} &= -3, \\
&H^{-} &= x_{7/2} = (x_3+x_4)/2 = (1+2)/2 = 3/2, \\
&M &= x_{13/2} = (x_6+x_7)/2 = (5+5)/2 = 5, \\
&H^{+} &= x_\overline{7/2} = x_{19/2} = (x_9+x_{10})/2 = (5+7)/2 = 6, \\
&X^{+} &= x_{12} = 21.
}$$
The hinges are close to (but usually not exactly the same as) the quartiles. If quartiles are used, note that in general they will be weighted arithmetic means of two of the order statistics and thereby will lie within one of the intervals $[x_i, x_{i+1}]$ where $i$ can be determined from $n$ and the algorithm used to compute the quartiles. In general, when $q$ is in an interval $[i, i+1]$ I will loosely write $x_q$ to refer to some such weighted mean of $x_i$ and $x_{i+1}$.
With two batches of data $(x_i, i=1,\ldots, n)$ and $(y_j, j=1,\ldots,m),$ there are two separate five-letter summaries. We can test the null hypothesis that both are iid random samples of a common distribution $F$ by comparing one of the $x$-letters $x_q$ to one of the $y$-letters $y_r$. For instance, we might compare the upper hinge of $x$ to the lower hinge of $y$ in order to see whether $x$ is significantly less than $y$. This leads to a definite question: how to compute this chance,
$${\Pr}_F(x_q \lt y_r).$$
For fractional $q$ and $r$ this is not possible without knowing $F$. However, because $x_q \le x_{\lceil q \rceil} $ and $y_{\lfloor r \rfloor} \le y_r,$ then a fortiori
$${\Pr}_F(x_q \lt y_r) \le {\Pr}_F(x_{\lceil q \rceil} \lt y_{\lfloor r \rfloor}).$$
We can thereby obtain universal (independent of $F$) upper bounds on the desired probabilities by computing the right hand probability, which compares individual order statistics. The general question in front of us is
What is the chance that the $q^\text{th}$ smallest of $n$ values will be less than the $r^\text{th}$ smallest of $m$ values drawn iid from a common distribution?
Even this does not have a universal answer unless we rule out the possibility that probability is too heavily concentrated on individual values: in other words, we need to assume that ties are not possible. This means $F$ must be a continuous distribution. Although this is an assumption, it is a weak one and it is non-parametric.
Solution
The distribution $F$ plays no role in the calculation, because upon re-expressing all values by means of the probability transform $F$, we obtain new batches
$$X^{(F)} = F(x_1) \le F(x_2) \le \cdots \le F(x_n)$$
and
$$Y^{(F)} = F(y_1) \le F(y_2) \le \cdots \le F(y_m).$$
Moreover, this re-expression is monotonic and increasing: it preserves order and in so doing preserves the event $x_q \lt y_r.$ Because $F$ is continuous, these new batches are drawn from a Uniform$[0,1]$ distribution. Under this distribution--and dropping the now superfluous "$F$" from the notation--we easily find that $x_q$ has a Beta$(q, n+1-q)$ = Beta$(q, \bar{q})$ distribution:
$$\Pr(x_q\le x) = \frac{n!}{(n-q)!(q-1)!}\int_0^x t^{q-1}(1-t)^{n-q}dt.$$
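A quick numerical sanity check of this Beta$(q,\,n+1-q)$ claim (a Python sketch using only the standard library; the helper name is mine): since that Beta law has mean $q/(n+1)$, the average of the $q^\text{th}$ smallest of $n$ uniforms should land close to it.

```python
import random

def mean_order_stat(n, q, reps=20000, seed=7):
    """Average the q-th smallest of n iid Uniform(0,1) draws over many replications."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(reps):
        sample = sorted(rng.random() for _ in range(n))
        total += sample[q - 1]
    return total / reps

# Beta(q, n+1-q) has mean q/(n+1); with n=12, q=3 that is 3/13, roughly 0.2308.
print(mean_order_stat(12, 3))
```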
Similarly the distribution of $y_r$ is Beta$(r, m+1-r)$. By performing the double integration over the region $x_q \lt y_r$ we can obtain the desired probability,
$$\Pr(x_q \lt y_r) = \frac{\Gamma (m+1) \Gamma (n+1) \Gamma (q+r)\, _3\tilde{F}_2(q,q-n,q+r;\ q+1,m+q+1;\ 1)}{\Gamma (r) \Gamma (n-q+1)}$$
Because all values $n, m, q, r$ are integral, all the $\Gamma$ values are really just factorials: $\Gamma(k) = (k-1)! = (k-1)(k-2)\cdots(2)(1)$ for integral $k\ge 1.$
The little-known function $_3\tilde{F}_2$ is a regularized hypergeometric function. In this case it can be computed as a rather simple alternating sum of length $n-q+1$, normalized by some factorials:
$$\Gamma(q+1)\Gamma(m+q+1)\ {_3\tilde{F}_2}(q,q-n,q+r;\ q+1,m+q+1;\ 1) \\
=\sum_{i=0}^{n-q}(-1)^i \binom{n-q}{i} \frac{q(q+r)\cdots(q+r+i-1)}{(q+i)(1+m+q)(2+m+q)\cdots(i+m+q)} \\
= 1 - \frac{\binom{n-q}{1}q(q+r)}{(1+q)(1+m+q)} + \frac{\binom{n-q}{2}q(q+r)(1+q+r)}{(2+q)(1+m+q)(2+m+q)} - \cdots.$$
This has reduced the calculation of the probability to nothing more complicated than addition, subtraction, multiplication, and division. The computational effort scales as $O((n-q)^2).$ By exploiting the symmetry
$$\Pr(x_q \lt y_r) = 1 - \Pr(y_r \lt x_q)$$
the new calculation scales as $O((m-r)^2),$ allowing us to pick the easier of the two sums if we wish. This will rarely be necessary, though, because $5$-letter summaries tend to be used only for small batches, rarely exceeding $n, m \approx 300.$
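The alternating sum and prefactor above can be coded directly. This Python sketch (function name mine) uses exact rational arithmetic; for $n=8, m=12, q=r=1$ it recovers $\Pr(x_1 \lt y_1) = 8/20 = 0.4$, which also follows at once because the overall minimum of the $20$ pooled values is equally likely to be any one of them.

```python
from fractions import Fraction
from math import comb, factorial

def prob_less(n, m, q, r):
    """Exact Pr(x_(q) of n < y_(r) of m) for iid samples from a continuous F."""
    # Alternating sum from the text:
    #   sum_i (-1)^i C(n-q, i) * q*(q+r)*...*(q+r+i-1)
    #                          / ((q+i)*(1+m+q)*...*(i+m+q))
    s = Fraction(0)
    for i in range(n - q + 1):
        num, den = Fraction(q), Fraction(q + i)
        for k in range(i):
            num *= q + r + k
            den *= 1 + m + q + k
        s += (-1) ** i * comb(n - q, i) * num / den
    # Prefactor m! n! (q+r-1)! / ((r-1)! (n-q)! q! (m+q)!), obtained by dividing
    # the Gamma-function prefactor by Gamma(q+1) Gamma(m+q+1).
    pre = Fraction(factorial(m) * factorial(n) * factorial(q + r - 1),
                   factorial(r - 1) * factorial(n - q)
                   * factorial(q) * factorial(m + q))
    return pre * s

print(float(prob_less(8, 12, 1, 1)))  # 0.4 exactly (= 8/20)
```

The symmetry $\Pr(x_q \lt y_r) = 1 - \Pr(y_r \lt x_q)$ holds as `prob_less(n, m, q, r) + prob_less(m, n, r, q) == 1`.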
Application
Suppose the two batches have sizes $n=8$ and $m=12$. The relevant order statistics for $x$ and $y$ are $1,3,5,7,8$ and $1,3,6,9,12,$ respectively. Here is a table of the chance that $x_q \lt y_r$ with $q$ indexing the rows and $r$ indexing the columns:
q\r 1 3 6 9 12
1 0.4 0.807 0.9762 0.9987 1.
3 0.0491 0.2962 0.7404 0.9601 0.9993
5 0.0036 0.0521 0.325 0.7492 0.9856
7 0.0001 0.0032 0.0542 0.3065 0.8526
8 0. 0.0004 0.0102 0.1022 0.6
A simulation of 10,000 iid sample pairs from a standard Normal distribution gave results close to these.
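That simulation is easy to reproduce; here is a sketch in Python (helper name mine). By the probability-transform argument any continuous $F$ gives the same answer, so uniform draws suffice.

```python
import random

def estimate_prob(n, m, q, r, reps=20000, seed=1):
    """Monte Carlo estimate of Pr(x_(q) < y_(r)) under a common continuous F."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        x = sorted(rng.random() for _ in range(n))
        y = sorted(rng.random() for _ in range(m))
        hits += x[q - 1] < y[r - 1]
    return hits / reps

# Should land near the tabulated 0.4 for (q, r) = (1, 1) with n=8, m=12.
print(estimate_prob(8, 12, 1, 1))
```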
To construct a one-sided test at size $\alpha,$ such as $\alpha = 5\%,$ to determine whether the $x$ batch is significantly less than the $y$ batch, look for values in this table close to or just under $\alpha$. Good choices are at $(q,r)=(3,1),$ where the chance is $0.0491,$ at $(5,3)$ with a chance of $0.0521$, and at $(7,6)$ with a chance of $0.0542.$ Which one to use depends on your thoughts about the alternative hypothesis. For instance, the $(3,1)$ test compares the lower hinge of $x$ to the smallest value of $y$ and finds a significant difference when that lower hinge is the smaller one. This test is sensitive to an extreme value of $y$; if there is some concern about outlying data, this might be a risky test to choose. On the other hand the test $(7,6)$ compares the upper hinge of $x$ to the median of $y$. This one is very robust to outlying values in the $y$ batch and moderately robust to outliers in $x$. However, it compares middle values of $x$ to middle values of $y$. Although this is probably a good comparison to make, it will not detect differences in the distributions that occur only in either tail.
Being able to compute these critical values analytically helps in selecting a test. Once one (or several) tests are identified, their power to detect changes is probably best evaluated through simulation. The power will depend heavily on how the distributions differ. To get a sense of whether these tests have any power at all, I conducted the $(5,3)$ test with the $y_j$ drawn iid from a Normal$(1,1)$ distribution: that is, its median was shifted by one standard deviation. In a simulation the test was significant $54.4\%$ of the time: that is appreciable power for datasets this small.
Much more can be said, but all of it is routine stuff about conducting two-sided tests, how to assess effect sizes, and so on. The principal point has been demonstrated: given the $5$-letter summaries (and sizes) of two batches of data, it is possible to construct reasonably powerful non-parametric tests to detect differences in their underlying populations and in many cases we might even have several choices of test to select from. The theory developed here has a broader application to comparing two populations by means of appropriately selected order statistics from their samples (not just those approximating the letter summaries).
These results have other useful applications. For instance, a boxplot is a graphical depiction of a $5$-letter summary. Thus, along with knowledge of the sample size shown by a boxplot, we have available a number of simple tests (based on comparing parts of one box and whisker to another one) to assess the significance of visually apparent differences in those plots.
Statistical test for two distributions where only 5-number summary is known?
I'm pretty confident there isn't going to be one already in the literature, but if you seek a nonparametric test, it would have to be under the assumption of continuity of the underlying variable --- you could look at something like an ECDF-type statistic - say some equivalent to a Kolmogorov-Smirnov-type statistic or something akin to an Anderson-Darling statistic (though of course the distribution of the statistic will be very different in this case).
The distribution for small samples will depend on the precise definitions of the quantiles used in the five number summary.
Consider, for example, the default quartiles and extreme values in R (n=10):
> summary(x)[-4]
Min. 1st Qu. Median 3rd Qu. Max.
-2.33500 -0.26450 0.07787 0.33740 0.94770
compared to those generated by its command for the five number summary:
> fivenum(x)
[1] -2.33458172 -0.34739104 0.07786866 0.38008143 0.94774213
Note that the upper and lower quartiles differ from the corresponding hinges in the fivenum command.
By contrast, at n=9 the two results are identical (when they all occur at observations).
(R comes with nine different definitions for quantiles.)
The case for all three quartiles occurring at observations (when $n=4k+1$, I believe, possibly in more cases under some definitions of them) might actually be doable algebraically and should be nonparametric, but the general case (across many definitions) may not be so doable, and may not be nonparametric (consider the case where you're averaging observations to produce quantiles in at least one of the samples ... in that case the probabilities of different arrangements of sample quantiles may no longer be unaffected by the distribution of the data).
Once a fixed definition is chosen, simulation would seem to be the way to proceed.
Because it will be nonparametric at a subset of possible values of $n$, the fact that it's no longer distribution free for other values may not be such a big concern; one might say nearly distribution free at intermediate sample sizes, at least if $n$'s are not too small.
Let's look at some cases that should be distribution free, and consider some small sample sizes. Say a KS-type statistic applied directly to the five number summary itself, for sample sizes where the five number summary values will be individual order statistics.
Note that this doesn't really 'emulate' the K-S test exactly, since the jumps in the tail are too large compared to the KS, for example. On the other hand, it's not easy to assert that the jumps at the summary values should be for all the values between them. Different sets of weights/jumps will have different type-I error characteristics and different power characteristics and I am not sure what is best to choose (choosing slightly different from equal values could help get a finer set of significance levels, though). My purpose, then, is simply to show that the general approach may be feasible, not to recommend any specific procedure. An arbitrary set of weights to each value in the summary will still give a nonparametric test, as long as they're not taken with reference to the data.
Anyway, here goes:
Finding the null distribution/critical values via simulation
At n=5 and 5 in the two samples, we needn't do anything special - that's a straight KS test.
At n=9 and 9, we can do uniform simulation:
ks9.9 <- replicate(10000, ks.test(fivenum(runif(9)),
fivenum(runif(9)))$statistic)
plot(table(ks9.9)/10000,type="h"); abline(h=0,col=8)
# Here's the empirical cdf:
cumsum(table(ks9.9)/10000)
0.2 0.4 0.6 0.8
0.3730 0.9092 0.9966 1.0000
so at $n_1 = n_2=9$, you can get roughly $\alpha=0.1$ ($D_{crit}=0.6$), and roughly $\alpha=0.005$ ($D_{crit}=0.8$). (We shouldn't expect nice alpha steps. When the $n$'s are moderately large we should expect not to have anything but very big or very tiny choices for $\alpha$).
$n_1 = 9, n_2=13$ has a nice near-5% significance level ($D=0.6$)
$n_1 = n_2=13$ has a nice near-2.5% significance level ($D=0.6$)
At sample sizes near these, this approach should be feasible, but if both $n$s are much above 21 ($\alpha \approx 0.2$ and $\alpha\approx 0.001$), this won't work well at all.
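For readers without R, the same null simulation can be sketched in Python (helper names mine). At $n=9$ the Tukey five-number summary is simply the order statistics $x_{(1)}, x_{(3)}, x_{(5)}, x_{(7)}, x_{(9)}$, which keeps the sketch self-contained.

```python
import random

def fivenum9(values):
    """Five-number summary for n=9: order statistics 1, 3, 5, 7, 9."""
    s = sorted(values)
    return [s[0], s[2], s[4], s[6], s[8]]

def ks_stat(a, b):
    """Two-sample KS distance between the ECDFs of two small batches."""
    a, b = sorted(a), sorted(b)
    i = j = d = 0
    while i < len(a) and j < len(b):
        if a[i] < b[j]:
            i += 1
        else:
            j += 1
        d = max(d, abs(i / len(a) - j / len(b)))
    return d

def null_tail(reps=20000, seed=3):
    """Estimate Pr(D >= 0.6) under the null for two batches of size 9."""
    rng = random.Random(seed)
    big = sum(
        ks_stat(fivenum9([rng.random() for _ in range(9)]),
                fivenum9([rng.random() for _ in range(9)])) > 0.59
        for _ in range(reps))
    return big / reps

print(null_tail())  # should be near the 0.09 tail probability quoted above
```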
--
A very fast 'by inspection' test
We see a rejection rule of $D\geq 0.6$ coming up often in the cases we looked at. What sample arrangements lead to that? I think the following two cases:
(i) When the whole of one sample is on one side of the other group's median.
(ii) When the boxes (the range covered by the quartiles) don't overlap.
So there's a nice super-simple nonparametric rejection rule for you -- but it usually won't be at a 'nice' significance level unless the sample sizes aren't too far from 9-13.
Getting a finer set of possible $\alpha$ levels
Anyway, producing tables for similar cases should be relatively straightforward. At medium to large $n$, this test will only have very small possible $\alpha$ levels (or very large) and won't be of practical use except for cases where the difference is obvious.
Interestingly, one approach to increasing the achievable $\alpha$ levels would be to set the jumps in the 'fivenum' cdf according to a Golomb-ruler. If the cdf values were $0,\frac{1}{11},\frac{4}{11},\frac{9}{11}$ and $1$, for example, then the difference between any pair of cdf-values would be different from any other pair. It might be worth seeing if that has much effect on power (my guess: probably not a lot).
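As a quick check that the jump positions $\{0, 1, 4, 9, 11\}$ (in elevenths) really form a Golomb ruler, i.e. that all pairwise differences are distinct:

```python
from itertools import combinations

marks = [0, 1, 4, 9, 11]  # cdf values in units of 1/11
diffs = [b - a for a, b in combinations(marks, 2)]
print(sorted(diffs), len(diffs) == len(set(diffs)))
# [1, 2, 3, 4, 5, 7, 8, 9, 10, 11] True
```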
Compared to these K-S like tests, I'd expect something more like an Anderson-Darling to be more powerful, but the question is how to weight for this five-number summary case. I imagine that can be tackled, but I'm not sure the extent to which it's worth it.
Power
Let's see how it goes on picking up a difference at $n_1=9,n_2=13$. This is a power curve for normal data, and the effect, del, is in number of standard deviations the second sample is shifted up:
This seems like quite a plausible power curve. So it seems to work okay at least at these small sample sizes.
What about robust, rather than nonparametric?
If nonparametric tests aren't so crucial, but robust-tests are instead okay, we could instead look at some more direct comparison of the three quartile values in the summary, such as an interval for the median based off the IQR and the sample size (based off some nominal distribution around which robustness is desired, such as the normal -- this is the reasoning behind notched box plots, for example). This should tend to work much better at large sample sizes than the nonparametric test which will suffer from lack of appropriate significance levels.
Statistical test for two distributions where only 5-number summary is known?
I don't see how there could be such a test, at least without some assumptions.
You can have two different distributions that have the same 5 number summary:
Here is a trivial example, where I change only 2 numbers, but clearly more numbers could be changed
set.seed(123)
#Create data
x <- rnorm(1000)
#Modify it without changing 5 number summary
x2 <- sort(x)
x2[100] <- x2[100] - 1  # still between the min and the 1st quartile
x2[900] <- x2[900] + 1  # still between the 3rd quartile and the max
fivenum(x)
fivenum(x2)
Regularized bayesian logistic regression in JAGS
Since L1 regularization is equivalent to a Laplace (double exponential) prior on the relevant coefficients, you can do it as follows. Here I have three independent variables x1, x2, and x3, and y is the binary target variable. Selection of the regularization parameter $\lambda$ is done here by putting a hyperprior on it, in this case just uniform over a good-sized range.
model {
# Likelihood
for (i in 1:N) {
y[i] ~ dbern(p[i])
logit(p[i]) <- b0 + b[1]*x1[i] + b[2]*x2[i] + b[3]*x3[i]
}
# Prior on constant term
b0 ~ dnorm(0,0.1)
# L1 regularization == a Laplace (double exponential) prior
for (j in 1:3) {
b[j] ~ ddexp(0, lambda)
}
lambda ~ dunif(0.001,10)
# Alternatively, specify lambda via lambda <- 1 or some such
}
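The equivalence invoked above (L1 penalty $\equiv$ Laplace prior with density $\frac{\lambda}{2}e^{-\lambda|b|}$) is easy to verify numerically. This Python sketch (function names and toy data are mine) shows that the negative log posterior and the L1-penalized logistic loss differ only by a constant, so for a fixed $\lambda$ they share the same minimizer:

```python
import math

def nll(b0, b, X, y):
    """Negative Bernoulli log-likelihood for a logistic model."""
    total = 0.0
    for xi, yi in zip(X, y):
        z = b0 + sum(bj * xj for bj, xj in zip(b, xi))
        total -= yi * z - math.log(1.0 + math.exp(z))
    return total

def neg_log_posterior(b0, b, X, y, lam):
    """nll plus -log of a Laplace(0, rate=lam) prior on each slope."""
    return nll(b0, b, X, y) + sum(lam * abs(bj) - math.log(lam / 2.0) for bj in b)

def l1_loss(b0, b, X, y, lam):
    """The lasso-style penalized logistic loss."""
    return nll(b0, b, X, y) + lam * sum(abs(bj) for bj in b)

X = [(1.0, -2.0), (0.5, 0.3), (-1.2, 0.7)]  # toy data
y = [1, 0, 1]
lam = 3.0
d1 = neg_log_posterior(0.2, [0.1, -0.4], X, y, lam) - l1_loss(0.2, [0.1, -0.4], X, y, lam)
d2 = neg_log_posterior(-1.0, [2.0, 0.5], X, y, lam) - l1_loss(-1.0, [2.0, 0.5], X, y, lam)
print(d1, d2)  # equal (up to floating point): the objectives differ by -2*log(lam/2)
```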
Let's try it out using the dclone package in R!
library(dclone)
x1 <- rnorm(100)
x2 <- rnorm(100)
x3 <- rnorm(100)
prob <- exp(x1+x2+x3) / (1+exp(x1+x2+x3))
y <- rbinom(100, 1, prob)
data.list <- list(
y = y,
x1 = x1, x2 = x2, x3 = x3,
N = length(y)
)
params = c("b0", "b", "lambda")
temp <- jags.fit(data.list,
params=params,
model="modela.jags",  # the model block above, saved to this file
n.chains=3,
n.adapt=1000,
n.update=1000,
thin=10,
n.iter=10000)
And here are the results, compared to an unregularized logistic regression:
> summary(temp)
<< blah, blah, blah >>
1. Empirical mean and standard deviation for each variable,
plus standard error of the mean:
Mean SD Naive SE Time-series SE
b[1] 1.21064 0.3279 0.005987 0.005641
b[2] 0.64730 0.3192 0.005827 0.006014
b[3] 1.25340 0.3217 0.005873 0.006357
b0 0.03313 0.2497 0.004558 0.005580
lambda 1.34334 0.7851 0.014333 0.014999
2. Quantiles for each variable: << deleted to save space >>
> summary(glm(y~x1+x2+x3, family="binomial"))
<< blah, blah, blah >>
Coefficients:
Estimate Std. Error z value Pr(>|z|)
(Intercept) 0.02784 0.25832 0.108 0.9142
x1 1.34955 0.32845 4.109 3.98e-05 ***
x2 0.78031 0.32191 2.424 0.0154 *
x3 1.39065 0.32863 4.232 2.32e-05 ***
---
Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
<< more stuff deleted to save space >>
And we can see that the three b parameters have indeed been shrunk towards zero.
I don't know much about priors for the hyperparameter of the Laplace distribution / the regularization parameter, I'm sorry to say. I tend to use uniform distributions and look at the posterior to see if it looks reasonably well-behaved, e.g., not piled up near an endpoint and pretty much peaked in the middle w/o horrible skewness problems. So far, that's typically been the case. Treating it as a variance parameter and using the recommendation(s) by Gelman in "Prior distributions for variance parameters in hierarchical models" works for me, too.
Interpretation of incidence-rate ratios
Ah, the incident rate ratio, my old friend.
You're correct. If we have a 0/1 variable, an IRR of 0.7 means that those with X = 1 will have 0.7 times the incident events as those with X = 0. If you want the actual number of predicted counts, you'll have to back-track to the unexponentiated model coefficients. Then your expected cases would be:
counts = exp(B0 + B1*X), where B0 is the intercept term, B1 is the coefficient for your variable (equal in this example to ln(0.7), roughly -0.3567) and X is the value of X for whatever group you're trying to calculate this for. I find that's occasionally a useful sanity check to make sure I haven't done something horribly wrong in the model itself.
If you're more familiar with Hazard Ratios from other areas of survival analysis, note that an incidence rate ratio is a hazard ratio, just with a very particular set of assumptions to it - that the hazard is both proportional and constant. It can be interpreted the same way.
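That sanity check can be sketched in a few lines (hypothetical coefficient values; with a 0/1 covariate the ratio of expected counts is exactly exp(B1)):

```python
import math

b0, b1 = 1.5, math.log(0.7)        # hypothetical count-model coefficients
counts_x0 = math.exp(b0 + b1 * 0)  # expected count when X = 0
counts_x1 = math.exp(b0 + b1 * 1)  # expected count when X = 1
irr = counts_x1 / counts_x0
print(irr)  # 0.7 (up to floating point)
```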
Interpretation of incidence-rate ratios
Yes, that sounds about right: to be precise, the expected count is multiplied by a factor of .7 when the independent variable increases by one unit.
The term "incidence rate ratio" assumes that you're fitting a model with an exposure() (offset) term as well, typically specifying the time each unit was observed for, in which case instead of expected counts you have expected counts per unit time, i.e. rates. Calling them incidence rates is terminology from epidemiology.
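In that offset formulation the linear predictor includes log(exposure), so (a hypothetical sketch, coefficient values mine) doubling the observation time doubles the expected count while the rate, and hence the rate ratio, is unchanged:

```python
import math

def expected_count(exposure, b0, b1, x):
    """Count model with log link and log-exposure offset."""
    return math.exp(math.log(exposure) + b0 + b1 * x)

b0, b1 = -2.0, math.log(0.7)  # hypothetical coefficients
c1 = expected_count(1.0, b0, b1, x=1)
c2 = expected_count(2.0, b0, b1, x=1)
print(c2 / c1)  # 2: counts scale with exposure; the rate exp(b0 + b1*x) does not
```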
Logic behind the ANOVA F-test in simple linear regression
In the simplest case, when you have only one predictor (simple regression), say $X_1$, the $F$-test tells you whether including $X_1$ does explain a larger part of the variance observed in $Y$ compared to the null model (intercept only). The idea is then to test if the added explained variance (total variance, TSS, minus residual variance, RSS) is large enough to be considered as a "significant quantity". We are here comparing a model with one predictor, or explanatory variable, to a baseline which is just "noise" (nothing except the grand mean).
Likewise, you can compute an $F$ statistic in a multiple regression setting: In this case, it amounts to a test of all predictors included in the model, which under the HT framework means that we wonder whether any of them is useful in predicting the response variable. This is the reason why you may encounter situations where the $F$-test for the whole model is significant whereas some of the $t$ or $z$-tests associated to each regression coefficient are not.
The $F$ statistic looks like
$$ F = \frac{(\text{TSS}-\text{RSS})/(p-1)}{\text{RSS}/(n-p)},$$
where $p$ is the number of model parameters and $n$ the number of observations. This quantity should be referred to an $F_{p-1,n-p}$ distribution for a critical or $p$-value. It applies for the simple regression model as well, and obviously bears some analogy with the classical ANOVA framework.
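The formula can be checked numerically. Below is a small pure-Python sketch (arbitrary illustrative data, not taken from the answer) that fits a simple regression by least squares and computes $F$ from the sums of squares:

```python
# Simple regression: fit y = a + b*x by least squares, then compute the
# overall F statistic F = ((TSS - RSS)/(p - 1)) / (RSS/(n - p)) with p = 2.
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)

mx = sum(x) / n
my = sum(y) / n
sxx = sum((xi - mx) ** 2 for xi in x)
sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
b = sxy / sxx          # slope
a = my - b * mx        # intercept

tss = sum((yi - my) ** 2 for yi in y)                         # total SS (null model)
rss = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))   # residual SS
p = 2                                                         # intercept + slope
F = ((tss - rss) / (p - 1)) / (rss / (n - p))
print(F)  # 4.5 for these data
```

Referring 4.5 to an $F_{1,3}$ distribution gives the p-value, exactly what anova(lm(y ~ x)) reports in R.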
Sidenote.
When you have more than one predictor, you may wonder whether considering only a subset of those predictors "reduces" the quality of the model fit. This corresponds to a situation where we consider nested models. This is exactly the same situation as above, where we compare a given regression model with a null model (no predictors included). In order to assess the reduction in explained variance, we can compare the residual sum of squares (RSS) from both models (that is, what is left unexplained once you account for the effect of the predictors present in the model). Let $\mathcal{M}_0$ and $\mathcal{M}_1$ denote the base model (with $p$ parameters) and a model with an additional predictor ($q=p+1$ parameters); then if $\text{RSS}_{\mathcal{M}_0}-\text{RSS}_{\mathcal{M}_1}$ is small, we would consider that the smaller model performs as well as the larger one. A good statistic to use is the ratio of such SS, $(\text{RSS}_{\mathcal{M}_0}-\text{RSS}_{\mathcal{M}_1})/\text{RSS}_{\mathcal{M}_1}$, weighted by the degrees of freedom ($q-p$ for the numerator and $n-q$ for the denominator). As already said, it can be shown that this quantity follows an $F$ (or Fisher-Snedecor) distribution with $q-p$ and $n-q$ degrees of freedom. If the observed $F$ is larger than the corresponding $F$ quantile at a given $\alpha$ (typically, $\alpha=0.05$), then we would conclude that the larger model does a "better job". (This by no means implies that the model is correct from a practical point of view!)
A generalization of the above idea is the likelihood ratio test.
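The nested-model comparison can be sketched generically as well (Python/NumPy rather than R, purely as an illustration; `nested_F` is a hypothetical helper name, and the data are the same toy numbers one might use for a simple regression, where the nested test against the intercept-only model reduces to the overall $F$):

```python
import numpy as np

def nested_F(X0, X1, y):
    """F test comparing nested OLS models (columns of X0 are a subset of X1)."""
    def rss(X):
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        r = y - X @ beta
        return float(r @ r)
    n, p, q = len(y), X0.shape[1], X1.shape[1]
    r0, r1 = rss(X0), rss(X1)
    # ((RSS_0 - RSS_1)/(q - p)) / (RSS_1/(n - q))
    return ((r0 - r1) / (q - p)) / (r1 / (n - q))

x = np.array([1., 2., 3., 4., 5.])
y = np.array([2., 4., 5., 4., 5.])
X0 = np.ones((5, 1))                   # intercept-only (null) model
X1 = np.column_stack([np.ones(5), x])  # intercept + slope
print(nested_F(X0, X1, y))  # 4.5: the overall F of the simple regression
```

This mirrors what anova(lm1, lm0) computes in the R snippet below.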
If you are using R, you can play with the above concepts like this:
df <- transform(X <- as.data.frame(replicate(2, rnorm(100))),
y = V1+V2+rnorm(100))
## simple regression
anova(lm(y ~ V1, df)) # "ANOVA view"
summary(lm(y ~ V1, df)) # "Regression view"
## multiple regression
summary(lm0 <- lm(y ~ ., df))
lm1 <- update(lm0, . ~ . -V2) # reduced model
anova(lm1, lm0) # test of V2
14,800
Logit with ordinal independent variables
To add to @dmk38's response, "any set of scores gives a valid test, provided they are constructed without consulting the results of the experiment. If the set of scores is poor, in that it badly distorts a numerical scale that really does underlie the ordered classification, the test will not be sensitive. The scores should therefore embody the best insight available about the way in which the classification was constructed and used." (Cochran, 1954, cited by Agresti, 2002, pp. 88-89). In other words, treating an ordered factor as a numerically scored variable is merely a modelling issue. Provided it makes sense, this will only impact the way you interpret the result, and there is no definitive rule of thumb on how to choose the best representation for an ordinal variable.
Consider the following example on maternal Alcohol consumption and presence or absence of congenital malformation (Agresti, Categorical Data Analysis, Table 3.7 p.89):
            0    <1   1-2   3-5   6+
Absent  17066 14464   788   126   37
Present    48    38     5     1    1
In this particular case, we can model the outcome using logistic regression or a simple association table. Let's do it in R:
tab3.7 <- matrix(c(17066,48,14464,38,788,5,126,1,37,1), nr=2,
dimnames=list(c("Absent","Present"),
c("0","<1","1-2","3-5","6+")))
library(vcd)
assocstats(tab3.7)
Usual $\chi^2$ (12.08, p=0.016751) or LR (6.20, p=0.184562) statistic (with 4 df) do not account for the ordered levels in Alcohol consumption.
Treating both variables as ordinal with equally spaced scores (this has no impact for binary variables, like malformation, and we choose the baseline as 0=absent), we could test for a linear by linear association. Let's first construct an exploded version of this contingency table:
library(reshape)
tab3.7.df <- untable(data.frame(malform=gl(2,1,10,labels=0:1),
alcohol=gl(5,2,10,labels=colnames(tab3.7))),
c(tab3.7))
# xtabs(~malform+alcohol, tab3.7.df) # check
Then we can test for a linear association using
library(coin)
#lbl_test(as.table(tab3.7))
lbl_test(malform ~ alcohol, data=tab3.7.df)
which yields $\chi^2(1)=1.83$ with $p=0.1764$. Note that this statistic is simply the correlation between the two series of scores (that Agresti called $M^2=(n-1)r^2$), which is readily computed as
cor(sapply(tab3.7.df, as.numeric))[1,2]^2*(32574-1)
As can be seen, there is not much evidence of a clear association between the two variables. As done by Agresti, if we choose to recode the Alcohol levels as {0,0.5,1.5,4,7}, that is, using mid-range values for a hypothesized continuous scale (the last score being somewhat arbitrary), then we would conclude that maternal Alcohol consumption has a larger effect on the development of congenital malformation:
lbl_test(malform ~ alcohol, data=tab3.7.df,
scores=list(alcohol=c(0,0.5,1.5,4,7)))
yields a test statistic of 6.57 with an associated p-value of 0.01037.
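To see where both statistics come from, here is a small pure-Python recomputation of $M^2=(n-1)r^2$ directly from the table counts and the chosen scores (an illustrative sketch, not part of the original answer; `lin_by_lin` is a hypothetical name):

```python
def lin_by_lin(tab, row_scores, col_scores):
    """Linear-by-linear association statistic M^2 = (n - 1) * r^2."""
    n = sx = sy = sxx = syy = sxy = 0.0
    for u, row in zip(row_scores, tab):
        for v, count in zip(col_scores, row):
            n += count
            sx += count * u
            sy += count * v
            sxx += count * u * u
            syy += count * v * v
            sxy += count * u * v
    r_num = sxy - sx * sy / n
    r_den = ((sxx - sx ** 2 / n) * (syy - sy ** 2 / n)) ** 0.5
    r = r_num / r_den
    return (n - 1) * r * r

# Malformation (rows: absent, present) by Alcohol (columns), as in Table 3.7
tab = [[17066, 14464, 788, 126, 37],
       [48, 38, 5, 1, 1]]

print(round(lin_by_lin(tab, [0, 1], [0, 1, 2, 3, 4]), 2))      # 1.83 (equally spaced)
print(round(lin_by_lin(tab, [0, 1], [0, 0.5, 1.5, 4, 7]), 2))  # 6.57 (midrange scores)
```

Referring each value to a $\chi^2_1$ distribution reproduces the p-values 0.1764 and 0.01037 quoted above.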
There are alternative coding schemes, including midranks (in which case we fall back to Spearman's $\rho$ instead of Pearson's $r$), which are discussed by Agresti, but I hope you catch the general idea here: it is best to select scores that actually reflect a reasonable measure of the distance between adjacent categories of your ordinal variable, and equal spacing is often a good compromise (in the absence of theoretical justification).
Using the GLM approach, we would proceed as follows. But first check how Alcohol is encoded in R:
class(tab3.7.df$alcohol)
It is a simple unordered factor ("factor"), hence a nominal predictor. Now, here are three models where we consider Alcohol as a nominal, ordinal, or continuous predictor.
summary(mod1 <- glm(malform ~ alcohol, data=tab3.7.df,
family=binomial))
summary(mod2 <- glm(malform ~ ordered(alcohol), data=tab3.7.df,
family=binomial))
summary(mod3 <- glm(malform ~ as.numeric(alcohol), data=tab3.7.df,
family=binomial))
The last case implicitly assumes an equal-interval scale, and $\hat\beta$ is interpreted as @dmk38 did: it reflects the effect of a one-unit increase in Alcohol on the outcome through the logit link, that is, the odds of observing a malformation (compared to no malformation, i.e. the odds ratio) are multiplied by $\exp(\hat\beta)=\exp(0.228)=1.256$. The Wald test is not significant at the usual 5% level. In this case, the design matrix only includes 2 columns: the first is a constant column of 1's for the intercept, the second is the numerical value (1 to 5) of the predictor, as in a simple linear regression. In sum, this model tests for a linear effect of Alcohol on the outcome (on the logit scale).
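The back-transformation from the logit coefficient to the odds ratio is just an exponential (a quick check, in Python only for neutrality):

```python
import math

beta_hat = 0.228                 # estimated logit coefficient for Alcohol (mod3)
odds_ratio = math.exp(beta_hat)  # multiplicative change in the odds per unit increase
print(round(odds_ratio, 3))      # 1.256
```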
However, in the two other cases (mod1 and mod2), we get different output because the design matrix used to model the predictor differs, as can be checked by using:
model.matrix(mod1)
model.matrix(mod2)
We can see that the associated design matrix for mod1 includes dummy variables for the $k-1$ levels of Alcohol (0 is always the baseline) after the intercept term in the first column, whereas in the case of mod2 we have four columns of contrast-coded effects (after the column of 1's for the intercept). The coefficient for the category "3-5" is estimated at 1.03736 under mod1, but 0.01633 under mod2. Note that AIC and other likelihood-based measures remain, however, identical between these two models.
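The treatment (dummy) coding that model.matrix(mod1) produces for a nominal factor can be sketched in a few lines (Python illustration; `dummy_design` is a hypothetical helper, not an R or statsmodels function):

```python
def dummy_design(levels, values):
    """Treatment coding: intercept plus k-1 indicators, first level = baseline."""
    non_baseline = levels[1:]
    return [[1] + [1 if v == lev else 0 for lev in non_baseline] for v in values]

levels = ["0", "<1", "1-2", "3-5", "6+"]
# One design row per observation; the baseline level "0" gets all-zero dummies
print(dummy_design(levels, ["0", "<1", "1-2"]))
```

The ordered-factor model (mod2) instead uses orthogonal polynomial contrasts, which is why its individual coefficients differ from mod1's even though the overall fit (and AIC) is identical.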
You can try assigning new scores to Alcohol and see how it will impact the predicted probability of a malformation.